It is a commonplace that much of contemporary metaphysics is deeply bound up with the metaphysical modalities: metaphysical possibility and necessity. To take one central instance, the mind-body problem, in its most familiar contemporary form, appears as a problem about property identities, and it is hard to imagine discussing any issue about property identity without calling on the idea of metaphysical possibility. If we want to ask whether the property of being conscious, or being in pain, or having this sort of pain S, is identical with some physical or functional property P – say, the property of having such-and-such neurons firing in such-and-such a way – we typically begin by asking whether I could have had these neurons firing in this particular way, without experiencing S. And the ‘could’ here is the ‘could’ of metaphysical possibility.
As we all know, these questions about what could be the case – metaphysically could – are far from easy to answer. There are, it seems to me, two features of the notion of metaphysical possibility that combine to make them hard to settle, either negatively or positively. What makes them hard to settle negatively is that, because metaphysical possibility is supposed to be a kind of possibility distinct from physical possibility, styles of argument that work very well to show that various describable situations are not physically possible do not carry over to show that the same situations are not metaphysically possible. Most of us would agree that the standard correlations between brain and pain already give us excellent reasons for believing that it is not physically possible for there to be a perfect neurological duplicate of me who feels no pain at the dentist's.
This chapter is an attempt to understand the content of and motivation for a popular form of physicalism, which I call nonreductive physicalism. Nonreductive physicalism claims that although the mind is physical (in some sense), mental properties are nonetheless not identical to (or reducible to) physical properties. This suggests that mental properties are, in earlier terminology, emergent properties of physical entities. Yet many nonreductive physicalists have denied this. In what follows, I examine their denial, and I argue that on a plausible understanding of what ‘emergent’ means, the denial is indefensible: nonreductive physicalism is committed to mental properties being emergent properties. It follows that the problems for emergentism – especially the problems of mental causation – are also problems for nonreductive physicalism, and they are problems for the same reason.
The structure of the chapter is as follows. In the first section, I outline what I take to be essential to nonreductive physicalism. In the second section I attempt to clarify what is meant by ‘emergent’, and I argue that the notion of emergence is best understood in terms of the idea of emergent properties having causal powers that are independent of the causal powers of the objects from which they emerge. This idea, ‘downward causation,’ is examined in the third section. In the final section I draw the lessons of this discussion for the contemporary debate on the mind-body problem.
When we think of rules for conduct, we naturally think first of the moral or legal domains, of interpersonal rules, the topics of the previous and next chapters. But individuals sometimes adopt rules for their own conduct for prudential reasons, and I will address the need for such rules in this chapter. Robert Nozick has argued that intrapersonal or prudential rules play an essential role in persons' lives. His argument is reminiscent in significant ways of the strongest argument for interpersonal rules. But there are important disanalogies that will be brought to light in this chapter. They relate mainly to the ability to adopt strategies to optimize, as opposed to settling for the second-best strategy of genuine rules. In view of these disanalogies between the intrapersonal and interpersonal cases, I will again reach a mostly skeptical conclusion regarding the need for genuine prudential rules, this time with only one exception.
I will then consider two special cases of higher-level prudential rules that have been widely assumed to be sound, and I will again question this assumption. The first is a rule to optimize, or maximize, the satisfaction of our rational preferences. I will clarify the sense in which this indeed defines prudential rationality, despite recent denials in the literature, as well as the sense in which it is not an acceptable action-guiding rule. The second higher-level rule to be considered is a prudential requirement to be moral.
A reader might have predicted that this final chapter would be titled “Practical Reasoning without Rules.” But there are good reasons for focusing on moral reasoning here. As hinted in Section II of Chapter 2, I do not believe that much prudential reasoning actually occurs, at least not of the complex and interesting variety that will be the target of explication here. When faced with conflicting self-interested motivations and pondering what to do, we typically simply summon the various considerations before our minds and await recognition of their relative weights. At least this is what we do after critically informing our desires, if necessary, as to their origins and the consequences of acting on them. This description is similar to the particularist's account of moral reasoning, which I will challenge. Moral reasoning is instead similar in structure to legal reasoning, although the databases from which these two types of reasoning proceed are different. Not being a lawyer, I will concentrate on moral reasoning here for my examples.
In Chapter 1, I argued that ordinary moral reasoning does not consist in deducing particular prescriptions from rules. Rules cannot capture our ordinary moral judgments. The reason emphasized there was the complex ways that numerous morally relevant factors interact in various contexts, reversing priorities among them and sometimes their positive or negative values as well. There are no sufficient conditions that tell us, for example, when not lying is more important than not causing harm and when not causing harm is more important than not lying.
My main task in this first chapter is to determine when rules are required for moral reasoning and when they are not, when, indeed, they are better dispensed with. The mark of a genuine rule is universal prescription. Such rules tell us what to do, or at least how to reason, in all cases to which they apply. Genuine moral rules connect natural or nonmoral properties universally to moral prescriptions. They tell us what to do whenever certain situations occur, situations that can be identified without moral reasoning.
Such rules can be broad or narrow. Their scope or range of application is determined by the extensions of the terms in which they are stated. “Don't torture kittens” applies to all young cats and orders us not to inflict severe pain on them. If we know the meaning of the nonmoral term “kittens,” and if the term “torture” is used here to mean the infliction of severe pain (nonmoral terms), then we know what the rule unambiguously tells us to refrain from doing in all situations involving kittens.
Some expressions that appear to be rules, whose statements are universal or seem to apply to all things of a stated kind, can be reduced in practice (as they are actually used) to expressions without universal terms. Such expressions are not really universally prescriptive, although their form would suggest that they are.
In Chapter 1, I used as a paradigm context in which a strong rule is needed a judge's decision to enforce a bank's right to foreclosure despite unfortunate consequences in the individual case. The rule was supported by considering the cumulative effects that judges' acting on direct moral perceptions in such cases would have on financial institutions and on the practice of mortgage loans. I also suggested there that this sort of case could be generalized to a blanket duty of judges to defer to clear legal requirements even when they morally disagree with the outcomes of applying law in particular cases. Once again, while individual decisions on grounds of direct moral perceptions have minimal effects on the legal and political systems, the cumulative effects of allowing such decisions when opposed by law would nullify democratic institutions and must be avoided. Finally, I noted there that this fundamental moral rule for judges does not imply that the legal requirements to which they are to defer must themselves take the form of genuine rules.
Our questions for this chapter concern the extent to which legal norms do and should take this form. It is widely assumed, perhaps in contrast to the private moral sphere, that the need for uniformity and predictability in the legal system, and for limiting the power and discretion of those entrusted to enforce the law, means that legal requirements must typically be cast in the form of genuine rules.
“A rule's a rule!” How many of us have been infuriated by hearing these words from some bureaucrat across a desk or government counter? How many have thought such an attempt at justification more appropriate for a robot than a human being? How many take this pat response as a cue to request a supervisor or higher-up who can look through and beyond the rules? On the other side, how many of us have resented administrators who took it upon themselves to ignore rules and make exceptions that we thought unjustified? How many have condemned those who adopt or acknowledge rules only to ignore them later or consider themselves above them? Can both of these reactions be right? That the former reaction is more common begins to indicate that the application of rules is not the norm in sound moral reasoning, at least in difficult or controversial situations. That the latter response also occurs begins to indicate that there are nevertheless circumstances in which agents ought to obey rules even when they regret the outcomes and believe they could do better. The question we face here is “Which circumstances?”
Most philosophers have remained oblivious to this mundane phenomenology. Most, despite the warnings of Aristotle, and perhaps influenced by such hoary texts as the Ten Commandments, by suspect interpretations of Kant and naive versions of utilitarianism, or by a once dominant picture of theory construction in science, assume that moral reasoning always consists in the application of rules to particular cases.
If you intend to do something, does your intention constitute a reason for you to do that thing? To put the question briefly: Are intentions reasons? Many philosophers have argued they are, but in this essay I shall argue they are not.
First thoughts are on my side. The view that intentions are reasons is implausible. If you have no reason to do something, it is implausible that you can give yourself a reason just by forming the intention of doing it. How could you create a reason for yourself out of nothing? Suppose, say, that you have no reason either for or against doing some act, and you happen to decide to do it. Now you intend to do it. So now, if intentions are reasons, you have a reason to do it. Since you have no contrary reason not to do it, the balance of reasons is in favour of your doing it. You now actually ought to do it, therefore. But this is implausible. It is implausible that just deciding to do something can make it the case that you ought to do it, when previously that was not the case.
I shall call this ‘the bootstrapping objection’, in honour of Michael Bratman, who raises it in his Intention, Plans, and Practical Reason. The objection is that you cannot bootstrap a reason into existence from nowhere, just by forming an intention.
Take an example. Suppose you are wondering whether or not to visit Paris, but have not yet made up your mind.
Some, but not all, of our behavior deserves to be called ‘action’. We distinguish, among our doings in a broad sense, a special class of performances that are both doings and (therefore) ours in a richer and more demanding sense. Action is behavior that is rational, in the sense that the question of what reasons can be given for actions is always, at least in principle, in order. Actions are performances that are caught up in our practices of giving and asking for reasons as moves for which reasons can be proffered and sought. Although there may be much more to the concept of action than is captured in this characterization, the connection between action and reasons is sufficiently tight that one cannot count as understanding the concept of action (as even minimally mastering the use of that word and its cognates) unless one also counts as in the same sense understanding the concept of reasons (for action).
One specifies a potential reason for an action by associating with the performance a goal or an end: a kind of state of affairs at which one understands it as aiming, in the sense that its success or failure is to be assessed according to whether it does or does not bring about a state of affairs of that kind. Because this is the form of reasons for action, actions, as essentially performances for which reasons can be offered or demanded, are also essentially performances whose success or failure can be assessed.
This essay is an inquiry into the workings of two concepts, ‘practical disposition’ and ‘social practice’, as they enter, or might enter, into moral philosophy. Or rather, it is a fragment of such an inquiry. Its point of departure is a pair of familiar tendencies in moral philosophy, the tendencies we meet with in what might be called dispositional accounts of the rationality of morality and practice versions of utilitarianism. Of course, the concepts of a practice and a disposition enter into other types of moral theory, some of them perhaps intuitively more attractive than either of these. But the deployment of our concepts in these two lines of thought is, I think, uniquely clear and intelligible. A study of their workings here can therefore be expected to supply a general elucidation of the two concepts as they are properly understood in practical philosophy and thus potentially in quite different types of normative theory.
The concepts ‘practice’ and ‘disposition’ appear at first sight to be quite diverse: One looks to be a concept proper to social theory, the other perhaps to psychology. A comparative treatment will, I hope, tend to burn off this dross of associated ideas and reveal an underlying kinship at least in their specifically practical-philosophical use. But further, recognition of this kinship will in turn put us into contact with a more extensive class of practical concepts and with it a larger logical, metaphysical, and normative topic, namely, the role of a certain kind of generality in practice and practical thought.
Assume that human beings must cooperate to survive, and must do so extensively to flourish. Activities that require a variety of links between our actions are essential to the full range of human life. It follows that humans need to be able to describe and categorize actions and to attribute to one another motives: belief, desire, character. They need a psychology.
Cooperation without psychology is possible for other species, with hardwired social routines that tell them when to share, when to defer, and when to punish. We are innately social, but we do not have a fixed repertoire of social acts with fixed instructions about when to perform them. Instead, we have inescapable desires for company, affection, and attention from others and an inbuilt tendency to think out courses of action in terms of the relations we and others have to common features of the environment. That is our evolutionary niche: to operate in groups, but to think our way through the problems groups face. (For psychological and evolutionary evidence for this diagnosis, see Chapters 8 and 9 of Byrne [1995] and the first three chapters of Baron-Cohen [1995].) Each person thinks what to do, but must do so strategically, taking account of the decision-making of others. Strategic thinking is impossible without concepts to represent the paths of reasoning that lead from motives to acts and outcomes. (It need not use the concepts of “reasoning,” “motive,” “act,” “outcome,” and their friends, but it must use concepts that represent reasoning, motive, act, and outcome.) So it needs psychology.
Liberal contractarian political and moral theory, based in accounts of practical rationality, sets itself the task of identifying a reasonable set of conditions under which individual persons, conceived as having no necessary tie to one another, can form orderly societies for the sake of mutual benefit. Clearly, if individuals have no necessary tie with one another, there is no reason to expect that they will protect each other's interests. Nor is there any reason to suppose that one individual or group will gladly sacrifice its private good for the common weal, or for the well-being of another. And clearly, we cannot expect to have a stable, well-ordered society if each of us is prepared to do anything in her power to get what she wants, no matter what the cost to her fellows.
While it could turn out that every potential citizen in the desired commonwealth would just happen to be a lovely person who never would want a thing that would jeopardize others' interests, there is no reason to expect this happy turn of events, and, anyway, the aim of contractarian theory is to determine what rational social life would be like no matter how good-natured or ill-tempered the members of the society might be. The question for contractarian theory becomes, What kind of concessions should individuals make for the sake of cooperative social life?
The traditional theory of rational choice begins with a series of simple and compelling ideas. One acts rationally insofar as one acts effectively to achieve one's ends given one's beliefs. In order to do so, those ends and beliefs must satisfy certain simple and intuitively plausible conditions: For instance, the rational agent's ends must be ordered in a ranking that is both complete and transitive, and his or her beliefs must assign probabilities to states of affairs relevant to the achievement of those ends. The requirement of completeness ensures that all alternatives will be comparable; the transitivity condition ensures that the ranking contains no cycles, so that in each situation at least one alternative will be ranked at least as high as the others. If the completeness condition is violated, the agent will not always be able to compare alternatives and consequently to make a choice. If transitivity is violated, a situation may arise in which the agent will be unable to achieve his or her ends because for any alternative there will be another that will be preferred to it. On the belief side, there are similar requirements of completeness and consistency: An incomplete ordering of beliefs might recommend no action at all, and inconsistent beliefs might recommend incompatible courses of action. Provided that the constraints are satisfied, whenever the opportunity to make a decision presents itself, the rational agent will choose the course of action that will be most likely to achieve his or her ends.
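These two conditions on the preference ranking can be stated compactly in the standard notation of decision theory (a notation not used in the original text, writing $\succeq$ for the agent's weak preference relation over alternatives):

$$\text{Completeness:}\quad \forall x, y:\ x \succeq y \ \text{ or }\ y \succeq x$$
$$\text{Transitivity:}\quad \forall x, y, z:\ \text{if } x \succeq y \text{ and } y \succeq z, \text{ then } x \succeq z$$

Over any finite set of alternatives, a relation satisfying both conditions has at least one maximal element, which is what guarantees that the agent always has an eligible choice.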