In Chapter 1, my paradigm of a context in which a strong rule is needed was a judge's decision to enforce a bank's right to foreclose despite unfortunate consequences in the individual case. The rule was supported by considering the cumulative effects that judges' acting on direct moral perceptions in such cases would have on financial institutions and the practice of mortgage lending. I also suggested there that this sort of case could be generalized to a blanket duty of judges to defer to clear legal requirements even when they morally disagree with the outcomes of applying law in particular cases. Once again, while individual decisions on grounds of direct moral perceptions have minimal effects on the legal and political systems, the cumulative effects of allowing such decisions when opposed by law would nullify democratic institutions and must be avoided. Finally, I noted there that this fundamental moral rule for judges does not imply that the legal requirements to which they are to defer must themselves take the form of genuine rules.
Our questions for this chapter concern the extent to which legal norms do and should take this form. It is widely assumed, perhaps in contrast to the private moral sphere, that the need for uniformity and predictability in the legal system, and for limiting the power and discretion of those entrusted to enforce the law, means that legal requirements must typically be cast in the form of genuine rules.
OUTLINE OF THE TASK
My main task in this first chapter is to determine when rules are required for moral reasoning, when they are not, and when, indeed, they are better dispensed with. The mark of a genuine rule is universal prescription. Such rules tell us what to do, or at least how to reason, in all cases to which they apply. Genuine moral rules connect natural or nonmoral properties universally to moral prescriptions. They tell us what to do whenever certain situations occur, situations that can be identified without moral reasoning.
Such rules can be broad or narrow. Their scope or range of application is determined by the extensions of the terms in which they are stated. “Don't torture kittens” applies to all young cats and orders us not to inflict severe pain on them. If we know the meaning of the nonmoral term “kittens,” and if the term “torture” is used here to mean the infliction of severe pain (nonmoral terms), then we know what the rule unambiguously tells us to refrain from doing in all situations involving kittens.
Some expressions that appear to be rules, whose statements are universal or seem to apply to all things of a stated kind, can be reduced in practice (as they are actually used) to expressions without universal terms. Such expressions are not really universally prescriptive, although their form would suggest that they are.
A reader might have predicted that this final chapter would be titled “Practical Reasoning without Rules.” But there are good reasons for focusing on moral reasoning here. As hinted in Section II of Chapter 2, I do not believe that much prudential reasoning actually occurs, at least not of the complex and interesting variety that will be the target of explication here. When faced with conflicting self-interested motivations and pondering what to do, we typically simply summon the various considerations before our minds and await recognition of their relative weights. At least this is what we do after critically informing our desires, if necessary, as to their origins and the consequences of acting on them. This description is similar to the particularist's account of moral reasoning, which I will challenge. Moral reasoning is instead similar in structure to legal reasoning, although the databases from which these two types of reasoning proceed are different. Not being a lawyer, I will concentrate on moral reasoning here for my examples.
In Chapter 1, I argued that ordinary moral reasoning does not consist in deducing particular prescriptions from rules. Rules cannot capture our ordinary moral judgments. The reason emphasized there was the complex ways that numerous morally relevant factors interact in various contexts, reversing priorities among them and sometimes their positive or negative values as well. There are no sufficient conditions that tell us, for example, when not lying is more important than not causing harm and when not causing harm is more important.
When we think of rules for conduct, we naturally think first of the moral or legal domains, of interpersonal rules, the topics of the previous and next chapters. But individuals sometimes adopt rules for their own conduct for prudential reasons, and I will address the need for such rules in this chapter. Robert Nozick has argued that intrapersonal or prudential rules play an essential role in persons' lives. His argument is reminiscent in significant ways of the strongest argument for interpersonal rules. But there are important disanalogies that will be brought to light in this chapter. They relate mainly to the ability to adopt strategies to optimize, as opposed to settling for the second-best strategy of genuine rules. In view of these disanalogies between the intrapersonal and interpersonal cases, I will again reach a mostly skeptical conclusion regarding the need for genuine prudential rules, this time with only one exception.
I will then consider two special cases of higher-level prudential rules that have been widely assumed to be sound, and I will again question this assumption. The first is a rule to optimize, or maximize, the satisfaction of our rational preferences. I will clarify the sense in which this indeed defines prudential rationality, despite recent denials in the literature, as well as the sense in which it is not an acceptable action-guiding rule. The second higher-level rule to be considered is a prudential requirement to be moral.
“A rule's a rule!” How many of us have been infuriated by hearing these words from some bureaucrat across a desk or government counter? How many have thought such an attempt at justification more appropriate for a robot than a human being? How many take this pat response as a cue to request a supervisor or higher-up who can look through and beyond the rules? On the other side, how many of us have resented administrators who took it upon themselves to ignore rules and make exceptions that we thought unjustified? How many have condemned those who adopt or acknowledge rules only to ignore them later or consider themselves above them? Can both of these reactions be right? That the former reaction is more common begins to indicate that the application of rules is not the norm in sound moral reasoning, at least in difficult or controversial situations. That the latter response also occurs begins to indicate that there are nevertheless circumstances in which agents ought to obey rules even when they regret the outcomes and believe they could do better. The question we face here is “Which circumstances?”
Most philosophers have remained oblivious to this mundane phenomenology. Most, despite the warnings of Aristotle, and perhaps influenced by such hoary texts as the Ten Commandments, by suspect interpretations of Kant and naive versions of utilitarianism, or by a once dominant picture of theory construction in science, assume that moral reasoning always consists in the application of rules to particular cases.