
True and Useful: On the Structure of a Two Level Normative Theory

Published online by Cambridge University Press:  22 May 2012

FRED FELDMAN*
Affiliation:
University of Massachusetts at Amherst, ffeldman@philos.umass.edu

Abstract

Act-utilitarianism and other theories in normative ethics confront the implementability problem: normal human agents, with normal human epistemic abilities, lack the information needed to use those theories directly for the selection of actions. Two Level Theories have been offered in reply. The theoretical level component states alleged necessary and sufficient conditions for moral rightness. That component is supposed to be true, but is not intended for practical use. It gives an account of objective obligation. The practical level component is offered as an implementable system for the choice of actions by agents lacking some relevant information. It gives an account of subjective obligation. Several different ways of developing Two Levelism are explained and criticized. Five conditions that must be satisfied if the practical level principle is to be a good match for a given theoretical level principle are stated. A better form of Two Levelism is presented.

Type: Research Article
Copyright: © Cambridge University Press 2012


References

1 ‘There is therefore much truth in the description of the right act as a fortunate act. If we cannot be certain that it is right, it is our good fortune if the act we do is the right act’ (Ross, W. D., The Right and the Good (Oxford, 1930), p. 31).

2 Others have used other terminology here. Some have described the first thing as the principle of ‘objective obligation’ and the second thing as the principle of ‘subjective obligation’. Hare used the terms ‘critical level principle’ and ‘intuitive level principle’ in a closely related way. I have no objection to any terminology here. After all, they are just names.

3 I have defended a variant in which right acts are said to maximize ‘desert adjusted’ utility. For details, see Feldman, Fred, ‘Adjusting Utility for Justice’, Philosophy and Phenomenological Research 55 (1995), pp. 567–85.

4 The classic discussion of this point can be found in Moore's Principia Ethica (Cambridge, 1903, rev. edn., 1993), pp. 211–14 (sect. 99). See also Frazier, Robert, ‘Act Utilitarianism and Decision Procedures’, Utilitas 6 (1994), pp. 43–53. Bales, R. E., ‘Act-Utilitarianism: Account of Right-Making Characteristics or Decision-Making Procedure?’, American Philosophical Quarterly 8 (1971), pp. 257–65, contains a nice discussion of how the implementability problem arises for AU. Section 21 of Derek Parfit's On What Matters (Oxford, 2011) is devoted to a discussion of some closely related points.

5 Holly Smith states six ‘criteria of adequacy’ for an account of subjective rightness in her ‘Subjective Rightness’, Social Philosophy and Policy 27 (2010), pp. 64–110, at 72–3. The interested reader is encouraged to study and compare her list with the one given here.

6 Of course I can easily figure out, in an unhelpful way, what I should do. I can always say that I should maximize utility, or ‘do whatever would be best’. But these recommendations are practically unhelpful; I cannot directly make use of the recommendation. If given such a recommendation, I would of course agree, but then I would have to ask for further help: ‘But which of my alternatives in fact is the one that would maximize utility?’

7 There are several passages in Mason, E., ‘Consequentialism and the “Ought Implies Can” Principle’, American Philosophical Quarterly 40 (2003), pp. 319–31, in which Mason suggests that she means to defend an answer along these lines. In one place she says that when you don't know what you should do, you should try to maximize utility. She goes on to say: ‘An agent counts as trying to maximize utility when she does what she believes will maximize utility’ (p. 324).

8 Suppose we say that the agent's alternatives are ‘doing what will be best’ and ‘doing something that will be less than the best’. Then there is a pointless way in which he does know what he should do; he should do what will be best. But that act description is unhelpful; the agent does not know in practical terms what he is supposed to do in order to do what would be best.

9 There are passages in Mason, ‘Consequentialism’, in which she seems to endorse this approach, too. See, for example, p. 323 where she says: ‘whenever we are given an instruction like [maximize the good], we ought to figure out which course of action is most likely to fulfill the instruction, and pursue that course of action’.

10 This example was introduced in Jackson, Frank, ‘Decision-Theoretic Consequentialism and the Nearest and Dearest Objection’, Ethics 101 (1991), pp. 461–82, at 462–3. Donald Regan described a case that illustrates the same point in Regan, D., Utilitarianism and Co-operation (Oxford, 1980), pp. 264–5. Parfit's example involving the trapped miners in Parfit, On What Matters, sect. 21, seems to illustrate the same point.

11 Smith, ‘Subjective Rightness’, pp. 100–6.

12 This feature may be what motivated Smith to seek practical level principles that have outstandingly good success rates, rather than to seek practical level principles that have perfect success rates. It's also important to note that in the Dr Jill case just discussed, we expect the practical level principle to recommend a course of action (giving A) that is not recommended by the theoretical level principle. So we can't expect perfect extensional equivalence.

13 It would be interesting to compare this distinction to Hare's distinction between critical level thinking and intuitive level thinking in his formulation of a Two Level view in Hare, R. M., Moral Thinking: Its Levels, Method, and Point (Oxford, 1981).

14 It may appear that we can say that you have a subjective obligation to do something if you would have an objective obligation to do it if the world were precisely as you believe it to be. Parfit's concept of wrongness in the belief-relative sense as discussed in section 21 of his On What Matters seems to be like this. But this is problematic. Suppose your beliefs about the morally relevant features of your alternatives are ‘gappy’ – for some features, F, you neither believe that the situation has F nor that it lacks F. The world could not objectively be ‘gappy’ like that. Furthermore, it's hard to see how anything could be objectively obligatory in a situation in which the relevant alternatives were gappy with respect to a significant number of the properties that make for moral obligation.

15 I recognize that my statement of this condition is vague. In her discussion of her Criterion 4, Holly Smith is similarly vague. Like me, she sees some connection between subjective obligation and blameworthiness, but avoids committing herself to any fully precise principle. She says that the concept of subjective rightness ‘should bear appropriate relationships to assessments of whether the agent is blameworthy or praiseworthy for her act’ (Smith, ‘Subjective Rightness’, p. 73).

16 In Zimmerman, M., ‘Is Moral Obligation Objective or Subjective?’, Utilitas 18 (2006), pp. 329–61, at 329, Michael Zimmerman says: ‘conscientiousness precludes doing what one believes to be overall morally wrong’. I think this is not quite right. Dr Jill might believe that giving A is overall wrong; that giving B or C is overall right; but still she might think that under the circumstances it will be permissible for her to give A. If she refuses to do something she takes to be wrong, she runs the risk of doing something terrible. So, even though she is conscientious, she knowingly does something she takes to be overall morally wrong.

17 This is not to suggest that blame of all sorts will be avoided. It's easy to imagine cases in which a person deserves blame for being ignorant of certain important morally relevant facts. At an earlier time, we may suppose, he could have obtained the relevant information. Now it is unavailable. As a result he cannot determine his obligation₁, and falls back on a practical level principle. He does what is obligatory₂. The condition then says that the agent cannot be blamed for doing the obligatory₂ act; it allows that he might be blameworthy for failing to have learned the morally relevant facts at the earlier time.

18 I say that giving A ‘probably’ maximizes expected utility in this case. We can't tell for sure. Whether it does depends upon the details of the probabilities and values of the alternatives. In this example I have deliberately stipulated that Dr Jill does not have any beliefs about the precise numerical ratings either for the values or for the probabilities of the outcomes.
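To see why the verdict could go either way, consider a purely illustrative assignment of numbers; the figures and the notation below are supplied only for the sake of the calculation and are not part of the example itself. Write the expected utility of an alternative x as

\[ EU(x) = \sum_i \Pr(o_i \mid x)\, V(o_i), \]

where the o_i are the possible outcomes of x, Pr gives the agent's probabilities and V gives the values of the outcomes. If the complete cure is worth 100, the partial cure 60 and the patient's death −100, and if B and C are each as likely as not to be the lethal drug, then EU(A) = 60 while EU(B) = EU(C) = 0.5(100) + 0.5(−100) = 0, and giving A maximizes expected utility. If instead the partial cure is worth only 10 and death 0, then EU(B) = EU(C) = 0.5(100) + 0.5(0) = 50 > EU(A) = 10, and giving A does not.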

19 One of the most blatant violations of the helpfulness condition occurs in a popular proposal involving act-utilitarianism. Some assume that act-utilitarianism gives a fair account of our obligations₁. They go on to suggest that when you don't know what maximizes actual utility, and hence do not know what you ought₁ to do, you ought₂ to do what maximizes expected utility. As I argued in ‘Actual Utility, the Objection from Impracticality, and the Move to Expected Utility’, Philosophical Studies 129 (2006), pp. 49–79, while it is hard to know what maximizes regular utility, it is even harder to know what maximizes expected utility. As a result, telling a person that he ought₂ to do whatever maximizes expected utility will often be unhelpful; so this sort of answer violates the helpfulness condition.

20 Thus I am not defending a version of the Ideal Observer Theory (as typically understood). The Utilitarian Moral Guide understands AU; he is calm and methodical. But he does not have access to any factual information beyond that available to the perplexed agent.

21 Note that Step One does not require the agent to list all of her alternatives. There might be millions of them. In many cases it will be sufficient for the agent to consider whole groups of alternatives under suitable general descriptions. For example, suppose an agent has been asked to pick a number between one and one million. There is no need for her to consider picking one, picking two, picking three, etc. Since she will have no epistemic trouble in any case, she can describe her alternatives as a group by saying ‘I have to pick a number between one and one million.’ This will be sufficiently action guiding.

22 Step Two does not require that the agent consider each individual alternative separately. In the numbers case imagined in the previous footnote, she might simply consider that there is no number, n, such that her evidence gives her reason to suppose that picking n will yield more utility than picking any other number.
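One way to render that thought formally (the notation here is illustrative, not the article's own) is

\[ \neg \exists n \; R_E\big( \forall m \neq n : u(\text{picking } n) > u(\text{picking } m) \big), \]

where R_E(φ) abbreviates ‘her evidence gives her reason to suppose that φ’ and u assigns utilities to her alternatives. Because her evidence treats the million candidate numbers alike, no instance of the embedded comparison is supported, and checking this single quantified condition takes the place of considering each of the million alternatives separately.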

23 Ross, The Right and the Good, p. 41.

24 My formulation of Virtue Ethics derives from things that Daniel Doviak says in his ‘A New Form of Agent-Based Virtue Ethics’, Ethical Theory and Moral Practice 14.3 (2011), pp. 259–72.

25 I am grateful to several friends for helpful criticism and suggestions. I especially want to thank Kristian Olsen, Owen McLeod, Casey Knight, Chris Meacham, Pete Graham, Brad Skow and Michael Zimmerman. Papers by Holly Smith, Elinor Mason and Michael Zimmerman on this topic have also been very helpful. An ancestor of this article was presented as the Keynote Address at the annual meeting of the Creighton Club at Hobart and William Smith College in Geneva, NY on 13 November 2010. Another version was defended at a meeting of the New England Consequentialism Workshop at the Safra Center of Harvard University on 9 February 2011. Another version was presented at Huron University College in London, Ontario on 28 October 2011. This last version was also presented as the Keynote Address at the November 2011 meeting of the New Jersey Regional Philosophical Association. I am honoured to have had the invitations and I am very grateful to the participants in those discussions for many helpful comments.