In Chapter 17 of his magnificent Reasons and Persons, Derek Parfit asks what he describes as an ‘awesome question’: ‘How many people should there ever be?’ For a utilitarian like me, the answer seems simple: there should be however many people it takes to make the world best. Unfortunately, if I answer Parfit's awesome question in this way, I may sink myself in a quagmire of axiological confusion. In this paper, I first describe certain aspects of the quagmire. Then I introduce and explain some of the elements of a novel axiological view – ‘justicism’. Justicism is derived from some ideas originally suggested by Franz Brentano. It was developed in an effort to solve certain other problems confronting utilitarianism – problems explicitly about justice. I think, however, that as a sort of happy by-product, justicism also generates a plausible answer to Parfit's awesome question. This may come as a bit of a surprise, since justicism is a form of totalism, and it is widely thought that no totalistic theory can provide a satisfactory answer to Parfit's question. After presenting and explaining my proposed solution, I address some objections.
1 Parfit, Derek, Reasons and Persons, Oxford, 1984, p. 381. Parenthetical page references in what follows are to Parfit's book.
2 I try to show how this can be done in my ‘Adjusting Utility for Justice: A Consequentialist Reply to the Objections from Justice’, Philosophy and Phenomenological Research, lv (1995), forthcoming.
3 A number of controversial assumptions are implicit in T. Some of these concern the arithmetic of intrinsic value. For example, that the intrinsic value of a life can be represented by a number; that interpersonal comparisons are possible; that the difference between a good life and a bad life corresponds to the difference between a positive number and a negative one; etc. Furthermore, I am assuming that all the recipients of value at a world will be people. On many standard axiologies, dogs, cats, ponies, and other sentient beings may also be recipients of value. I make this assumption here primarily to simplify the puzzle. In a fuller account, it would be rejected. Another assumption behind T is that talk of the ‘sum’ of the values of the lives at a world will make sense. If there are infinitely many lives at a world, simple summation will not work. Thus, what I say here must be restricted to cases in which there are finitely many lives at each world. For a discussion of some possible ways to extend this sort of treatment to other cases, see Vallentyne, Peter and Kagan, Shelly, ‘Infinite Value and Finitely Additive Value Theory’, unpublished as of March 1995.
4 Imagine that a fifth god is present at the discussion. This one has other strengths and weaknesses. His weakness is that he can produce worlds with populations of at most one person. But he has a compensating strength. He is capable of making these people enjoy huge amounts of utility. Suppose he speaks up: ‘I can create a world in which there is just one person, but I can see to it that that one person enjoys slightly more than a billion billion units of whatever makes life worth living. So I can create a world far better than A, and slightly better than Z.’ Again, if we calculate the values of worlds by appeal to the impersonal total principle, it would appear that this fifth god is right. His world would be the best of the worlds so far considered. Again, this seems preposterous. A world containing just one lonely ‘utility monster’ does not appear to be supremely good.
5 Parfit, p. 388. In his dissertation, ‘Ethical Theory and Population’, University of Massachusetts, forthcoming, Kevin Moon points out that all the essential features of the puzzle of the Repugnant Conclusion appear in McTaggart, J. M. E., The Nature of Existence, Cambridge, 1927. McTaggart describes a pair of lives relevantly like those lived in A and Z, and points out that if the low quality life lasts sufficiently long, it would contain a total amount of value greater than that contained in the high quality life. He goes on to say that ‘This conclusion would, I believe, be repugnant to certain moralists’ (pp. 452–3, emphasis added).
6 Parfit discusses averagism in various places in chs. 18 and 19. He focuses especially on averagism in § 143, ‘Why We Ought to Reject the Average Principle’.
7 Hurka, T. M. discussed several forms of averagism in ‘Average Utilitarianisms’, Analysis, xlii (1982), 65–9, then discussed some more forms in ‘More Average Utilitarianisms’, Analysis, xlii (1982), 115–19.
8 See, for example, Hurka, Thomas, ‘Value and Population Size’, Ethics, xciii (1983), 496–507. Hurka's view was critically discussed by Hudson, James L., ‘The Diminishing Marginal Value of Happy People’, Philosophical Studies, li (1987), 123–37.
9 For a critical review of an important collection of papers on the Repugnant Conclusion and related issues, see McMahan, Jefferson, ‘Problems of Population Theory’, Ethics, xcii (1981), 96–127; McMahan's article is a review of Obligations to Future Generations, ed. Sikora, R. I. and Barry, Brian, Philadelphia, 1978. A much more comprehensive and up-to-date review of the literature can be found in Moon's ‘Ethical Theory and Population’.
10 Brentano, Franz, ‘Loving and Hating’, The Origin of Our Knowledge of Right and Wrong, ed. Kraus, Oskar, English edn. ed. Chisholm, Roderick, trans. Chisholm, Roderick and Schneewind, Elizabeth, London, 1969. Although Brentano does not cite the passage in which Aristotle draws the distinction, I suspect that he has in mind Nicomachean Ethics, V.1 (1129b). It seems to me that Aristotle's point is not quite what Brentano takes it to be.
11 Brentano, p. 149.
12 I have not been able to find this view in Brentano, but Moore, G. E. affirms something quite like it in Principia Ethica, Cambridge, 1962, in his discussion of retributive punishment. See pp. 214–16.
13 As I tried to show in ‘Adjusting Utility for Justice’.
14 For more extensive discussion of the concept of desert, see Feinberg, Joel, ‘Justice and Personal Desert’, Nomos VI: Justice, ed. Friedrich, C. J. and Chapman, J. W., New York, 1963, pp. 69–97; Kleinig, John, ‘The Concept of Desert’, American Philosophical Quarterly, viii (1971), 71–8; McLeod, Owen, ‘On Being Deserving’, dissertation, University of Massachusetts, forthcoming; Sadurski, Wojciech, Giving Desert Its Due: Social Justice and Legal Theory, Dordrecht, 1985; Sher, George, Desert, Princeton, 1987.
15 I discuss some temporal features of this sort of desert in my ‘Desert: Reconsideration of Some Received Wisdom’, Mind, civ (1995), 63–77. I try to show that it is possible for a person to deserve compensation at a time in virtue of injuries he has yet to receive.
16 In n. 38, on p. 310 of A Theory of Justice, Oxford, 1972, Rawls quotes a passage from ‘On the Ultimate Origination of Things’ in which Leibniz seems to say that moral worth is the sole basis on which a person might deserve anything.
17 For further discussion of views about the main desert bases, see Feinberg, McLeod, Sadurski, and Sher. For a comprehensive review of the literature on desert, see McLeod's bibliography.
18 I acknowledge that there are a number of technical problems concerning the summation of value. For example, how are we to avoid ‘double-counting’ when several episodes of pleasure ‘overlap’? How are we to find the sum in cases in which there are infinitely many episodes? These are serious and tricky problems that deserve careful attention.
19 Equivalently, one could say that the worldly value of a world is the sum of the worldly values of the episodes that occur there.
20 Principle PIV(w) serves to define this value, which I take to be of little importance.
21 In saying that each person starts off deserving +100, I commit myself to versions of two important and controversial moral doctrines. The first is egalitarianism. On the view I have proposed, people are equal with respect to their initial desert levels. Until we do evil, or suffer harms, or receive more than our fair shares, we are all alike with respect to desert. This is a version of egalitarianism. The second doctrine is a version of the thesis of moral considerability. According to this view, merely being a person gives each of us some ‘standing’ or ‘worth’. Under my interpretation, the impact of the thesis is that merely being human makes each of us deserve to enjoy some positive value.
22 Note that if a person receives substantially less good than he deserves, the value for the world of his receipt is negative. It might therefore appear that if a person is going to have such low receipt, someone should simply kill him, in order to prevent the evil from coming into the world. This line of thinking seems to me to be mistaken. The person is already in the world. If he is killed, he gets even less of the goods he deserves. The world would be better, not worse, if he were given the chance to receive his sub-par fraction of the good.
23 Near the end of ‘Problems of Population Theory’, McMahan sketches (and rejects) a possible solution to the Repugnant Conclusion based on the notion that everyone has a right to at least a certain minimum amount of value. There are obvious affinities between the view sketched by McMahan and the approach defended here.
24 It should be clear that the approach I have sketched also provides the basis for rebutting the claim of the fifth god, mentioned above in n. 4. In my discussion of Z, I appealed to the notion that it may be bad for people to receive far less good than they deserve. In order to deal with the world of the fifth god, I appeal to the notion that it may be less good for people to receive vastly more good than they deserve. Another look at Graph I may serve to make the view clearer. Notice that the curve begins to flatten out as it moves northeast of Point A. The idea here is that as a person receives more than he deserves, the world begins to benefit less. In other words, receipts beyond the amount deserved have decreasing marginal value for the world. This explains why it is so much better for some value to be split equally among some equally deserving people, rather than giving some much more than they deserve, and others less. The distribution of value is not a zero sum game, in part because excessive receipts have diminishing value for the world. Although Graph I does not show this, it would be reasonable to suppose that the curve gradually flattens out, and approaches some maximum value. If the curve has this shape, the implication is clear. Once we reach the region where some person is receiving vastly more good than he deserves, increasing receipts begin to have infinitesimal value for the world. Graph I explains where the fifth god goes wrong. He proposes to create a world in which one person receives slightly more than a billion billion units of value. This would undoubtedly be unimaginably good for the sole recipient of the value. However, the world thus created would not be very good. Depending upon the precise shape of the curve, it might be that the value of the world would be around +1000. All that personal value makes the world much better when it is distributed in smaller, equal amounts more closely approximating what people deserve.
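The qualitative shape just described can be made vivid with a toy formula. Everything below is illustrative only: the paper commits to the shape of Graph I (steep up to the deserved amount, concave and saturating beyond it), not to any particular function, and the function name, the desert level of 100, and the ceiling of 800 are all invented here for the sketch.

```python
import math

def world_value(receipt, desert=100.0, ceiling=800.0, scale=400.0):
    """A toy curve with the shape attributed to Graph I (constants invented)."""
    if receipt <= desert:
        # Below desert: value falls off quickly, going negative for
        # receipts far below what is deserved (cf. n. 22).
        return 2 * receipt - (desert - receipt)
    # Above desert: diminishing marginal value for the world,
    # saturating toward a ceiling of 2*desert + ceiling.
    return 2 * desert + ceiling * (1 - math.exp(-(receipt - desert) / scale))

print(world_value(100))       # 200.0 -- receiving exactly what one deserves
print(world_value(1e18))      # about 1000.0 -- the lone 'utility monster'
print(10 * world_value(100))  # 2000.0 -- ten people each at their desert level
```

On any curve of this shape, ten people each receiving exactly what they deserve make the world better than one person receiving a billion billion units, which is the point pressed against the fifth god.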
25 Parfit, p. 390. In the passage I cite, Parfit is actually talking about intergenerational justice, but it seems to me quite natural to extend his comment so as to make it apply to any sort of injustice.
26 Several friends have raised this objection. I believe the first of them was Neil Schaefer.
27 It might appear that even if people start out deserving +100, they could misbehave and thereby lower their desert levels to +1. Then they could receive +1 and enhance the value of the world. If there were enough people, it might seem that they could somehow manage to raise the value of the world to the point where it exceeds the value of the A world. Could this happen? If it could happen, would a world in which it happens be better than the A world? What does the proposed axiology say about such a world? These questions are far more complex than they may at first appear to be. In order to make them somewhat more manageable, we can view the issue from the perspective of a little game. The game represents (in a very simplified form) the value-theoretic problem. Can the players behave in such a way as to make the value of their world very high, even though each of them has low desert and low receipt?
Rules of the game:
Any number of people can play. The play progresses in rounds. In each round, each player is permitted to make one move. A player can either (a) inflict some amount of pain on someone; or (b) inflict some amount of pleasure on someone.
Scoring: Each player has two scores – a receipt level (RL) and a desert level (DL). At the beginning of play, each player's DL = +100. Furthermore, the world has a score, or IV level. At the outset IV = 0.
Changing your score: there are several ways in which you can change your score.
a. If you inflict n units of pleasure on someone who has positive DL, your DL goes up by n/2 units; his RL goes up by n units; his DL goes down by n units; and IV goes up by 2n units.
b. If you inflict n units of pleasure on someone who has negative DL, your DL remains constant; his RL goes up by n units; his DL remains constant; and IV goes down by n/2 units.
c. If you inflict n units of pain on someone who has positive DL, your DL goes down by n/2 units; his RL goes down by n units; his DL goes up by n units; and IV goes down by 2n units.
d. If you inflict n units of pain on someone who has negative DL, your DL remains constant; his RL goes down by n units; his DL goes up by n units; and IV remains constant.
I think it would be very costly for the world's IV for players in this game to try to get their DL down to +1. Here is one way they could do it: in the first round each player could inflict 198 units of pain on a deserving person. Then (rule c) each player's DL would go down by 198/2, or 99. Thus each player would have DL = +1. The cost to the world would be (2 × 198 × the number of players). Suppose there are 10 players. Then at the end of round 1, IV = –3,960. In round 2, each player could inflict 1 unit of pleasure on someone who deserves 1 unit of pleasure. In this case, IV would go up by (2 × 1 × 10), or 20 points. IV = –3,940.
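Readers who want to check the bookkeeping can run the rules mechanically. The sketch below encodes rules (a)–(d) and replays the two rounds; it assumes, as the text does, that each act is directed at a separate bystander whose DL is positive, and the function and variable names are mine, not the paper's.

```python
def inflict(actor_dl, victim_dl, n, kind):
    """Apply rules (a)-(d) for one act; return (actor_dl, victim_dl, change_in_IV)."""
    if kind == "pleasure":
        if victim_dl > 0:                               # rule (a)
            return actor_dl + n / 2, victim_dl - n, 2 * n
        return actor_dl, victim_dl, -n / 2              # rule (b)
    if victim_dl > 0:                                   # rule (c)
        return actor_dl - n / 2, victim_dl + n, -2 * n
    return actor_dl, victim_dl + n, 0                   # rule (d)

players = 10
iv = 0.0
dls = [100.0] * players

# Round 1: each player inflicts 198 units of pain on a deserving bystander.
for i in range(players):
    dls[i], _, d = inflict(dls[i], 100.0, 198, "pain")
    iv += d
print(dls[0], iv)   # 1.0 -3960.0  (each DL falls by 99; IV = -(2 x 198 x 10))

# Round 2: each player inflicts 1 unit of pleasure on someone with positive DL.
for i in range(players):
    _, _, d = inflict(dls[i], 100.0, 1, "pleasure")
    iv += d
print(iv)           # -3940.0  (IV rises by 2 x 1 x 10 = 20)
```

The simulation confirms the cost: getting every DL down to +1 this way leaves the world's IV deep in the negative.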
At the end of round 2, each player would have DL = 0. Inflicting pleasure on others will be less valuable for the world in this case; the rules above do not cover it, but let us say it is worth n/2 for the world. If it makes their DLs go negative, then further inflictions of pleasure will start to make IV get smaller (rule b).
There is another way in which players could reduce their DLs to +1. They could enjoy deserved pleasures. Let us suppose that each player starts out with DL = +100, and then enjoys +99. Then (rule a) each player's DL goes down to +1. At that point, each player could receive +1. My theory then implies that IV is extremely high. (Suppressing certain complexities, we can see that each player started out deserving +100, and received +100. If there are a billion billion players, IV = two hundred billion billion.) However, it seems quite clear to me that the imagined world would in fact be extraordinarily good, and so I am delighted that the axiology gives it such a high IV.
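The arithmetic behind that figure is easy to reproduce. On rule (a), each deserved unit of pleasure received while DL is positive adds 2 to IV, so each player contributes 2 × 99 + 2 × 1 = 200; with a billion billion players the total is two hundred billion billion. A two-line check (variable names mine):

```python
per_player = 2 * 99 + 2 * 1   # rule (a): IV rises by 2n for deserved receipts
print(per_player * 10**18)    # 200000000000000000000, two hundred billion billion
```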
Moral of this story: it is hard to see how there can be a world that strikes our intuitions as being very bad, in which each resident deserves +1 and receives +1, and IV is extremely high.
* I am grateful to David Cowles, Richard Feldman, Neil Feit, Noah Lemos, Ned Markosian, Jeff McMahan, Kevin Moon, Neil Schaefer, Ted Sider, and Michael Zimmerman for generous comments and criticisms of earlier drafts. I am especially grateful to Peter Vallentyne, both for his comments on this paper at the Greensboro conference, and for other extensive and valuable discussion. Another version of this paper was presented at a conference at the University of Illinois at Champaign-Urbana in May, 1995.