
How Much is Rule-Consequentialism Really Willing to Give Up to Save the Future of Humanity?

Published online by Cambridge University Press:  24 November 2016

PATRICK KACZMAREK*
Affiliation:
University of Glasgow
s1267906@ed-alumni.net

Abstract

Brad Hooker argues that the cost of inculcating in everyone the 'prevent disaster' rule places a limit on its demandingness. My aim in this article is to show that this is not true of existential risk reduction. However, this does not spell trouble, because removing persistent global harms significantly improves our long-run chances of survival. We can expect things to get better, not worse, for our population.

Type
Research Article
Copyright
Copyright © Cambridge University Press 2016 


References

1 Hooker, B., Ideal Code, Real World: A Rule-Consequentialist Theory of Morality (Oxford, 2000), p. 32.

2 Hooker, Ideal Code, p. 98.

3 Hooker, Ideal Code, p. 169.

4 Hooker, Ideal Code, p. 165.

5 Hooker, Ideal Code, pp. 170–1.

6 Hooker, B., ‘Rule-Consequentialism, Incoherence, Fairness’, Proceedings of the Aristotelian Society 95 (1995), pp. 19–35, at 26.

7 Mulgan, T., ‘Utilitarianism for a Broken World’, Utilitas 27 (2015), pp. 92–114, at 111.

8 Hooker, Ideal Code, p. 166.

9 Hooker, Ideal Code, p. 166.

10 See Hooker, Ideal Code, pp. 172–3.

11 Bostrom, N., ‘Existential Risk Reduction as a Global Priority’, Global Policy 4 (2013), pp. 15–31, at 15.

12 An allusion to Bostrom, N., ‘Astronomical Waste: The Opportunity Cost of Delayed Technological Development’, Utilitas 15 (2003), pp. 308–14.

13 Parfit, D., Reasons and Persons (Oxford, 1984), pp. 453–4. See also Parfit, D., On What Matters (Oxford, 2011), pp. 616, 620.

14 An anonymous referee flagged the question of why persons fail to think of an existential threat as being like an attack by a malicious enemy. This pressing consideration is too big to address adequately in the article. I leave it for another day.

15 Parfit, Reasons, p. 454. For a similar conclusion that does not rely on the argument from additional lives, the interested reader may turn to Kahane, G., ‘Our Cosmic Insignificance’, Noûs 48 (2014), pp. 745–72.

16 For a detailed treatment of existential risk reduction see especially Bostrom, Global Priority.

17 In other words, my argument will not move those readers endorsing the Procreative Asymmetry – roughly, the idea that there is no moral reason to bring a person into existence just because her life would be happy, but there would be reason to prevent a life from coming into existence if it were not worth living (McMahan, J., ‘Problems of Population Theory’, Ethics 92 (1981), pp. 96–127). (See also McMahan, J., ‘Asymmetries in the Morality of Causing People to Exist’, Harming Future Persons: Ethics, Genetics and the Nonidentity Problem, ed. Roberts, M. and Wasserman, D. (Berlin, 2009), pp. 49–68.) I find no purchase in the Procreative Asymmetry. For those who do, though, my case can be reworded by narrowing our definition of existential catastrophe to what Tim Mulgan dubs a broken world. A broken world allows us to ignore the poor unlived masses of possible lives in our moral evaluations while preserving the notion of catastrophes that would decimate the long-term potential of humanity. Mulgan describes ‘[this as] a place where resources are insufficient to meet everyone's basic needs, where a chaotic climate makes life precarious, where each generation is worse-off than the last, and where our affluent way of life is no longer an option’ (Mulgan, Broken World, p. 93). Just as with extinction, the loss of comparable goods – goods actual persons would have enjoyed had history only gone more smoothly – is astronomical. For the remainder of the article I will refer only to extinction events for ease of explication.

18 To see this, suppose that the potential exists for at least 10^16 human lives of normal duration. If the risk of doomsday is x, with 1 > x > 0, then a back-of-the-envelope calculation shows that improving our chances of survival by even a tiny fraction of a per cent (reducing the risk to x − 0.00001) yields an expected 100,000,000,000 additional lives for our population.
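The arithmetic in this footnote can be checked directly. The sketch below uses only the figures the footnote itself assumes (the 10^16 estimate of potential future lives and the 0.00001 risk reduction); integer division keeps the result exact.

```python
# Back-of-the-envelope calculation from footnote 18, in exact integer
# arithmetic to avoid floating-point rounding.
potential_lives = 10**16   # assumed potential future human lives (footnote's figure)

# A risk reduction of 0.00001 is 1/100,000, i.e. a thousandth of a per cent.
risk_reduction_denominator = 100_000

# Expected additional lives = potential lives x reduction in extinction risk.
expected_additional_lives = potential_lives // risk_reduction_denominator

print(expected_additional_lives)  # 100000000000, i.e. one hundred billion
```

The expected value is linear in the risk reduction, which is why even a minuscule improvement in survival odds dominates when multiplied against 10^16 potential lives.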

19 Parfit, Reasons, p. 388.

20 To clarify the difference between direct and indirect approaches: one indirectly averts the threat of an existential catastrophe by solving intermediary problems whose benefits extend beyond that particular intervention. A direct approach to reducing the risk of nuclear war, for example, could involve sending UN inspectors to Iran to ensure regulations are being followed with respect to its nuclear facilities. This solution offers benefits isolated to the nuclear threat. On the other hand, sending aid packages to feed and clothe the poorest of Iran's people extends beyond helping the poor by (arguably) stabilizing geopolitics in that region of the world.

21 See especially Nick Beckstead's ‘On the Overwhelming Importance of Shaping the Far Future’ (PhD dissertation, Rutgers, 2013). These are sometimes referred to as cascading, knock-on or flow-through effects.

22 Beckstead, Overwhelming Importance, p. 6.

23 The complexity of priority-setting according to ripple effects from persistent global harms is, to be sure, immense. It could (correctly) be said that such a rule is too complicated to be effectively internalized by persons. However, this shouldn't count as another cost of internalization since calculations of this sort should take place at an institutional (or collective) level, not around the dinner table. Rather, individuals have the burden of internalizing a general rule towards minimizing existential risk.

24 See Bostrom, Global Priority, pp. 21, 24–5.

25 Hilary Greaves raised the worry (in personal communication) that speeding up our development in this way might carry its own existential risks (even if removing persistent global harms happens to limit or block certain pathways towards humanity's doom). To illustrate, we might put ourselves in the position of creating a dangerous technology sooner. My own reaction is that anthropogenic threats of this kind present as a step risk, not a state risk. That is to say: the severity of the risk depends on how we transition from an earlier to a later stage – not on how long we are exposed to the possible threat. In Bostrom's words, ‘[the] amount of step risk associated with a transition is usually not a simple function of how long the transition takes. One does not halve the risk of traversing a minefield by running twice as fast’ (Bostrom, N., Superintelligence (Oxford, 2014), p. 234). As I see things, the merits of an indirect approach should be determined by considering how our leverage with respect to abating catastrophic risk is affected. As I have argued, eradicating global hunger will mean we are better prepared for handling existential threats. Moreover, I argued that allowing global hunger to continue ravaging our world will harm our chances of surviving such threats. So, we are better off even if the threat presents sooner than it would have had global hunger continued to plague the world's poorest. Therefore, it seems we can expect that the threat of catastrophe will not go up simply because we accelerate our development. That being said, however, we are problematically clueless about how particular interventions will shape the far future. For relevant discussion, see especially H. Greaves, ‘Cluelessness’, Proceedings of the Aristotelian Society, suppl. vol. 116 (2016).

26 Hooker, Ideal Code, p. 172.

27 Mulgan, Broken World, p. 113.

28 I wish to thank Mikio Agaki, Campbell Brown, Ben Colburn, James Humphries, Robyn Kath, Chris Mills, Japa Pallikkathayil and Catherine Robb for their insightful feedback on earlier drafts of the article. I am especially indebted to two anonymous referees for providing highly detailed comments that sharpened both my thesis and its presentation. For fruitful discussion, I am grateful to Nick Bostrom, John Cusbert, Daniel Dewey, Eric Drexler, Hilary Greaves and Daniel Kokatajlo. Finally, I extend my gratitude towards the Future of Humanity Institute for hosting me, as well as supplying the grist from which this article's arguments were formed.