Robots and Respect: A Response to Robert Sparrow

  • Ryan Jenkins and Duncan Purves
Extract

Robert Sparrow recently argued in this journal that several initially plausible arguments in favor of the deployment of autonomous weapon systems (AWS) in warfare are in fact flawed, and that the deployment of AWS faces a serious moral objection. Sparrow's argument against AWS relies on the claim that they are distinct from accepted weapons of war in that they either fail to transmit an attitude of respect for enemy combatants or, worse, they transmit an attitude of disrespect. In this reply we argue that this distinction between AWS and widely accepted weapons is illusory, and therefore cannot ground a moral difference between AWS and existing methods of waging war. We also suggest that if deploying conventional soldiers in a given situation would be permissible, but we could expect to cause fewer civilian casualties by instead deploying AWS, then it would be consistent with an intuitive understanding of respect to deploy AWS in this situation.


NOTES

1 See Sparrow, Robert, “Robots and Respect: Assessing the Case against Autonomous Weapon Systems,” Ethics & International Affairs 30, no. 1 (2016), pp. 93–116.

2 Nagel, Thomas, “War and Massacre,” Philosophy & Public Affairs 1, no. 2 (1972).

3 Purves, Duncan, Jenkins, Ryan, and Strawser, Bradley J., “Autonomous Machines, Moral Judgment, and Acting for the Right Reasons,” Ethical Theory and Moral Practice 18, no. 4 (2015), pp. 851–72.

4 Norcross, Alastair, “Off Her Trolley? Frances Kamm and the Metaphysics of Morality,” Utilitas 20, no. 1 (2008), p. 65.

5 To be sure, the possibility of metaphysical indeterminacy in targeting decisions seems to be the impetus for the “responsibility gaps” objection, which Sparrow notes. We have addressed this objection elsewhere. See Purves, Jenkins, and Strawser, “Autonomous Machines.”

6 See Slovic, Paul, “Perception of Risk,” Science 236 (1987), pp. 280–85. See also Starr, Chauncey, “Social Benefit Versus Technological Risk,” Science 165 (1969), p. 1232.

7 Indeed, Sparrow is willing to entertain this possibility. We think this outcome is, in fact, quite likely. Sparrow is worried, and justifiably so, about a machine's ability to understand and appreciate the nature of morality as a meaning-laden and contextual domain of knowledge and behavior. However, recent advances in machine learning, which have been nothing short of staggering, have rendered these concerns about machine “understanding” moot. AlphaGo and Watson have made it clear that machines can outperform humans in domains where we once thought we enjoyed a great privilege and indomitable superiority. And this is true whether or not these machines understand the context in which they are acting, or the meaning and significance of their choices.

8 The fact that we cannot legitimately demand that AWS minimize civilian casualties seems significant only if it generates a “responsibility gap” that renders attributions of responsibility for the actions of AWS difficult or impossible. But this is not a new problem. For discussions of the problem of responsibility attributions, see Matthias, Andreas, “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata,” Ethics and Information Technology 6, no. 3 (2004), pp. 175–83; Sparrow, Robert, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (2007), pp. 62–77; and Roff, Heather, “Killing in War: Responsibility, Liability, and Lethal Autonomous Robots,” in Fritz Allhoff, Nicholas G. Evans, and Adam Henschke, eds., Routledge Handbook of Ethics and War: Just War Theory in the Twenty-First Century (Milton Park, Oxon: Routledge, 2013). The inability to make moral demands of machines may ultimately count against deploying human soldiers and in favor of deploying AWS. Robillard, Michael, and Strawser, Bradley (“The Moral Exploitation of Soldiers,” Public Affairs Quarterly 30, no. 2 [2016]) have argued that soldiers are often victims of “moral exploitation” by having moral responsibility “outsourced” to them in virtue of their vulnerable position. Replacing human soldiers with AWS holds the potential to resolve this deontological worry about exploitation.

9 McMahan, Jeff, Killing in War (New York: Oxford University Press, 2009).

10 Jenkins, Ryan, “Cyberwarfare as Ideal War,” in Adam Henschke, Fritz Allhoff, and Bradley Strawser, eds., Binary Bullets: The Ethics of Cyberwarfare (New York: Oxford University Press, 2016).

11 Purves, Jenkins, and Strawser, “Autonomous Machines.”

