Published online by Cambridge University Press: 12 September 2016
Robert Sparrow recently argued in this journal that several initially plausible arguments in favor of the deployment of autonomous weapon systems (AWS) in warfare are in fact flawed, and that the deployment of AWS faces a serious moral objection. Sparrow's argument against AWS relies on the claim that they are distinct from accepted weapons of war in that they either fail to transmit an attitude of respect for enemy combatants or, worse, they transmit an attitude of disrespect. In this reply we argue that this distinction between AWS and widely accepted weapons is illusory, and therefore cannot ground a moral difference between AWS and existing methods of waging war. We also suggest that if deploying conventional soldiers in a given situation would be permissible, but we could expect to cause fewer civilian casualties by instead deploying AWS, then it would be consistent with an intuitive understanding of respect to deploy AWS in this situation.
5 To be sure, the possibility of metaphysical indeterminacy in targeting decisions seems to be the impetus for the “responsibility gaps” objection, which Sparrow notes. We have addressed this objection elsewhere. See Purves, Jenkins, and Strawser, “Autonomous Machines.”
7 Indeed, Sparrow is willing to entertain this possibility. We, in fact, think this outcome is quite likely. Sparrow is worried, and justifiably so, about a machine's ability to understand and appreciate the nature of morality as a meaning-laden and contextual domain of knowledge and behavior. However, recent advances in machine learning, which have been nothing short of staggering, have rendered these concerns about machine “understanding” moot. AlphaGo and Watson have made it clear that machines can outperform humans in domains where we once thought we enjoyed a great privilege and indomitable superiority. And this is true whether or not these machines understand the context in which they are acting, or the meaning and significance of their choices.
8 The fact that we cannot legitimately demand that AWS minimize civilian casualties seems significant only if it generates a “responsibility gap” that renders attributions of responsibility for the actions of AWS difficult or impossible. But this is not a new problem. For discussions of the problem of responsibility attributions, see Andreas Matthias, “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata,” Ethics and Information Technology 6, no. 3 (2004), pp. 175–83; Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (2007), pp. 62–77; and Heather Roff, “Killing in War: Responsibility, Liability, and Lethal Autonomous Robots,” in Fritz Allhoff, Nicholas G. Evans, and Adam Henschke, eds., Routledge Handbook of Ethics and War: Just War Theory in the Twenty-First Century (Milton Park, Oxon: Routledge, 2013). The inability to make moral demands of machines may ultimately count against deploying human soldiers and in favor of deploying AWS. Michael Robillard and Bradley Strawser (“The Moral Exploitation of Soldiers,” Public Affairs Quarterly 30, no. 2 [2016]) have argued that soldiers are often victims of “moral exploitation” by having moral responsibility “outsourced” to them in virtue of their vulnerable position. Replacing human soldiers with AWS holds the potential to resolve this deontological worry about exploitation.
9 Jeff McMahan, Killing in War (New York: Oxford University Press, 2009).
10 Ryan Jenkins, “Cyberwarfare as Ideal War,” in Adam Henschke, Fritz Allhoff, and Bradley Strawser, eds., Binary Bullets: The Ethics of Cyberwarfare (New York: Oxford University Press, 2016).
11 Purves, Jenkins, and Strawser, “Autonomous Machines.”