Sparrow, Robert, “Robots and Respect: Assessing the Case against Autonomous Weapon Systems,” Ethics & International Affairs 30, no. 1 (2016), pp. 93–116.
Nagel, Thomas, “War and Massacre,” Philosophy & Public Affairs 1, no. 2 (1972).
Purves, Duncan, Jenkins, Ryan, and Strawser, Bradley J., “Autonomous Machines, Moral Judgment, and Acting for the Right Reasons,” Ethical Theory and Moral Practice 18, no. 4 (2015), pp. 851–72.
Norcross, Alastair, “Off Her Trolley? Frances Kamm and the Metaphysics of Morality,” Utilitas 20, no. 1 (2008), p. 65.
To be sure, the possibility of metaphysical indeterminacy in targeting decisions seems to be the impetus for the “responsibility gaps” objection, which Sparrow notes. We have addressed this objection elsewhere. See Purves, Jenkins, and Strawser, “Autonomous Machines.”
Slovic, Paul, “Perception of Risk,” Science 236 (1987), pp. 280–85. See also Starr, Chauncey, “Social Benefit Versus Technological Risk,” Science 165 (1969), p. 1232.
Indeed, Sparrow is willing to entertain this possibility; we think it is, in fact, quite likely. Sparrow is worried, and justifiably so, about a machine's ability to understand and appreciate the nature of morality as a meaning-laden and contextual domain of knowledge and behavior. However, recent advances in machine learning, which have been nothing short of staggering, have rendered these concerns about machine “understanding” moot. AlphaGo and Watson have made it clear that machines can outperform humans in domains where we once thought we enjoyed a great privilege and indomitable superiority. And this is true whether or not these machines understand the context in which they are acting, or the meaning and significance of their choices.
The fact that we cannot legitimately demand that AWS minimize civilian casualties seems significant only if it generates a “responsibility gap” that renders attributions of responsibility for the actions of AWS difficult or impossible. But this is not a new problem. For discussions of the problem of responsibility attributions, see
Matthias, Andreas, “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata,” Ethics and Information Technology 6, no. 3 (2004), pp. 175–83; Sparrow, Robert, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (2007), pp. 62–77; and Roff, Heather, “Killing in War: Responsibility, Liability, and Lethal Autonomous Robots,” in Fritz Allhoff, Nicholas G. Evans, and Adam Henschke, eds., Routledge Handbook of Ethics and War: Just War Theory in the Twenty-First Century (Milton Park, Oxon: Routledge, 2013). The inability to make moral demands of machines may ultimately count against deploying human soldiers and in favor of deploying AWS.
Michael Robillard and Bradley Strawser have argued that soldiers are often victims of “moral exploitation” by having moral responsibility “outsourced” to them in virtue of their vulnerable position. See Robillard, Michael, and Strawser, Bradley, “The Moral Exploitation of Soldiers,” Public Affairs Quarterly 30, no. 2 (2016). Replacing human soldiers with AWS holds the potential to resolve this deontological worry about exploitation.
Jeff McMahan, Killing in War (New York: Oxford University Press, 2009).
Ryan Jenkins, “Cyberwarfare as Ideal War,” in Adam Henschke, Fritz Allhoff, and Bradley Strawser, eds., Binary Bullets: The Ethics of Cyberwarfare (New York: Oxford University Press, 2016).
Purves, Jenkins, and Strawser, “Autonomous Machines.”