
Banning Autonomous Weapons: A Legal and Ethical Mandate

Published online by Cambridge University Press:  01 December 2023

Mary Ellen O'Connell*
Affiliation:
University of Notre Dame, Indiana, United States (MaryEllenOConnell@nd.edu)

Abstract

ChatGPT launched in November 2022, triggering a global debate on the use of artificial intelligence (AI). A debate on AI-enabled lethal autonomous weapon systems (LAWS) has been underway far longer. Two sides have emerged: one in favor and one opposed to an international law ban on LAWS. This essay explains the position of advocates of a ban without attempting to persuade opponents. Supporters of a ban believe LAWS are already unlawful and immoral to use without the need of a new treaty or protocol. They nevertheless seek an express prohibition to educate and publicize the threats these weapons pose. Foremost among their concerns is the “black box” problem. Programmers cannot know what a computer operating a weapons system empowered with AI will “learn” from the algorithm they use. They cannot know at the time of deployment if the system will comply with the prohibition on the use of force or the human right to life that applies in both war and peace. Even if they could, mechanized killing affronts human dignity. Ban supporters have long known that “AI models are not safe and no one knows how to reliably make them safe” or morally acceptable in taking human life.

Type
Roundtable: Global Governance and Lethal Autonomous Weapon Systems
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press on behalf of Carnegie Council for Ethics in International Affairs

The ongoing debate over the law and ethics of lethal autonomous weapon systems (LAWS) reflects two very different perspectives. One sees the development of such weapons as an essential part of national security that depends on a strong military in possession of the latest weapons technology.Footnote 1 If legal or ethical norms impede staying ahead in the race for new weapons, those norms need to be reinterpreted or modified. The other perspective maintains that security depends first and foremost on robust respect for legal principles that are derived from fundamental moral principles. Such principles are not subject to reinterpretation or modification. The proper role for military force is defending the rule of law, not superseding it.Footnote 2

The military superiority perspective has been shaped by the twentieth-century political theory of realism.Footnote 3 Realism has been deeply influential and helps account for why major militaries spend far more on weapons development than on green technologies or on national and international governance institutions. It also helps explain why China, Russia, and the United States have made common cause to prevent a ban on LAWS.Footnote 4 The other perspective dates to the emergence of law among the earliest human groups as an alternative to physical force in the ordering of society.Footnote 5 This rule of law perspective has been shaped by the theory of natural law that combines legal and moral teaching and can be seen in the efforts by many states, technologists, civil society movements—such as the Campaign to Stop Killer Robots—and the Vatican to ban LAWS.Footnote 6

The aim of this essay is not to attempt to bridge the gap between the realist and rule-of-law perspectives. The goal is to explain why those who hold the second perspective, that security depends on respect for legal principles, also support a ban on LAWS. The analysis proceeds in three parts. The first section briefly describes LAWS, emphasizing aspects of the technology that make fully autonomous robotic weapons inherently unlawful and unethical under prevailing legal and normative standards. The second section presents an overview of the standards, including why they are not subject to reinterpretation to justify the use of LAWS. The third section takes up and challenges two common arguments against a ban: (1) that LAWS can comply more closely with rules than humans can and (2) that LAWS and their use are inevitable with or without a ban. It is better, per this latter argument, to find some legal justification for them or else the law will be dismissed as an obstacle to security. With the launch of ChatGPT in November 2022, however, awareness of the potential harm posed by artificial intelligence, or AI, is spreading rapidly.Footnote 7 LAWS rely on AI and are among the most dangerous of all AI applications, creating weapons of potential mass destruction.

LAWS Defined

According to the U.S. Congressional Research Service, LAWS are “a special class of weapon systems that use sensor suites and computer algorithms to independently identify a target and . . . destroy the target without manual human control of the system.”Footnote 8 In the United States, ongoing research by the Department of Defense and the defense industry on LAWS follows from the development of unmanned weapon systems known as drones. Air, land, and sea drones already have the capacity to attack with significant independence from human operators. Drones use sensors and can be programmed to strike particular targets without much input from a human being at a control station. LAWS go a critical step beyond drones, however, by being programmable to select targets unknown at the time the weapon is deployed. Also, unlike current drones, there is no need for a human operator to be associated with a weapon after deployment. LAWS can roam in search of targets without regard to time, place, or human oversight. Any restrictive parameters included in the original programming can be superseded by the computer-learning algorithm.

AI is the distinctive feature of LAWS. Weapon systems equipped with AI have the “ability to operate independently and engage targets without being programmed to specifically target an individual object or person. This includes the capability to react to a changing set of circumstances . . . The second, interrelated, aspect is the capability to make discretionary decisions.”Footnote 9 Not only will LAWS make discretionary target-selection decisions; the systems will also be capable of deciding where and when to attack. The complexity of the computer-learning functions that produce such capacity means that the most advanced computer scientists cannot foresee what decisions computerized weapon systems will reach. Learning programs are a “black box.”Footnote 10 The term “black box” refers to the impossibility of predicting how a computer-learning program will reach decisions, owing to the problem of overseeing massive data inputs.Footnote 11 There is “no symmetry between prediction and retrodiction . . . since an infinity of past histories may end up in the final state R(t).”Footnote 12 In other words, “Many of these algorithms are black boxes even to their creators” because “there is no straightforward way to map out the decision-making process of these complex networks of artificial neurons.”Footnote 13

Learning programs dynamically generate their decisions based on input data, meaning that even the primary AI coders will lack the ability to understand their decision-making strategies. Elke Schwarz, writing for a blog of the International Committee of the Red Cross, warns, “AI might engage in calculations that are not intelligible even to programmers or engineers. For human decision makers to be able to retain agency over the morally relevant decisions made with AI, they would need a clear insight into the AI black box, to understand the data, its provenance and the logic of its algorithms.”Footnote 14 To date, the best way to begin to understand how AI works is to observe the results of AI decisions.Footnote 15 In other words, current knowledge of AI requires trial and error. In the case of AI weapons, this means observing whom the program has selected to kill. Eventually, observing such decisions might result in knowledge of why selections are made. That knowledge is not available now.

Because LAWS are the next step after drones, it might be assumed that they will deploy missiles and bombs for use by the military on a legally defined battlefield only.Footnote 16 The first weaponized drones carried two Hellfire missiles and were part of the U.S. military arsenal. It was not, however, the military that first used a weaponized drone. The CIA attempted to carry out an assassination using a drone far from any combat zone in 2000.Footnote 17 Today, police, the military, and private citizens all possess drones. They are designed for air, sea, and land operations and carry a wide variety of munitions. Further, the vehicles themselves can become munitions, as seen with kamikaze drones. Nano-drones exist that can in principle carry out targeted assassinations, whether by using their blades to slice a target to pieces or with other ordnance, such as mortars, grenades, and light machine guns. Presumably, any munition could be attached to them, including nuclear bombs and missiles. If a drone can carry these weapons, so, too, we must assume, can LAWS.

Weapons Law

The potential scenarios involving LAWS are myriad, which means the relevant law is far broader than the law of armed conflict. At least four interrelated legal categories apply: human rights law, law on resort to force (jus ad bellum), law on the conduct of force (jus in bello), and arms control law.

Under human rights law, authorized law enforcement agents may use lethal force to save lives under immediate threat.Footnote 18 The only other rights to intentionally use lethal force are provided in the United Nations Charter principles on initial resort to force or in international humanitarian law (IHL), including the 1949 Geneva Conventions, on using lethal force during armed conflict. In any case of doubt as to whether a situation falls under human rights law, the UN Charter, or IHL, the higher protection of human rights law is presumed to apply.

The UN Charter regime on resort to force begins with Article 2(4), which generally prohibits the interstate use of force. The Charter's drafters intended “to state in the broadest terms an absolute all-inclusive prohibition.” They wanted “no loopholes.”Footnote 19 The UN Security Council may authorize the use of armed force when other nonviolent means to address a threat to the peace, breach of the peace, or act of aggression prove inadequate.Footnote 20

Following Russia's full-scale invasion of Ukraine in 2022, however, it is unlikely that the Security Council again will authorize a major intervention for the foreseeable future. Authorizations have slowed owing to the poor results of peace enforcement missions. China and Russia remain highly critical of NATO's abuse of the authorization to use force in Libya in 2011 that resulted in the ongoing civil war. The minimal trust that existed to win that authorization has evaporated. Russia's decision to invade Ukraine in 2014 and 2022 has further entrenched divisions.Footnote 21

Other than Security Council authorization, the only basis in the Charter to use force is provided by Article 51, the right of self-defense. The terms of Article 51 are highly restrictive. Resort to force is permitted “if an armed attack occurs” until the Security Council acts. The defending state must promptly report its actions. In addition, restrictions from general law beyond the Charter apply, the most important of which are the principles of attribution, necessity, and proportionality. The principle of attribution mandates that any use of force in self-defense must aim only at a state responsible for a significant armed attack on the defending state. Even then, a counterattack must be necessary to achieve legitimate defense and must be proportionate to the injury sustained. Defending states may request assistance in collective self-defense, but any such request must come from a government in effective control of the state. “Effective control” is the standard test in international law for identifying an entity that qualifies legally as a state's government.Footnote 22

If self-defense turns into armed conflict—intense armed fighting of some duration—international law permits the armed forces of a party to the conflict to intentionally target the armed forces of the adversary.Footnote 23 All targeting within armed conflict is governed by the four fundamental principles of IHL: civilian distinction, necessity, proportionality, and humanity. These and other in bello rules mean that certain weapons are unlawful to use, such as weapons that are indiscriminate and risk killing civilians in disproportionate numbers, as well as weapons that cause unnecessary suffering.

The United States was at the forefront of drafting the Charter principles and the IHL principles codified in the Geneva Conventions in the 1940s. In the wake of the catastrophe of World War II, states returned to first principles, to the immutable natural law norms prohibiting force and honoring human dignity, in particular the right to life in war and peace. By the early 1960s, however, realism and related theories such as legal positivism influenced a shift toward the Cold War mentality that elevated military security over law and diplomacy. Nevertheless, for decades, the United States and the Soviet Union manipulated facts rather than tampering with legal interpretations. The interventions in Hungary, Vietnam, Czechoslovakia, Afghanistan, and Grenada were falsely presented as justified on the basis of genuine government consent. With the end of the Cold War, the United States began using force in violation of the Charter, at first by saying little or nothing, then with newly invented legal claims.Footnote 24 To justify drone strikes beyond armed conflict zones, for example, the United States has cycled through four approaches: maintaining secrecy—refusing to acknowledge responsibility or to comment on covert operations; declaring a “global war on terrorism”; reinterpreting Article 51 to permit current attacks against future potential threats; and claiming that the United States has the right to attack when it deems a state is “unable or unwilling” to resolve a terrorist threat. The unable-or-unwilling argument may be the weakest of all: it lacks all features of legality, depending entirely on a subjective assessment by Washington rather than on the objective evidence of an armed attack that Article 51 requires.Footnote 25

The attempts to justify drone attacks outside armed conflict zones are particularly noteworthy for their departure from the principles American leaders promoted through the early 1960s. Attempting to stretch the law to meet policy overlooks the fact that the prohibition on the use of force and the inherent dignity of all human beings are premised on enduring natural law precepts and substantive principles. The prohibition on force and the fundamental IHL principles are peremptory natural law norms not subject to diminution through reinterpretation, new treaties, or new rules of customary law.Footnote 26 Such norms endure regardless of technological breakthroughs.

LAWS Banned

The prohibition on the use of force, which aims at protecting the human right to life, supplies the normative basis for the worldwide movement to ban LAWS. Proponents of LAWS have responded to the movement with various arguments in addition to the realist mandate of staying ahead in the arms race. Two arguments are reviewed here. First, defenders of LAWS argue that computers will follow the law more closely than human beings are able to; and second, that, regardless of the law, autonomous weapons are the future. It is better, therefore, to adjust the law to meet the technology than for the law to be ignored.

With respect to LAWS being superior to human operators, the U.S. government has argued:

Emerging technologies in the area of LAWS . . . reduce the risk of civilian casualties and damage to civilian objects . . . [;]

Emerging technologies in the area of lethal autonomous weapons systems could be used to create entirely new capabilities that would increase the ability of States to reduce the risk of civilian casualties in applying force …

Rather than trying to stigmatize or ban such emerging technologies in the area of lethal autonomous weapon systems, States should encourage such innovation that furthers the objectives and purposes of the Convention [on Certain Conventional Weapons].Footnote 27

These points do not, however, engage the preliminary legal issue of the right to resort to force in the first place. They are concerned with battlefield use of LAWS once an armed conflict has been initiated. Battlefield use is the topic of discussion at the weapons review meetings of parties to the Convention on Certain Conventional Weapons (CCW) in Geneva.Footnote 28 The CCW ensures that weapons do not kill indiscriminately or cause unnecessary suffering. These are important concerns regarding LAWS, but the CCW does not deal with the essential principles restricting the initial resort to force. If an autonomous weapons system has resorted to armed force in violation of the prohibition on the use of force, it is irrelevant that it complies with the principles governing discrimination and humanity. Because the learning function of LAWS leads to unknowable outcomes, whether a robot will decide to resort to force in violation of international law cannot be predicted. Moreover, programming may incorporate flawed views on when resort to force is lawful. A human may program a particular target into a robot's code before launch and include appropriate information on when to use that force, but an influx of contingent and unknowable information may easily shift the robot away from its programmed directives. It is foreseeable that because LAWS are “complex, unpredictable, and extremely fast in their functioning, these systems would have the potential to make armed conflicts spiral rapidly out of control, leading to regional and global instability.”Footnote 29 Equally, they have the potential to start armed conflicts in violation of the law—a potentially greater cause of future global instability than exacerbating existing wars.

In addition to unlawful resort to force, AI weapons must be treated as though they could potentially kill indiscriminately. Not only do programmers not know who might be killed; they do not know how those people will be killed. A programmer may specify members of ISIS as targets at the time of deploying the weapon, but the system may then “learn” to attack Iran's anti-government protestors. Many of the potential data inputs for ISIS apply equally to the Iranian government's characterization of protestors—Iran calls them “terrorists,” and they are surveilled, detained without a fair trial, tortured, and executed. The point is that the results of learning programs are unpredictable, creating the potential for indiscriminate killing, which is a ground for banning weapons under the CCW. Arthur Holland Michel agrees that “the unrestricted employment of a completely unpredictable autonomous weapon system that behaves in entirely unintelligible ways would likely be regarded as universally injudicious and illegal.”Footnote 30

Even if the problem of unpredictability—the black box problem—can be solved, the missing human conscience is an insurmountable barrier to the acceptability of LAWS. IHL requires that military commanders oversee all battlefield operations or face criminal liability.Footnote 31 The United States recognizes this but attempts to water it down to fit autonomous weapons. Department of Defense Directive 3000.09 “requires [that] all weapons systems, including LAWS . . . ‘allow commanders and operators to exercise appropriate levels of human judgment over the use of force.’”Footnote 32 However, “‘human judgment over the use of force’ does not require manual human ‘control’ of the weapons system,” only “broader human involvement.”Footnote 33

“Broader human involvement” appears to fall short of the responsible commander's oversight duty. That duty would be met under the guidelines proposed by prominent computer scientist Noel Sharkey. He sets out parameters for “meaningful human control” of LAWS. They entail: (1) full contextual and situational awareness of a specific attack; (2) the ability to perceive unexpected change in circumstances; (3) retention of power to suspend or abort the attack; and (4) time for deliberation on the significance of the attack.Footnote 34 Sharkey's standard not only meets the legal requirement under the law of armed conflict; it also answers the concern of many commentators that LAWS cannot be held responsible for violations of IHL or human rights law. Weapons, unlike human beings, cannot be put on trial for criminal actions.Footnote 35 Human involvement at the level Sharkey details amounts to a ban on autonomous systems.

In addition to the oversight problem, the lack of a human conscience also raises a novel human rights concern: mechanized killing. Today, the standard way cattle are processed for human consumption in the United States begins with their slaughter by robots. A human conscience is not focused on the killing once the decision has been taken that a group of animals will be killed. In the future, human beings could also be subjected to death by machine. The computer will decide who dies and carry out the actual killing. The Vatican's Archbishop Silvano Tomasi emphasizes that “decisions over life and death inherently call for human qualities, such as compassion and insight.”Footnote 36 Although “imperfect human beings may not perfectly apply such qualities in the heat of war, these qualities are neither replaceable nor programmable.”Footnote 37 Machines can never, by definition, possess these human qualities. Out of respect for human dignity, a person needs to be present when the decision to intentionally take the life of a being possessing dignity is made. Regardless of how sophisticated computers become in exemplifying compassion, empathy, altruism, emotion, or other qualities, the machine will be mimicking, not experiencing them as only a human can. Even the most emotionally or psychologically impaired human being remains human, imbued with dignity. He or she may not make morally appropriate decisions to kill, but the decisions made by a person with dignity will be more respectful of the victim's own human dignity than any mechanized form of slaughter.

All of the principles emphasized here—the prohibition on force, the right to life, the protections in combat—cannot be changed to suit proponents of LAWS. Similar arguments have been tried since the end of the Cold War—arguments that aim at removing existing legal restrictions on both resort to force and on the use of new weapons. Natural law norms, however, endure. They are not subject to elimination through new arguments or reinterpretations of the law. While they cannot diminish natural law norms, such arguments can lead to confusion about what the law genuinely requires, and confusion plays a role in noncompliance. Russia's war against Ukraine demonstrates the cost of weakening the restraining norms on the use and conduct of armed force. Even if Russia, China, and the United States continue to block a ban at the CCW, supporters of a ban will be right to persist in the effort at the CCW and in other forums. The campaign for a ban reminds humanity to “be mindful that these architectures might affect our ethical thinking and acting in ways that move ever-further from a humanist framework and etch ever closer to the purely cost-calculative logic of machines within which our moral agency inevitably atrophies.”Footnote 38

The fact that the principles prohibiting LAWS are enduring also responds to the argument that LAWS are inevitable, and that a ban is therefore pointless. The most that can be hoped for, so this argument goes, is a set of rules that place restrictions on the use and possession of LAWS. Better to have some laws in place than a ban that will render the law a shibboleth.Footnote 39 The argument also typically incorporates the view that rules to be applied to the use of LAWS will need to be created for this specific purpose. The technology is new, so the assertion is that the rules will need to be tailor made to fit.

Both arguments—the “ban as pointless” and the need for tailor-made rules—are as old as weapons research and development. In the past, they helped push states to accept weapons regardless of the law, for example, the use of newly invented airplanes to bomb undefended cities and the use of unmanned systems to launch missile strikes in the absence of an armed attack (as required under UN Charter Article 51) and far from armed conflict hostilities. If, however, a ban is put in place and states deploy LAWS despite it, they will be lawbreakers. The appearance of new technology provides no defense. Even without a ban, the principles set out here are fully sufficient to make the use of LAWS unlawful.Footnote 40 The technology may be new, but its purpose is ancient. Weapons from stones to LAWS are meant to kill. Law on the right to life manifested in the prohibition on the use of force and the restrictions on killing in combat will always be applicable, regardless of what weapons research and development may deliver next.

Conclusion

An express ban will help promote the international legal principles that prohibit killing by fully autonomous robotic weapons. While not a necessity—because natural law norms apply now to prohibit LAWS—a ban can educate technologists and those acquiring their inventions as the dark facts of AI become known. We already know “that current state-of-the-art AI models are not safe and no one knows how to reliably make them safe.”Footnote 41 The majority of states, as well as prominent human rights organizations, the Vatican,Footnote 42 and many—perhaps most—AI researchers, including Geoffrey Hinton, widely known as the “godfather of AI,” stand on the side of banning AI weapons.Footnote 43 Supporters of a ban have ancient, enduring law on their side. What is needed now are more international lawyers fully committed to teaching, writing, and litigating about the actual law at issue, not the law some governments might prefer. Greater knowledge of this law is needed, as well as insights into how to improve compliance with it. Ethicists and international relations scholars can teach that security lies in the consensus legal norms of the world community that draw on ancient, transcendent moral insights. Governments still mired in an arms race mentality are on the wrong side of history.


Notes

1 See Congressional Research Service, Defense Primer: U.S. Policy on Lethal Autonomous Weapons Systems (Washington, D.C.: Congressional Research Service, 2022), crsreports.congress.gov/product/pdf/IF/IF11150.

2 See Mary Ellen O'Connell, “Reestablishing the Rule of Law as National Security,” in Karen J. Greenberg, ed., Reimagining the National Security State: Liberalism on the Brink (Cambridge, U.K.: Cambridge University Press, 2020), pp. 154–69.

3 On realism, see Hans J. Morgenthau, Politics among Nations: The Struggle for Power and Peace, rev. Kenneth W. Thompson and W. David Clinton (Boston: McGraw-Hill, 2006); and W. Julian Korab-Karpowicz, “Political Realism in International Relations,” in Stanford Encyclopedia of Philosophy, last updated Summer 2018, plato.stanford.edu/entries/realism-intl-relations/.

4 Nina Werkhäuser, “‘Killer Robots’: Will They Be Banned?,” Deutsche Welle, July 25, 2022, www.dw.com/en/killer-robots-will-they-be-banned/a-62587436.

5 Mary Ellen O'Connell, “Measuring the Art of International Law,” Chicago Journal of International Law 22, no. 1 (Summer 2021), p. 136.

6 For more on the supporters of a ban on LAWS, see the “Our Member Organisations” page on the Stop Killer Robots website at www.stopkillerrobots.org/a-global-push/member-organisations/.

7 Kevin Roose, “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn,” New York Times, May 30, 2023, www.nytimes.com/2023/05/30/technology/ai-threat-warning.html.

8 Congressional Research Service, Defense Primer, p. 1 (italics added).

9 Markus Wagner, “The Dehumanization of International Humanitarian Law: Legal, Ethical, and Political Implications of Autonomous Weapons Systems,” Vanderbilt Journal of Transnational Law 47 (2014), pp. 1371–424, at p. 1383.

10 Karen Yeung, “Reconceptualizing International Law's Role in the Governance of AI: Autonomous Weapons as a Case Study” (remarks, closing plenary, American Society of International Law annual meeting, sponsored by the Hague, March 26, 2021).

11 The UN Institute for Disarmament Research defines a black box as “a system for which we know the inputs and outputs but can't see the process by which the former turns into the latter.” Arthur Holland Michel, The Black Box, Unlocked: Predictability and Understandability in Military AI (Geneva: United Nations Institute for Disarmament Research, 2020), p. iii, unidir.org/sites/default/files/2020-09/BlackBoxUnlocked.pdf.

12 Mario Bunge, “A General Black Box Theory,” Philosophy of Science 30, no. 4 (October 1963), pp. 346–58, at p. 347.

13 Yavar Bathaee, “The Artificial Intelligence Black Box and the Failure of Intent and Causation,” Harvard Journal of Law & Technology 31, no. 2 (2018), pp. 891–92: “A network's reasoning is embedded in the behavior of thousands of simulated neurons, arranged in dozens or even hundreds of intricately connected layers.” See also Armin Krishnan, “Enforced Transparency: A Solution to Autonomous Weapons as Potentially Uncontrollable Weapons Similar to Bioweapons,” in Jai Galliott, Duncan MacIntosh, and Jens David Ohlin, eds., Lethal Autonomous Weapons: Re-Examining the Law and Ethics of Robotic Warfare (Oxford: Oxford University Press, 2021), pp. 219–36, at p. 221.

14 Elke Schwarz, “The [Im]possibility of Meaningful Human Control for Lethal Autonomous Weapon Systems,” Humanitarian Law & Policy blog, International Committee of the Red Cross, August 29, 2018, blogs.icrc.org/law-and-policy/2018/08/29/im-possibility-meaningful-human-control-lethal-autonomous-weapon-systems/.

15 Roman V. Yampolskiy, “Unpredictability of AI: On the Impossibility of Accurately Predicting All Actions of a Smarter Agent,” Journal of Artificial Intelligence and Consciousness 7, no. 1 (2020), pp. 109–18, at pp. 110, 114.

16 For a discussion of concepts such as the “legally defined battlefield,” see Mary Ellen O'Connell, “Combatants and the Combat Zone,” University of Richmond Law Review 43 (2009), pp. 845–63.

17 Chris Woods, Sudden Justice: America's Secret Drone Wars (Oxford: Oxford University Press, 2015), p. 37.

18 Eighth United Nations Congress on the Prevention of Crime and the Treatment of Offenders, “Basic Principles on the Use of Force and Firearms by Law Enforcement Officials,” September 7, 1990, www.ohchr.org/en/instruments-mechanisms/instruments/basic-principles-use-force-and-firearms-law-enforcement.

19 “Documents of the United Nations Conference on International Organization,” vol. 6 (San Francisco, April 25, 1945), p. 335.

20 See Arts. 24, 25, 39–42, Charter of the United Nations, June 26, 1945.

21 See United Nations Security Council, Res. 1973 (2011), S/RES/1973 (2011), March 17, 2011; and Alexei Anishchuk, “UPDATE 1—Russia, China Urge Adherence to Libya Resolutions,” Reuters, updated June 16, 2011, www.reuters.com/article/libya-russia-china-idAFLDE75F13V20110616.

22 The authoritative interpretation of these restrictions on the use of force is found in two decisions of the UN's International Court of Justice. On the requirement of an armed attack as well as the principles of necessity and proportionality, see Military and Paramilitary Activities in and against Nicaragua (Nicaragua v. United States of America), Merits, Judgment, 1986 I.C.J. Reports (Jun. 27), paras. 195, 247, and 176. On attribution, necessity, and proportionality, see Oil Platforms (Islamic Republic of Iran v. United States of America), Merits, Judgment, 2003 I.C.J. Reports (Nov. 6), paras. 72 and 76–77. For a leading, comprehensive analysis of international law on the use of force, see Christine Gray, International Law and the Use of Force, 4th ed. (Oxford: Oxford University Press, 2018).

23 International Law Association, Final Report on the Meaning of Armed Conflict in International Law (Hague Conference, 2010), www.ila-hq.org/en_GB/documents/conference-report-the-hague-2010-12.

24 Mary Ellen O'Connell, “Forever Air Wars and the Lawful Purpose of Self-Defence,” Journal on the Use of Force and International Law 9, no. 1 (2022), pp. 33–54.

25 Ibid.

26 Ibid.

27 Charles Trumbull, “U.S. Statement on LAWS: Potential Military Applications of Advanced Technology” (statement, First Session of the Group of Governmental Experts on Lethal Autonomous Weapons Systems [LAWS], Geneva, March 25, 2019).

28 Office for Disarmament Affairs, United Nations, “Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects,” October 10, 1980, treaties.unoda.org/t/ccwc.

29 “Military and Killer Robots,” Stop Killer Robots, www.stopkillerrobots.org/military-and-killer-robots/.

30 Michel, The Black Box, Unlocked, p. 10.

31 The requirement of commander oversight of battlefield operations is found throughout IHL. It is reflected, for example, in the targeting rules of Additional Protocol I to the 1949 Geneva Conventions. See Arts. 48, 51(4)(a)–(c), 51(5)(b), and 57(2), Diplomatic Conference on the Reaffirmation and Development of International Humanitarian Law Applicable in Armed Conflicts, “Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts (Protocol I),” June 8, 1977. It is also seen in Rule 153 of the International Committee of the Red Cross's study of customary international law: “Commanders and other superiors are criminally responsible for war crimes committed by their subordinates if they knew, or had reason to know, that the subordinates were about to commit or were committing such crimes and did not take all necessary and reasonable measures in their power to prevent their commission, or if such crimes had been committed, to punish the persons responsible.” Jean-Marie Henckaerts and Louise Doswald-Beck, Customary International Humanitarian Law, vol. 1, Rules (New York: Cambridge University Press, 2005), p. 558, www.icrc.org/en/doc/assets/files/other/customary-international-humanitarian-law-i-icrc-eng.pdf. Judicial decisions reflect the standard as well. See Prosecutor v. Stanislav Galić, IT-98-29-A, Appeal Judgment, 2006 I.C.T.Y. (Nov. 30).

32 Congressional Research Service, Defense Primer, p. 1; and Department of Defense, quoted in ibid., p. 1.

33 Department of Defense, quoted in ibid., p. 1; and ibid., p. 1.

34 Noel Sharkey, “Guidelines for the Human Control of Weapons Systems” (working paper, International Committee for Robot Arms Control, April 2018).

35 Afonso Seixas-Nunes, “Autonomous Weapons Systems and the Procedural Accountability Gap,” Brooklyn Journal of International Law 46 (2021), pp. 421, 423.

36 Silvano Tomasi, quoted in Cindy Wooden, “Vatican Official Voices Opposition to Automated Weapons Systems,” National Catholic Reporter, May 14, 2014, www.ncronline.org/news/justice/vatican-official-voices-opposition-automated-weapons-systems?site_redirect=1.

37 Tomasi, quoted in ibid.

38 Schwarz, “The [Im]possibility of Meaningful Human Control for Lethal Autonomous Weapon Systems.”

39 A former legal adviser of the U.K. Foreign & Commonwealth Office, Daniel Bethlehem, argued that unless the law prohibiting the use of force was made weaker to accommodate states, they would simply ignore it. Daniel Bethlehem, “Self-Defense Against an Imminent or Actual Armed Attack by Nonstate Actors,” American Journal of International Law 106, no. 4 (October 2012), pp. 770–77.

40 For a history of how IHL has been consistently diluted to accommodate new military weapons technology, see Grégoire Chamayou, A Theory of the Drone, trans. Janet Lloyd (New York: New Press, 2015).

41 Paul Scharre, “AI's Gatekeepers Aren't Prepared for What's Coming,” Foreign Policy, June 19, 2023, foreignpolicy.com/2023/06/19/ai-regulation-development-us-china-competition-technology/.

42 On supporters of a ban, see note 6.

43 “It's a Machine's World,” On the Media, January 13, 2023, www.wnycstudios.org/podcasts/otm/episodes/on-the-media-its-a-machines-world.