Military Artificial Intelligence and the Principle of Distinction: A State Responsibility Perspective

Abstract

Military artificial intelligence (AI)-enabled technology might still be in its relative infancy, but the debate on how to regulate its use is already in full swing. Much of the discussion revolves around autonomous weapons systems (AWS) and the 'responsibility gap' they would ostensibly produce. This contribution argues that while some military AI technologies may indeed raise a range of conceptual hurdles in the realm of individual responsibility, they do not raise any unique issues under the law of state responsibility. The following analysis considers the latter regime and maps out crucial junctions in applying it to potential violations of the cornerstone of international humanitarian law (IHL) – the principle of distinction – resulting from the use of AI-enabled military technologies. It reveals that any challenges in ascribing responsibility in cases involving AWS would not be caused by the incorporation of AI, but stem from pre-existing systemic shortcomings of IHL and the unclear reverberations of mistakes thereunder. The article reiterates that state responsibility for the effects of AWS deployment is always retained through the commander's ultimate responsibility to authorise weapon deployment in accordance with IHL. It is proposed, however, that should the so-called fully autonomous weapon systems – that is, machine learning-based lethal systems that are capable of changing their own rules of operation beyond a predetermined framework – ever be fielded, it might be fairer to attribute their conduct to the fielding state, by conceptualising them as state agents, and treat them akin to state organs.


Introduction
The adoption of AI in military practice has been described as the third revolution in military affairs, after gunpowder and nuclear weapons. 1 Despite having been envisioned over four decades ago, military AI caught both the academic and political centres of the international community off guard. Nowhere is this bewilderment more self-evident than in the international law realm, where heated debates continue on whether the existing legal frameworks, in particular international humanitarian law (IHL), are sufficient to account for the drastic change that military AI is expected to bring to warfare. In late 2019, the Group of Governmental Experts (GGE), established by the state parties to the Convention on Certain Conventional Weapons to work on the challenges raised by lethal autonomous weapons systems (LAWS), produced a list of 11 tentative 'Guiding Principles' but failed to reach agreement on the very definition of LAWS or the concept of 'autonomy' with regard to such systems. 2 While many questions remain unanswered, two aspects are clear. First, there is no turning back on the defence and security potential of AI, already seen in military circles as a pervasive technology. 3 Second, and equally importantly, AI-enabled military technology goes beyond LAWS, and is already seen in armed conflicts in the form of, inter alia, risk-assessing predictive algorithms used in a variety of military systems. 4 The global political landscape suggests that a comprehensive prohibition of either LAWS or AI-enabled military technology is not likely to be adopted in the foreseeable future. 5 Yet, given the significant technological advances of recent years, a steady increase in the integration of AI in military systems is inevitable. 6 Sooner or later, such systems, just like all other weaponry, will malfunction and result in, inter alia, injuries to civilians (as system malfunctions are inexorable in complex, coupled systems), bringing to the fore the question of who is responsible for them. 7 While it is widely accepted that IHL fully applies to the use of AI-enabled technology, 8 the issue of accountability for IHL violations resulting from the use of such technology remains highly contentious. In fact, for over a decade now, international legal scholarship has grappled with the ostensible 'responsibility gap' 9 that AI-enabled military technology, in general, and LAWS, in particular, would create. 10
A large part of the debate has centred on the challenges of holding individuals responsible for war crimes perpetrated 'by' AI. 11 Some attention has been devoted to the idea of 'attributing electronic personhood to robots', 12 but, in the more contemporary literature, holding arms-manufacturing corporations accountable seems to be gaining more traction than far-fetched attempts to ascribe blame to machines. 13 Somewhat surprisingly, state responsibility, in turn, has been subject to rather cursory treatment. 14 A few scholars have asserted that the conduct of fully autonomous machines could not be attributed to states in the absence of direct and effective control over their conduct, 15 while some have reached the opposite conclusion, 16 often without a comprehensive examination of the relevant modalities. 17 The indifference towards state responsibility as a regime relevant in the context of LAWS seems to be ending, though. In the report of its 2022 session, the GGE on LAWS paid heed to its role by stressing that: 18

every internationally wrongful act of a state, including those potentially involving weapons systems based on emerging technologies in the area of LAWS, entails international responsibility of that state, in accordance with international law. … Humans responsible for the planning and conducting of attacks must comply with international humanitarian law.
This article elaborates on this acute GGE premise and demonstrates how the regime of state responsibility applies to the scenario most feared by the opponents of LAWS, that is, a mistaken attack on civilians committed by a state's armed forces using AI-enabled military technology. It demonstrates that while some legal aspects of AI in the military context remain to be settled, AI has not been developing in 'a regulatory vacuum', as frequently purported in the literature, 19 and that while the 'responsibility gap' exists, it is not where most commentators assume it is. The discussion proceeds as follows. Section 2 explains the concept of military AI and distinguishes between (i) already existing AI-powered weapon systems, referred to in this article simply as AWS, and (ii) future potential fully autonomous weapon systems (FAWS). 20 It further sets out an imaginary bifurcated scenario, albeit one modelled on already fielded projects, in which both types of system contribute to making civilians the object of an attack in the midst of an armed conflict. The following two sections examine the relevant primary rules (which establish the obligations incumbent on states) and secondary rules (which regulate the existence of a breach of an international obligation and its consequences). Section 3 inquires whether mistaken attacks on civilians violate the principle of distinction, and demonstrates that any challenges in holding states accountable for the harm caused by AI-powered systems will stem from pre-existing systemic shortcomings of the applicable primary rules, in this case IHL, rather than from the incorporation of AI as such. Section 4 examines how the secondary rules of state responsibility apply to wrongdoing caused by both today's AWS and future FAWS, should the latter ever be fielded. Section 5 concludes by offering some tentative solutions for the identified loopholes and indicating avenues for further research.
A few clarifications are required before venturing into the discussion. First, starting from the premise that individual and state responsibility are complementary and concurrent, 21 this article steers clear of delving into a discussion of which regime is preferable in relation to the harm resulting from the use of AI in the military context. 22 Second, it is not the intention of the article to reopen the controversial debate over the concept of 'international crimes of states' and the criminalisation of state responsibility to which it could arguably lead. 23 The following analysis pertains solely to what has been referred to as a 'plain, "vanilla" violation of IHL', 24 namely, a violation of the principle of distinction as such, not the war crime of intentionally directing an attack against civilians. 25 Finally, because of its limited scope, the article focuses on the specific problem of post facto attribution of an internationally wrongful act to a state and does not aspire to provide an exhaustive examination of all aspects of state responsibility in relation to AI in the military domain. 26 In particular, it does not discuss the pre-deployment obligations of states to ensure the compliance of a new weapon, means or method of warfare with IHL, which it leaves to other commentators.

Overview of military AI
In the absence of a universally accepted definition of AI, many contemporary analyses, position papers and policies on its role in the military domain adopt a simple understanding of AI as 'the ability of machines to perform tasks that normally require human intelligence – for example, recognizing patterns, learning from experience, drawing conclusions, making predictions, or taking action – whether digitally or as the smart software behind autonomous physical systems'. 28 While research into AI arguably started in the 1940s, 29 the 'AI hype', which began in the early 2010s and continues still, is often associated with three interlinked developments: (a) the increasing availability of 'big data' from a variety of sources; (b) improved machine learning algorithms and approaches; and (c) surging computer processing power. 30 An explosion of interest in the military applications of AI, already dubbed by some an 'AI arms race', 31 started sometime in the late 2010s, after China's State Council released a grand strategy to make the country a global AI leader by 2030, and President Vladimir Putin announced Russia's interest in AI technologies by stating that 'whoever becomes the leader in this field will rule the world'. 32 Unsurprisingly, soon thereafter the United States designated AI as one of the means that will 'ensure [the US] will be able to fight and win the wars of the future'. 33 Similar sentiment has been echoed among the members of the North Atlantic Treaty Organization. 34

With the increased buzz around military AI, fuelled by the Campaign to Stop Killer Robots, 35 public debate often overlooks that autonomy or automation 36 has been incorporated into various military systems for decades. 37 In fact, human-machine teaming has been a component of modern warfare at least since the First World War, 38 but '[t]raditionally, humans and automated systems have fulfilled complementary but separated functions within military decision making'. 39 What the recent advancements in AI technology facilitate is merely a more synchronised, or even integrated, functioning of humans and technology. This, in turn, allows for AI to be incorporated both into selected components of military planning and operations (such as logistics, maintenance, medical and casualty evacuation) and into complex command, control, communications, computers, intelligence, surveillance and reconnaissance (C4ISR) systems. 40 Furthermore, AI is already proving particularly useful in intelligence, where the ability to comb through large amounts of data and automate the search for actionable information may translate into immediate tactical advantage on the battlefield. As demonstrated by the 2021 Israeli Operation Guardian of the Walls, considered by some an 'AI-Enhanced Military Intelligence Warfare Precedent', AI-powered intelligence gathering and analysis may even lead to a new concept of operations. 41 It is against this background that the ongoing debate on LAWS, mentioned in the opening of this article, should be viewed.
It is worth noting at the outset that the discussions of LAWS in both political and scholarly fora have been obfuscated by overhyped narratives and misunderstandings of the existing technologies, both of which feed into the lack of a universally accepted definition of LAWS, sometimes also alarmingly called 'killer robots'. 42 A closer look at the paper trail of the GGE on LAWS shows, however, an emerging realisation of a conceptual (and, in the future, possibly also normative) distinction between: 43

• the existing systems incorporating various degrees of automation; and
• the still-to-be-fielded lethal machine learning-based systems capable of changing their own rules of operation beyond a predetermined framework.
For the sake of conceptual clarity, the first category will be referred to in the following analysis as autonomous weapon systems (AWS), and the second as fully autonomous weapon systems (FAWS).
The two most often referenced conceptualisations of AWS, by the International Committee of the Red Cross (ICRC) and the United States respectively, both reflect the ongoing fusion of AI into the military targeting cycle, and define AWS as weapon systems that, after being activated by a human operator, can 'select' and attack/engage targets without 'human intervention' (ICRC) 44 or 'further intervention by a human operator' (US). 45 Target 'selection' is often misunderstood in legal scholarship and perceived as the weapon's ability to choose targets freely, resulting in 'the removal of human operators from the targeting decision-making process'; 46 this is incorrect. Target selection is the process of analysing and evaluating potential threats and targets for engagement (attack); the final decision point before a target is destroyed is known in military parlance as engagement. The existing AWS utilise a variety of automated target recognition (ATR) systems, first developed back in the 1970s, which employ pattern recognition to identify potential threats, comparing emissions (sound, heat, radar, radio-frequency), appearance (shape and height) or other characteristics (trajectory, behaviour) against a human-defined library of patterns that correspond to intended targets. 47 Even the most advanced versions of AWS thus 'select' specific targets from a human pre-defined class or category. One of the most widely employed examples of such technology is the close-in weapon system (CIWS), developed to provide defence for military bases, naval ships (like the Dutch Goalkeeper or the American Phalanx) or other geographically limited zones (such as the Israeli Iron Dome or David's Sling). 48 A CIWS identifies incoming threats and determines the optimal firing time to neutralise them in a way that maximises protection in situations that require an almost immediate decision, and where threats come at a volume and speed that would overwhelm human capacity.
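To make the legal significance of 'selection' concrete, the following is a minimal sketch, in Python, of the library-matching logic that ATR systems of this kind employ. It is purely illustrative: the signature fields, classes and numbers are invented, and real systems fuse far richer sensor data. The point it demonstrates is that the system only ever reports matches against a human pre-defined library and never 'selects' outside it.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical, highly simplified ATR-style matching. All fields and
# thresholds are invented for illustration only.

@dataclass
class Signature:
    label: str           # human-assigned target class
    rcs_range: tuple     # radar cross-section (m^2), (min, max)
    speed_range: tuple   # speed (m/s), (min, max)

# The 'library' is defined by humans before deployment; the system
# cannot add new classes to it on its own.
TARGET_LIBRARY = [
    Signature('anti-ship missile', (0.01, 0.5), (250.0, 1000.0)),
    Signature('attack aircraft', (1.0, 15.0), (150.0, 600.0)),
]

def classify(rcs: float, speed: float) -> Optional[str]:
    """Return the matching library label, or None if unmatched."""
    for sig in TARGET_LIBRARY:
        lo_r, hi_r = sig.rcs_range
        lo_s, hi_s = sig.speed_range
        if lo_r <= rcs <= hi_r and lo_s <= speed <= hi_s:
            return sig.label
    return None  # contacts outside the library are never 'selected'

print(classify(0.1, 700.0))   # 'anti-ship missile'
print(classify(40.0, 250.0))  # None: no library pattern matches
```

The legally salient feature of such logic is that misidentification occurs when a protected person or object happens to exhibit observable characteristics that fall within a human-defined pattern – precisely the mistake-of-fact problem examined in Section 3.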
FAWS, in turn, are more elusive, which seems understandable given that they do not exist. As a prospective feature of weapon systems, 'full autonomy' seems to be conceptualised as a capability to change rules of operation beyond a predetermined framework coupled with the impossibility of terminating target engagement. 49 For many military experts, FAWS understood as such are pure fantasy, 50 but as the fear of such systems being fielded persists, this article entertains such a possibility for the sake of analysis.
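By contrast, the 'full autonomy' feared in the FAWS debate implies a decision rule that rewrites itself after deployment. The following deliberately caricatured sketch, with invented numbers, contrasts an AWS-style fixed rule with a FAWS-style self-modifying one; it is an assumption-laden illustration, not a description of any real system.

```python
# AWS-style rule: behaviour is fully determined before deployment.
FIXED_THRESHOLD = 250.0  # m/s, reviewed by humans and never altered

def aws_is_threat(speed: float) -> bool:
    return speed >= FIXED_THRESHOLD

class FawsRule:
    """FAWS-style caricature: the decision boundary drifts with
    post-deployment feedback, with no human-imposed bound on how
    far it may move beyond the predetermined framework."""

    def __init__(self, threshold: float = 250.0, rate: float = 0.1):
        self.threshold = threshold
        self.rate = rate

    def is_threat(self, speed: float) -> bool:
        return speed >= self.threshold

    def update(self, speed: float, was_threat: bool) -> None:
        # Online learning step: nudge the boundary so this example
        # would be classified 'correctly' next time.
        if was_threat and speed < self.threshold:
            self.threshold -= self.rate * (self.threshold - speed)
        elif not was_threat and speed >= self.threshold:
            self.threshold += self.rate * (speed - self.threshold)

rule = FawsRule()
for speed, was_threat in [(200.0, True), (180.0, True), (150.0, True)]:
    rule.update(speed, was_threat)
print(rule.threshold)  # drifted away from the value any human reviewed
```

The caricature captures why foreseeability is the crux: the AWS-style rule is fully auditable before fielding, whereas the FAWS-style rule's behaviour depends on data no reviewer has seen.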
While rarely addressed explicitly, the true concern about military AI in general, and weapon systems in particular, is their use directly against human targets, which according to some would lead to the 'responsibility gap', that is, a situation in which no entity could be held responsible for the wrong done. 51 In the context of targeting, the problem can therefore be conceptualised as follows: how does the 'outsourcing' of certain elements of the targeting process 52 to an AI-powered system affect accountability for the potential misidentification of a human target? In essence, the core issue is what happens when military AI contributes to inadvertently making civilians the object of an attack. Any work on anti-personnel CIWS is presumably classified, but imagine the following scenario, which exemplifies the problem.

State Alpha is fighting armed group Beta, which is under the effective control of another state, in a foreign territory. As part of this international armed conflict, Alpha operates a military base near the front line (in the said foreign territory), guarded by a CIWS capable of intercepting both material and human incoming threats. The CIWS is programmed to identify threats based on whether they are carrying a weapon, are within a defined perimeter of the base and exhibit indicators of hostile intent (such as failing to heed warnings or avoiding roads), and is programmed to exclude friendly forces (such as those wearing allied uniforms); a simplified sketch of this logic follows the scenario. One evening, Beta attacks a village located a few kilometres from the base. Many civilians flee from the village, carrying machetes for protection against the Beta fighters pursuing them. As these civilians approach the base after dark through the fields, the CIWS identifies them as a potential target. An Alpha commander orders the initiation of perimeter defence protocols, including lights and audio warnings. The fleeing villagers do not heed the warnings and continue to proceed towards the base. This is a crucial junction for the purposes of the ensuing analysis. The scenario is therefore bifurcated from this point onwards:

• the Alpha commander requests optical confirmation of potential threats from human sentries, who confirm the hostile intent of the approaching group; the commander thus relies on the CIWS's suggestion and authorises the engagement of the approaching group (Variant I);
• a fully autonomous CIWS, which does not require separate confirmation before engagement, fires at the villagers (Variant II).
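For illustration only, the scenario's perimeter-defence rules could be rendered as in the sketch below; every predicate and field name is hypothetical. What the sketch makes visible is that each condition tracks an observable proxy for hostility rather than protected status itself, which is why the fleeing villagers satisfy all of them.

```python
# Hypothetical rendering of the scenario's rule-based threat logic;
# all predicates are invented for illustration.

def is_threat(contact: dict) -> bool:
    if contact['wears_allied_uniform']:
        return False  # friendly forces are excluded by design
    return (
        contact['carries_weapon']           # a machete registers as a weapon
        and contact['inside_perimeter']
        and not contact['heeded_warnings']  # panicked villagers press on
        and contact['avoids_roads']         # read as hostile intent
    )

fleeing_villagers = {
    'wears_allied_uniform': False,
    'carries_weapon': True,       # machetes carried for protection
    'inside_perimeter': True,
    'heeded_warnings': False,     # warnings ignored while fleeing
    'avoids_roads': True,         # approaching through the fields
}

print(is_threat(fleeing_villagers))  # True: protected persons flagged
```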
In both variants the approaching civilians, who are not directly participating in hostilities, are the direct object of the attack. 55 Can either of the variants be classified as a violation of the principle of distinction? This is examined in the following section.


Mistaken attacks on civilians and the principle of distinction

Assessing whether a mistaken attack on civilians violates IHL turns on the determination of whether all feasible precautions were taken, and especially whether the target was verified before launching the attack. In other words, potential violations of the principle of distinction frequently go hand in hand with prima facie violations of the principle of precaution, aptly characterised as 'the procedural corollary to the general obligation to distinguish civilians and civilian objects from military objectives'. 56 Commercial airliner shoot-down incidents by surface-to-air missiles, such as the 2020 Iranian downing of Ukraine International Airlines Flight 752, the 2014 Malaysia Airlines Flight 17 shoot-down over Eastern Ukraine and the 1988 downing of Iran Air Flight 655 by the US, 57 are probably the most apparent examples, but air-to-surface attacks on civilians, by both manned and unmanned aircraft, happen even more frequently. 58 As the CENTCOM Investigation Report on the 2015 US air strike on the Médecins Sans Frontières Hospital in Kunduz exemplifies, unintentionally attacking civilians as a result of a failure to take all feasible precautions is commonly considered a violation of both the principle of precaution and the principle of distinction, often accompanied by a breach of the rules of engagement (RoE). 59

What about so-called 'honest and reasonable' 60 mistakes that result in attacks on civilians, much like in Variant I of the illustrative scenario outlined in Section 2? It is plausible for a commander to comply fully with the duty to take all feasible precautions (including verification of the target to the extent possible, given its urgency and ostensibly hostile intent), 61 follow the RoE to the letter, and yet end up making protected civilians, rather than combatants or civilians directly participating in hostilities, the object of the attack. Is such an attack a violation of the principle of distinction? It has been argued in scholarship that the existing state practice of providing compensation in such cases on an ex gratia basis, without admitting legal responsibility, suggests that honest and reasonable mistakes resulting in attacks on civilians 'are not regarded as violations of IHL'. 62 This argument seems feeble, as many obvious IHL violations are compensated on exactly the same basis, 63 making it impossible to determine, from the manner of payment alone, whether an incident was lawful. With state compensation policies fraught with ambiguities, it is worth looking at the issue from a more theoretical perspective and inquiring whether the principle of distinction includes an embedded subjective element (in which case at least some mistakes of fact would preclude the wrongfulness of its violations). 64
If not, is it a purely objective rule, the breach of which remains wrongful even if mistaken? Some scholars maintain that the principle of distinction, as opposed to the grave breach of wilfully attacking civilians, is expressed in clearly objective terms. 65 The explicit inclusion of a mens rea requirement in the listed grave breaches, so the argument goes, confirms the objective nature of the basic principle of distinction as worded in AP I, Articles 48, 51(2) and 52(2), AP II, Article 13(2), and Rule 1 of the ICRC Customary IHL Study. 66 Others assert that 'the concept of directing attacks implies some level of intent, and that an honest and reasonable mistake of fact could negate that element of intent'. 67 It is the latter position that finds support, even if only modest, in jurisprudence and other forms of state practice. 68
Non-criminal case law on the nature of the principle of distinction remains meagre, 69 but in at least two separate cases the adjudicating bodies held quite unequivocally that the principle of distinction indeed includes a subjective element. In its 2005 Partial Award on the Western and Eastern Fronts, the Eritrea-Ethiopia Claims Commission (EECC) found that: 70

[a]lthough there is considerable evidence of the destruction of civilian property by Eritrean shelling, … the evidence adduced does not suggest an intention by Eritrea to target Ethiopian civilians or other unlawful conduct. … [T]he Commission does not question whether this damage did in fact occur, but rather whether it was the result of unlawful acts by Eritrean forces, such as the deliberate targeting of civilian objects or indiscriminate attacks.
This view was echoed in the 2017 UK High Court ruling on the legality of arms exports to Saudi Arabia, in which the Court held that '[t]he "Principle of Distinction" prohibits intentional attacks against civilians'. 71 While the latter can arguably be considered obiter dictum, 72 the EECC's pronouncements are particularly pertinent to the issue at hand, given the Commission's exceedingly rare mandate to adjudicate state responsibility for IHL violations. 73 Furthermore, some state practice reads intent into the principle of distinction, as reflected in the official positions of, inter alia, Israel, 74 New Zealand 75 and the United States, 76 and in the lack of apparent contrary state practice and opinio juris.

Conceptually, not treating honest and reasonable mistakes that result in targeting civilians as violations of IHL goes hand in hand with the so-called 'Rendulic rule' 77 (assuming it applies to the determination of a breach in non-criminal contexts) and with the ancient legal maxim impossibilium nulla obligatio est. 78 Given all of the above, it is defensible to assert that directing attacks against civilians on the basis of an honest and reasonable mistake of fact is not a violation of the principle of distinction. Assuming, for the sake of analysis, that in Variant I of the illustrative scenario no other precautions could feasibly have been taken, IHL was not breached. Without a breach, no responsibility can arise; no further analytical finesse is required. Note, however, that the resulting 'responsibility gap' is anchored in IHL and is in no way affected by the introduction of the AI element into the equation. Whether the erroneous information as to the protected status of the target, and the ensuing mistake, can be traced back to the most advanced AI-powered system or to human sentries is legally irrelevant; what matters is whether the commander complied with the obligation to verify the target before engagement. This is the core difference between Variant I and Variant II. In the latter, quite clearly not everything practically feasible was done to verify the target, and a breach of both the principle of precaution and the principle of distinction did take place. Is that breach, however, attributable to state Alpha, given that the attack was executed entirely independently by a FAWS? After briefly introducing the regime of state responsibility, the next section substantiates why the answer is in the affirmative.


State responsibility for the conduct of AWS and FAWS

Under the Articles on Responsibility of States for Internationally Wrongful Acts (ARSIWA), 79 an internationally wrongful act of a state, that is, conduct which can consist of either action or omission, has two elements: first, it needs to be attributable to the state under international law; second, it must constitute a breach of an international obligation of that state. 83 These seemingly straightforward rules have a variety of consequences.
In particular, unlike many domestic liability regimes, the international responsibility of states is not premised on causation. 84 Instead, the crux of the whole regime lies in the rules of attribution, conceptualised as 'a pure result of the law', pursuant to which 'a will or an act are attributable to a given subject only because a legal provision says so'. 85

How do all these principles apply to Variant II of the scenario described in Section 2, that is, a breach of the principles of precaution and distinction resulting from conduct executed entirely independently by a FAWS? Is it attributable to state Alpha, making it internationally responsible for the internationally wrongful act of killing the civilians? The answer is simply 'yes'. Despite (F)AWS being frequently (and erroneously) anthropomorphised, under the plain reading of IHL it is those who 'plan or decide upon the attack' who are obliged to comply with the rules relating to the conduct of hostilities. As rightly pointed out in recent scholarship, if those who decide upon the attack 'cannot foresee that an AWS will engage only legal targets, then they cannot meet their obligations under the principle of distinction (API, article 57(2)(a)(i))'. 89 It is the commander who is 'ultimately responsible for accepting risk' 90 and who 'in all cases … has the responsibility for authorizing weapon release in accordance with IHL'. 91 In Variant II, it was therefore the conduct of the Alpha commander, namely the employment of a FAWS in the first place, that was wrongful. Given that the commander undoubtedly constitutes an organ of state Alpha, their actions are attributable to that state, whether or not they exceeded their authority or contravened instructions. 92 This suffices for establishing state Alpha's responsibility under ARSIWA; no culpability or causal link (between the physical action and the harm resulting from it) needs to be proved. The existing international law applicable to the combat use of (F)AWS clearly reflects a premise often recalled by the critics of the alleged 'responsibility gap', namely that '[n]o matter how independently, automatically, and interactively computer systems of the future behave, they will be the products (direct or indirect) of human behaviour, human social institutions, and human decision'. 93

Such an approach is obviously factually correct but, as the technology advances, it is worth reflecting on whether the commander and their conduct should remain the necessary link between the wrong caused by increasingly autonomous weapons and the responsibility of the state. As long as state responsibility hinges on the wrongful conduct of a commander, the latter would have to face disciplinary or even criminal charges for breaching IHL. In some cases, especially when the deployed FAWS, having passed the Article 36 AP I weapon review, was particularly complex and hence difficult to understand, 94 and was deployed in a combat situation similar in all relevant respects to those for which it was tested but nonetheless malfunctioned, it seems unfair to have the commander face the military justice system. 95 The intuitive unfairness of such a burden raises the question of whether wrongful conduct resulting from FAWS deployment could be attributed to the fielding state in another way, as it is beyond doubt that '[a] State should always be held accountable for what it does, especially for the responsible use of weapons which it delegated to the armed forces'. 96

Can the state simply be responsible for the weapons it fields? Interestingly, there is currently no general framework under international law that regulates state responsibility (or liability) for its inanimate objects; only self-contained, specific regimes exist, applicable, for example, to space objects 97 or to transboundary harm arising out of hazardous activities. 98 There seems to be, however, a possible alternative avenue under ARSIWA that would allow for attributing the wrongful conduct of a FAWS (such as the targeting of civilians) to the state fielding it, although it has never been utilised in practice. The following discussion is thus entirely de lege ferenda.
A careful reading of ARSIWA and its Commentaries indicates that a FAWS could be construed as a state agent. 99 The category of 'agent', while nowhere to be found in ARSIWA themselves, is mentioned frequently in the Commentaries (albeit without a definition), usually in the phrase 'organs or agents'. 100 The term 'agent', often used in older arbitral awards of the early twentieth century, 101 was revived by the International Court of Justice (ICJ) in the Reparation for Injuries case, in which the Court confirmed the responsibility of the United Nations for the conduct of its organs or agents, and underlined that it: 102

understands the word 'agent' in the most liberal sense, that is to say, any person who, whether a paid official or not, and whether permanently employed or not, has been charged by an organ of the organization with carrying out, or helping to carry out, one of its functions, in short, any person through whom it acts.
That definition was admittedly created with a human agent in mind, but there is nothing in it, either verbatim or analytically, that would prevent its application to non-human persons, or simply objects, whether powered by AI or not. 103 Such an interpretation appears to be a relatively safe way forward, for two reasons. First, it does not compromise the integrity and coherence of the fundamental pillars on which ARSIWA are based. Second, it should not be controversial among states, which would simply bear responsibility for the wrongful conduct of their own objects, with the principles of attribution regulating the responsibility of state organs applicable mutatis mutandis. In other words, it is suggested here that Article 4 of ARSIWA could be read to refer to 'organs or agents' and, as such, allow for the attribution of wrongful conduct caused by FAWS to the fielding state. Within the context of IHL, such an interpretation can be read in concert with Common Article 1 to the Geneva Conventions (I) to (IV), 104 the internal compliance dimension of which is firmly recognised as customary.
It is crucial to underline that the solution proposed herein, namely extending the category of agents to objects such as FAWS, is strictly limited to the law of international responsibility of states for internationally wrongful acts. 105 In other words, viewing FAWS as agents is not meant to imply that they themselves could become subjects of international law, or be awarded some kind of moral agency in the ethical sense. 106

Tentative conclusions and way forward
Military AI and the delegation of tasks traditionally performed by humans to self-learning machines create new challenges in a variety of fields, including on the international law plane. The goal of this analysis was to outline a counter-argument to those who lament the 'responsibility gap' that allegedly results from the employment of military AI on the battlefield. As demonstrated in the preceding sections, neither contemporary applications of AI nor their future 'truly autonomous' incarnations create any major conceptual hurdles under the law of state responsibility. The minor tweak to that law suggested here is intended merely to start the conversation on whether it is time to recognise that, in some cases, states should be internationally responsible for their objects, in a way similar to their responsibility for their organs. None of the above, however, should be read as implying that machines themselves can ever be held accountable. Nor does this article suggest that conceptualising agency in the realm of international responsibility as including objects is straightforward or constitutes a ready-made solution, either in general or for AI-enabled technologies in particular. On the contrary, further research is needed on at least three aspects.
First, it needs to be scrutinised what the mutatis mutandis application to non-human agents of the attribution principles regulating responsibility for state organs would entail.
Second, and relatedly, it is important to bear in mind that ARSIWA are residual in nature and, as such, can be displaced by a lex specialis regime, should one emerge. 107 Leaving aside the broader question of the normative desirability of lex specialis regimes of attribution, 108 an interesting inquiry could also be launched into whether state responsibility for the wrongdoing of its objects, modelled on either the Latin concept of qui facit per alium facit per se 109 or the strict liability for damage caused by animals that is present in many domestic jurisdictions, 110 could be conceptualised as a general principle of law within the meaning of Article 38(1)(c) of the ICJ Statute. 111

Third and finally, more comprehensive research into the legal significance of mistakes of fact for the fundamental principles of IHL is needed. The recent discourse remains so preoccupied with LAWS and the final stages of lethal targeting that other uses of AI in military practice are largely overlooked. In particular, the growing incorporation of AI into intelligence, surveillance and reconnaissance (ISR) technologies, crucial for commanders' situational awareness and hence the proper application of all conduct of hostilities rules, receives surprisingly little attention in legal scholarship. Faulty or incomplete intelligence is the most frequent cause of incidents resulting in outcomes that IHL was established to prevent. Should incorrect intelligence provided by an AI-powered ISR system be classified as a mistake of fact or as a technical error? Or is this perhaps a distinction without a difference within IHL, the two categories being legally the same? Harm that results from a technical error is traditionally not considered a breach of IHL, but treating all cases of malfunction of military AI systems wholesale as 'technical errors' could be troublesome from a policy perspective. 112 Alarming as LAWS and the idea of robots killing people can be, 113 it is worth recognising that military AI goes beyond the trigger-pulling phase of the targeting process and raises important issues which have concrete consequences for the implementation of IHL. It might therefore be preferable to leave the science fiction of FAWS to H(B)ollywood directors and focus on the still-unsettled core IHL issues.