State responsibility in relation to military applications of artificial intelligence

Abstract This article explores the conditions and modalities under which a state can incur responsibility in relation to violations of international law involving military applications of artificial intelligence (AI) technologies. While the question of how to attribute and allocate responsibility for wrongful conduct is one of the central contemporary challenges of AI, the perspective of state responsibility under international law remains relatively underexplored. Moreover, most scholarly and policy debates have focused on questions raised by autonomous weapons systems (AWS), without paying significant attention to issues raised by other potential applications of AI in the military domain. This article provides a comprehensive analysis of state responsibility in relation to military AI. It discusses state responsibility for the wrongful use of AI-enabled military technologies and the question of attribution of conduct, as well as state responsibility prior to deployment, for failure to ensure compliance of AI systems with international law at the stages of development or acquisition. Further, it analyses derived state responsibility, which may arise in relation to the conduct of other states or private actors.


1. Introduction
The question of how to attribute and allocate responsibility in case of wrongful conduct is one of the central contemporary challenges of AI. Due to their inherent characteristics, notably in terms of autonomy and unpredictability, advanced AI systems typically raise difficult issues of accountability, which have long been discussed in the fields of law, ethics of technology, and computer science. 1 Responsibility in relation to AI remains a prevalent issue in scholarly and policy debates, as advancements in AI research and the increasingly widespread use of AI technologies have highlighted the broad societal and policy implications of AI. 2 In the military context, discussions surrounding AI technologies have primarily focused on autonomous weapons systems (AWS), colloquially referred to as 'killer robots'. International legal scholarship on the topic has questioned whether AWS could be used in compliance with international humanitarian law (IHL), in particular with the principles of proportionality and distinction, and whether the deployment of AWS could lead to a responsibility gap at the individual level. 3 By contrast, the question of state responsibility under international law 4 in relation to AWS and other AI-based technologies has remained relatively underexplored. Several authors have touched upon aspects of state responsibility and indicated that it is a useful mechanism to ensure compliance with international law and accountability in relation to AI, 5 but have not extensively analysed how the law of state responsibility applies to violations of international law caused with AI-enabled military technologies.
Although relatively underexplored in scholarship, state responsibility has an important role to play as part of an overall framework of accountability in relation to military AI. Indeed, upholding the responsibility of collective actors such as states acknowledges the structural forces that drive the development and use of AI. 6 State responsibility is a particularly important perspective in relation to military AI, because states constitute the primary structures within which such AI systems are developed, regulated, and deployed. While some authors have argued that state responsibility is not a satisfactory way to address accountability in relation to AWS, 7 or conversely that state responsibility is 'preferable' to individual liability, 8 it comports some specific features that are particularly relevant to addressing accountability in relation to AI. One is that state responsibility entails not only an obligation to provide reparation, but also an obligation to cease the wrongful conduct and to offer appropriate assurances and guarantees of non-repetition. 10 Another is that state responsibility can arise prior to the deployment of military AI, as the framework also applies at the stages of development or procurement of military technologies. 11 State responsibility is thus relevant not only for ex post liability for wrongful conduct on the battlefield, but more generally for the international governance of AI throughout its lifecycle.
The present article sets out to comprehensively address issues of state responsibility that arise as states increasingly integrate AI technologies in their military apparatus. The overarching research question is to determine under which circumstances a state can incur responsibility in relation to violations of international law involving military AI. Although the article discusses the topic in relation to military AI, many of the arguments advanced are relevant more broadly to the use and regulation of AI by states beyond the military context. Before delving into the analysis of state responsibility, Section 2 provides some necessary background on AI technologies and their military applications. Sections 3 to 5 explore three dimensions of state responsibility in relation to military AI. Section 3 addresses attribution of conduct involving AI, questioning whether and on which basis wrongful conduct occurring when AI-enabled systems are deployed on the battlefield can be attributed to the state for the purpose of international responsibility. Analysing the function and conceptual basis of attribution of conduct against the background of scholarship on the interaction of human agents and AI technologies, it proposes a framework for attribution of conduct involving AI. Section 4 analyses responsibility prior to deployment, at the stage of development or procurement of AI. It argues in favour of a compliance-by-design approach, under which states incur responsibility if they fail to ensure that AI systems are developed in compliance with their international obligations and designed in a way that embeds these obligations. Section 5 completes the analysis by discussing grounds of derived state responsibility, which can potentially arise in relation to the conduct of other states or private actors which develop or deploy military AI.
Section 6 offers concluding remarks and perspectives on how to operationalize the legal framework of state responsibility in relation to military AI.
2. Artificial intelligence in the military context, beyond autonomous weapons systems

AI can be defined as computer systems able to perform tasks that traditionally only humans could perform, such as rational reasoning, problem-solving, and decision-making. It is based on algorithms, which are sets of mathematical instructions aimed at performing a specific task. 12 AI technologies are being used in areas ranging from video games, finance, and online commercial targeting, to healthcare, public welfare policy, border control, and criminal justice. One of the most controversial applications of AI is in the military sphere. Fuelled by the popular imaginary and fears of machines overtaking the battlefield, a broad public debate has been taking place on AWS, 13 commonly defined as weapons systems 'that, once activated, can select and engage targets without further intervention by a human operator'. 14 The key characteristic of AWS, which is at the core of critical debates on autonomy in weapons and human control over AWS, is the inclusion of AI technology into weapons systems. 15 Indeed, AWS in their common understanding are weapons systems which embed algorithms specifically aimed at identifying military targets. However, current and potential applications of AI in the military domain are not limited to the relatively narrow category of weapons systems. As will be further elaborated on below, AI technologies are and could be integrated throughout the realm of military activities and in support of varied types of military functions. The term 'military AI' as used in this contribution includes AI-enabled weapon systems as well as various other potential uses of AI in the military.

10 See ARSIWA, supra note 4, at Arts. 30 and 31. The legal consequences of state responsibility arise irrespective of a claim invoking responsibility, see ARSIWA commentaries, supra note 4, at 91 (Commentary to Art. 31 ARSIWA, para. 4). 11 See Section 4, infra. 12 S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach (2010), at 1-16; A. Rubel, C. Castro and A. Pham, Algorithms and Autonomy (2021), at 8. 13 See, in particular, the multilateral debates taking place in the context of the Group of Governmental Experts (GGE) on lethal autonomous weapons systems (LAWS) of the United Nations Convention on Certain Conventional Weapons (CCW), and the work of the 'Campaign to Stop Killer Robots', a coalition of non-governmental organizations actively lobbying for a ban on AWS.
AI technologies can be broadly classified under two methodological categories. Traditionally, AI has been developed with rule-based algorithms, where each instruction is explicitly programmed in lines of code, under the format 'if, then'. For instance, to develop an algorithm that can recognize images of apples, the programmer would encode each and every detailed characteristic and feature that identifies an apple (e.g., its shape, colour, how it differentiates from a pear or a peach, etc.). This technique has the advantage of being fully explainable and predictable, but it is limited in what can be achieved as it is impossible to define and code every possibility. 16 More recently, data-driven machine learning techniques have been developed to increase and broaden the capabilities of AI systems. Instead of explicit instructions, machine learning algorithms are developed by providing a computer with large sets of data (in our example, images of apples and other fruits) and letting the system identify potential correlations through a process known as training. Applying statistical analysis to the data provided, the algorithm identifies and generalizes common features and recurring patterns, and thereby can 'learn' by itself how to recognize apples. 17 Machine learning techniques have enabled major breakthroughs in recent years, with AI systems able to perform a growing range of tasks including object or facial recognition, language processing, strategic reasoning, and predictive analysis.
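The contrast between the two methodological categories can be illustrated with a minimal sketch in code. The fruit features, thresholds, and training points below are invented for illustration and are far simpler than any real vision system; the 'learning' step is a bare-bones nearest-centroid classifier standing in for modern machine learning.

```python
# Toy contrast between the two families of AI techniques described above:
# a hand-coded rule-based classifier and a data-driven "learned" one.
# All feature names and numbers are illustrative inventions, not a real system.

def rule_based_is_apple(colour: str, roundness: float) -> bool:
    """Every criterion is explicitly programmed as an 'if, then' rule."""
    if colour not in ("red", "green", "yellow"):
        return False
    if roundness < 0.8:   # apples are close to spherical
        return False
    return True

# A minimal "machine learning" counterpart: instead of rules, we supply
# labelled examples and let the program generalize (here, nearest centroid).
training_data = [
    # (roundness, sweetness) -> label
    ((0.90, 0.70), "apple"),
    ((0.95, 0.60), "apple"),
    ((0.50, 0.80), "pear"),
    ((0.55, 0.90), "pear"),
]

def train_centroids(data):
    """'Training': average the feature vectors observed for each label."""
    sums, counts = {}, {}
    for features, label in data:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def learned_classify(centroids, features):
    """Predict the label whose centroid is closest to the input features."""
    def dist(lbl):
        return sum((a - b) ** 2 for a, b in zip(features, centroids[lbl]))
    return min(centroids, key=dist)

centroids = train_centroids(training_data)
print(rule_based_is_apple("red", 0.9))            # True
print(learned_classify(centroids, (0.92, 0.65)))  # apple
```

The rule-based function behaves exactly as coded and nothing more; the learned classifier generalizes from examples, with no explicit rule anywhere stating what an apple is.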
However, AI systems based on machine learning come with a number of novel and specific characteristics that raise critical ethical, legal, and policy challenges. First, machine learning models produce results that are usually not explainable. This is known as the 'black box' dilemma, whereby AI systems produce results that cannot be explained or justified by either developers or users. 18 Second, AI systems, by design, operate with a degree of autonomy that can escape direct human control. Advanced self-learning algorithms further complicate the picture as they can exhibit unpredictable or unexpected behaviour. 19 Third, AI operates at a speed and scale that goes beyond human cognitive possibilities. It can process incommensurably vast amounts of data in a split second, so that, even if an AI system is formally under human control or supervision, its results are not intelligible to human operators. This characteristic of AI systems is known to lead to the issue of automation bias: when a human operator is vested with the formal role of approving or rejecting recommendations made by a decision-support algorithm, they do not have the genuine capacity to evaluate the suggested outcome and therefore tend to simply follow the system's recommendations. 20 Fourth, technology remains the product of human choices, 21 and decisions made at the stage of AI design and development have significant implications. 22 System requirements define the parameters within which an AI system operates and how it interacts with human operators. For instance, AI systems used for reasoning and decision-making are developed and trained against goals and values that are specified by programmers. 23 It is also from choices made during the design stage (e.g., which datasets are considered representative) that issues of discriminatory bias can arise. 24 Fifth, it is important to understand that AI technology is not intelligent in the way that humans are.
Algorithms do not understand the results they produce, in the sense that they cannot understand what an apple is, and cannot interpret data and results. 25 The only thing AI can 'see' is pixels, and what it perceives as an important characteristic might be insignificant or unreliable to a human observer. 26 This is also why image recognition algorithms are subject to spoofing through adversarial attacks that slightly modify an image in order to lead the algorithm to make mistakes. 27 Applied to the military, current advancements in AI research can be used for navigation, for surveillance and recognition, in automated defence systems, to analyse satellite or drone imagery, to perform terrain, object, or facial recognition during operations, to estimate the potential damage of an attack, to enable robotic swarming, to detect and predict adversary conduct, anomalous activities, potential risks, and possible counterstrategies, and to provide decision support at the operational but also strategic level. The potential applications of AI in the military go far beyond purely operational matters involving targeting and offensive lethal operations, and AI technologies could be deployed throughout military planning and operations. 28
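The adversarial-attack phenomenon mentioned above can be made concrete with a toy sketch: for a simple linear classifier, shifting every 'pixel' by a small amount against the learned weights flips the decision, even though no single pixel changes by more than a barely perceptible amount. The weights, bias, and image values below are invented for illustration; real attacks target deep networks, but the underlying mechanism is analogous.

```python
# Minimal illustration of an adversarial attack on a linear classifier:
# a tiny per-pixel change flips the decision. All values are invented.

def score(weights, bias, pixels):
    """Linear decision function: positive score means class 'apple'."""
    return sum(w * p for w, p in zip(weights, pixels)) + bias

def adversarial(pixels, weights, epsilon):
    """Shift every pixel by at most epsilon against the classifier's weights
    (the sign trick used by gradient-based attacks on linear models)."""
    return [p - epsilon * (1 if w > 0 else -1) for p, w in zip(pixels, weights)]

weights = [0.4, -0.2, 0.3, 0.5]   # learned weights (illustrative)
bias = -0.5
image = [0.6, 0.2, 0.5, 0.4]      # "pixels" in [0, 1]

clean = score(weights, bias, image)         # small positive: 'apple'
attacked = adversarial(image, weights, epsilon=0.05)
perturbed = score(weights, bias, attacked)  # flips negative: not 'apple'

print(clean > 0, perturbed > 0)   # True False
```

Each pixel moves by only 0.05, a change a human observer would dismiss as noise, yet the classification is reversed: the features the model relies on are not the ones a human would consider meaningful.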

Recent military practice offers a number of examples of the integration of AI in diverse and complex ways. Project Maven is an initiative of the US Department of Defense that seeks to use AI to alleviate the cognitive burden of mundane repetitive tasks such as video analysis. It uses machine learning to process drone footage, produce actionable data, and enhance military decision-making. 29 AI can also be used to automate or delegate certain tasks that would be difficult or dangerous for human soldiers. For instance, the Maritime Mine Countermeasures (MMCM) programme seeks to develop an AI-enabled unmanned underwater anti-mine warfare system that can detect, identify, and neutralize naval mines. 30 Another example is the ViDAR Maritime system, which is used to perform wide-area optical search to detect and classify objects at sea. 31 Other military applications of AI often seek to enable more rapid (and allegedly more accurate) decision-making in a time-critical environment. For instance, Project Convergence is a multi-platform project aimed at integrating AI in battlefield management systems. It consists of drones feeding real-time reconnaissance data to network algorithms, which combine this data with other information to update digital maps, identify and prioritize potential targets, match threats to the best weapon, and send targeting information to fire control systems. The system thereby dramatically increases the speed at which ground operators can act on the basis of a wide range of data. 32 While such decision-support systems provide recommendations that are formally subject to approval by a human operator, the characteristics of AI described above indicate that human oversight over complex AI systems may be superficial in practice. 33 Other potential applications of AI include predictive algorithms in support of decision-making regarding detention 34 or combat medical triage, 35 facial recognition algorithms to help identify war casualties, 36 and the use of AI to generate deepfakes as part of information warfare. 37

30 Thales Group, 'The Maritime Mine Countermeasures Programme: The French and British Navies Blaze the Trail Towards a Global First with Their Revolutionary Autonomous System', 13 September 2019, available at www.thalesgroup.com/en/worldwide-defence/naval-forces/magazine/maritime-mine-countermeasures-programme-french-and-british; X. Vavasseur, 'French Navy's SLAMF Unmanned Mine Warfare System to be Qualified in December', Naval News, 10 August 2019, available at www.navalnews.com/naval-news/2019/08/french-navys-slamf-unmanned-mine-warfare-system-to-be-qualified-in-december/. 31 'Sentient Vision Systems selected for US FCT contract', Australian Defence Magazine, 16 March 2022, available at www.australiandefence.com.au/defence/sea/sentient-vision-systems-selected-for-us-fct-contract. 32 J. Lacdan, 'Project Convergence Aims to Accelerate Change in Modernization Efforts', Army News Service, 11 September 2020, available at www.army.mil/article/238960/project_convergence_aims_to_accelerate_change_in_modernization_efforts; S. J. Freedberg Jr, 'Target Gone in 20 Seconds: Army Sensor-Shooter Test', Breaking Defense, 10 September 2020, available at www.breakingdefense.com/2020/09/target-gone-in-20-seconds-army-sensor-shooter-test/; S. J. Freedberg Jr, 'Kill Chain In The Sky With Data: Army's Project Convergence', Breaking Defense, 14 September 2020, available at www.breakingdefense.com/2020/09/kill-chain-in-the-sky-with-data-armys-project-convergence/. 33 See Section 3.2, infra. 34 D. A. Lewis, 'AI and Machine Learning Symposium: Why Detention, Humanitarian Services, Maritime Systems, and Legal Advice Merit Greater Attention', Opinio Juris, 28 April 2020, available at www.opiniojuris.org/2020/04/28/ai-and-machine-learning-symposium-ai-in-armed-conflict-why-detention-humanitarian-services-maritime-systems-and-legal-advice-merit-greater-attention.
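The concern that formal human approval of algorithmic recommendations may add little substantive control can be illustrated with a deliberately simple simulation. All figures below (the error rate of the hypothetical system, and the probabilities that a reviewer catches an error under careful versus rushed review) are invented assumptions for illustration, not empirical estimates of any real programme.

```python
# Hedged sketch of the generic decision-support loop described above: an
# algorithm emits recommendations and a human operator formally approves
# them. All numbers are illustrative assumptions, not data from any system.
import random

random.seed(0)

SYSTEM_ERROR_RATE = 0.10    # fraction of recommendations that are wrong
CATCH_PROB_GENUINE = 0.80   # chance a careful reviewer spots an error
CATCH_PROB_RUSHED = 0.05    # chance under severe time pressure

def simulate(n, catch_prob):
    """Return the fraction of wrong recommendations that still get approved."""
    approved_errors = 0
    errors = 0
    for _ in range(n):
        recommendation_is_wrong = random.random() < SYSTEM_ERROR_RATE
        if recommendation_is_wrong:
            errors += 1
            operator_catches_it = random.random() < catch_prob
            if not operator_catches_it:
                approved_errors += 1
    return approved_errors / errors

print(f"genuine review:  {simulate(100_000, CATCH_PROB_GENUINE):.2f}")
print(f"rubber-stamping: {simulate(100_000, CATCH_PROB_RUSHED):.2f}")
# Under rushed review, nearly all system errors pass through the 'human
# in the loop' unchanged: formal approval adds little substantive control.
```

The point of the sketch is structural, not numerical: whenever review capacity collapses, the error rate of the overall human-machine system converges on the error rate of the algorithm alone, which is why formal approval authority does not by itself amount to meaningful control.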
Military applications of AI are thus broad and diverse, with AI integrated in a highly distributed and subtle manner that does not always fit the narrative of AWS. Rather than a single AI system attached to a specific platform for a specific task, AI is set to become integrated in a variety of military tasks and at multiple stages of the decision-making chain. As a result, it becomes difficult to draw a clear line between 'lethal' and 'non-lethal' AI applications. For instance, data generated through an AI system initially used for mere surveillance might later become part of a targeting process.
In view of the unique characteristics of AI technologies, and the diverse and increasing use of these technologies in the military context, the following sections discuss the implications of AI in terms of state responsibility under international law.
3. State responsibility in the deployment of artificial intelligence systems: Attribution of wrongful conduct

When the deployment of AI systems on the battlefield results in violations of international obligations, 38 the responsibility of the state can be engaged if it is demonstrated that the wrongful conduct in question is attributable to the state. 39 The notion of attribution of conduct is a cornerstone of the law of state responsibility. Essentially, attribution of conduct consists in attaching to the state the actions or omissions of individuals or entities acting on its behalf. 40 Indeed, it is an 'elementary fact that the State cannot act of itself', 41 and 'what constitutes an "act of the State" for the purposes of State responsibility' 42 necessarily results from the actions or omissions of human beings acting on behalf of the state. In order to develop a framework for attribution of conduct involving AI systems (Section 3.3), the following paragraphs embark on an analysis of the human dimension of attribution of conduct (Section 3.1), read against the background of scholarship on the interaction of human agents and AI technologies (Section 3.2).

3.1. The human dimension of attribution of conduct in the ARSIWA
The law of state responsibility unequivocally hinges upon actions or omissions by human beings. Attribution is based on the conduct of 'human beings', 43 'agents and representatives', 44 'persons', 45 or 'individuals'. 46 While the ARSIWA also occasionally refer to the conduct of a 'group of persons', 47 'corporations or collectivities', 48 or 'entities', 49 these concern collective entities that are constituted by human beings. The law of state responsibility does not envisage conduct other than the conduct of human beings, acting individually or collectively. 50 Under the ILC framework, the existence of human conduct is therefore a precondition for state responsibility. A harmful outcome that does not originate in an action or omission by one or more human being(s) cannot engage responsibility.

38 This section focuses on secondary rules of responsibility and does not address in detail the primary norms that can be potentially violated when using military AI technologies. Relevant norms are primarily found in the law of armed conflict and complementarily or subsidiarily in other regimes such as international human rights law. 39 See ARSIWA, supra note 4, Art. 2. 40 See ARSIWA commentaries, supra note 4, at 36 (Commentary to Art. 2, para. 12). 41 See ibid., at 35 (Commentary to Art. 2, para. 5). 42 See ibid., at 35 (Commentary to Art. 2, para. 5). 43 See ibid., at 38 (Commentary to Ch. II of Part I, para. 2) and 35 (Commentary to Art. 2, para. 5). 44 See ibid., at 35 (Commentary to Art. 2, para. 5). 45 Ibid. 46 See ibid., at 40 (Commentary to Art. 4, para. 1). 47 See ARSIWA, supra note 4, at Arts. 8 and 9. 48 See ARSIWA commentaries, supra note 4, at 38 (Commentary to Ch. II of Part I, para. 2). 49 See ARSIWA, supra note 4, at Art. 5. 50 See ARSIWA commentaries, supra note 4, at 40 (Commentary to Art. 4, para. 1) and 49 (Commentary to Art. 8, para. 9).
Pushing this line of reasoning further, it can be argued that attribution relies on a causal link between actions or omissions by a human being and the occurrence of a breach of international law. Responsibility can be engaged if human conduct (that is attributable to the state) caused or contributed to causing a breach. If the operator of an AI system has no control over the outcome, if the machine operates to such a large extent autonomously that the actions or omissions of human operators are not causally linked to the breach, it can be argued that there is no human conduct on the part of the operator to form the basis of attribution. It is therefore important to assess whether violations of international law caused through the use of AI systems can be traced back to human actions and omissions, and in turn to the state.
Some authors consider that the question of attribution in the context of AI is straightforward, 51 since, under the law of state responsibility, any and all conduct of state organs (e.g., armed forces) performed in their official capacity is attributable to the state, 52 whether or not AI is used. However, the critical aspect when it comes to AI and state responsibility is precisely whether there exists a 'human conduct' in the first place. Even though, unlike individual criminal responsibility for war crimes and other violations of IHL, 53 state responsibility is objective in nature and does not require demonstrating subjective intent or fault on the part of the state organ, 54 it relies nonetheless on a human element, which might lead to specific challenges in relation to AI technologies.
Since the operation of attribution of conduct relies on human conduct in order to attach violations of international law to a state, the question is whether and how the characteristics of AI systems, including autonomy, speed, unpredictability, and opacity, could affect or complicate the operation of attribution of conduct. In other words, what counts as 'human conduct' for the purpose of attribution when AI is involved?

3.2. What counts as 'human conduct' in socio-technical entanglements
The question of what constitutes (human) 'conduct' is not directly addressed in the law of state responsibility. 55 The identification of human conduct in relation to a breach is usually self-evident from the facts. However, the use of complex, decentralized, and increasingly autonomous technologies such as AI reconfigures the question, and requires us to analyse what qualifies as 'human conduct' in the context of state responsibility.
Decades of studies in philosophy and ethics of technology have demonstrated that AI technologies can affect human autonomy and reduce human control over outcomes. 56 In the debates on AWS, this concern is reflected in the idea that AWS should remain under 'meaningful human control'. 57 In particular, AI systems that integrate machine learning algorithms, which generate predictions or recommendations on the basis of inference and patterns in data at a scale and speed that the human brain cannot comprehend, 58 pose significant challenges. Direct operators can have limited control over the technology they use, while developers or decision-makers can be seen as having a too far-removed connection with possible harm occurring during deployment. 59 Besides, it is increasingly recognized that humans and technology interact as part of complex socio-technical entanglements, where the distinction between human conduct and machine conduct is not clear cut. 60 Specifically, the autonomous capabilities of AI systems can affect human behaviours and reshape human agency in relation to technological objects. 61 As a result, there is a strong argument to be made that certain conduct resulting from the use of AI systems does not qualify as an action or omission of a human operator.

54 See ARSIWA commentaries, supra note 4, at 34 (Commentary to Art. 2, para. 3); Crawford, supra note 52, at 60-1. Certain primary norms specify a requirement of intent of the organ or agent through which the State is acting (e.g., Genocide Convention, Art. II). 55 Apart from being defined as 'an act or omission or a series of acts or omissions' (see ARSIWA commentaries, supra note 4, at 38 (Commentary to Ch. II of Part I, para. 1)). 56 Friedman and Kahn, supra note 1, at 7; Matthias, supra note 1, at 175; Johnson, supra note 21, at 195.
For instance, when an AI system operates semi-autonomously under the formal supervisory control of a human operator, without the operator having any meaningful capacity to influence the outcome, it could be argued that there is no human conduct on the part of the operator. The more machines operate irrespective of the actions or omissions of direct human operators, the more difficult it becomes to convincingly argue that the conduct of human operators provides a ground for attribution.
While the conduct of direct human operators can become less relevant in the context of AI, wrongful conduct involving AI can nonetheless always be traced back to human choices and human actions or omissions. 62 In particular, human conduct in the form of decision-making at the stages of technology development, procurement, or strategic and operational planning can be particularly relevant for attribution of conduct involving AI. 63 For instance, the decision to adopt or deploy a given AI system without verifying its reliability can directly contribute to the occurrence of IHL violations on the battlefield.

3.3. A framework for attribution of conduct involving artificial intelligence
Having demonstrated that attribution of conduct hinges upon the existence of human actions or omissions which result in a violation of international law, and that the way human beings interact with AI technologies can reconfigure what counts as 'human conduct', the following section discusses whether and on which basis wrongful conduct involving AI can be attributed to a state. In essence, it is argued that attribution can be grounded in human conduct which contributed to causing a violation of international law, and that, increasingly, the actions or omissions of commanders, decision-makers, and developers have more influence on the occurrence of violations than the conduct of the operator of an AI system. Four main scenarios can be identified to draw the contours of a framework for attribution of conduct involving AI.
First, some AI systems operate under the direct and genuine control of an operator at the tactical level. For instance, if an object recognition system is used as a vision aid in a context where the operator also has direct visual perception, 64 the actions and omissions of the operator directly contribute to the outcome, and conduct in violation of international law can be attributed to the state on behalf of which the operator acts. 65 Second, certain AI systems can operate fully autonomously once they are activated. For example, air defence systems operating in automatic mode can, within limited parameters, identify and automatically fire at incoming threats. 66 In this scenario, the human conduct that most clearly contributes to subsequent harm lies on the side of the decision-makers who decided to deploy and activate the AI system. 67 Indeed, even if the system operator has the possibility to abort an attack, they have limited time to intervene and very limited situational awareness. As a result, the influence that human conduct, in the form of override functions, has over the outcome becomes meaningless. 68 By contrast, the decision to deploy a system operating autonomously once activated, and the act of defining and circumscribing the parameters within which the system can launch attacks, can be causally linked to the outcome. Therefore, wrongful conduct occurring in the use of almost fully autonomous systems can be attributed to the state on the basis of the acts and omissions of decision-makers at the military or political levels. 69 The third scenario involves AI systems that operate in a grey area, under some degree of human control and supervision. Typically, such a system is formally under the direct control of its operator, who retains the decision-making capacity to follow or reject AI-generated recommendations.
For instance, AI systems that are used in support of target acquisition can gather and analyse data from various sensors and sources, and suggest potential targets, while the operator ultimately remains in charge of the decision whether or not to launch an attack. 70 In such systems, a certain level of discretion is vested in the operator, who constantly assesses AI recommendations against their own judgement and degree of situational awareness. However, in practice, there is a very fine line between algorithmic decision-support and algorithmic decision-making. As discussed above, 71 AI systems function at a speed and scale that makes it difficult, if not impossible, for human operators to genuinely assess whether a given targeting recommendation should be followed. As a result, the control of the operator over outcomes in the use of semi-autonomous systems may become superficial, and the actions and omissions of the operator may not provide a sufficient ground for attribution. In that case, a stronger argument is to rely on the conduct of the state organs who decided to adopt and deploy the system. Indeed, decision-makers are in a position to assess and enquire into the capabilities, limitations, and risks of a system, and to exercise informed judgement over whether, in which operational circumstances, and under which degree of human control, the system should be deployed. Depending on the extent to which the operator exercised control over the outcomes, wrongful conduct involving human-supervised semi-autonomous systems can arguably be attributed pursuant to Article 4 ARSIWA on the basis of either the decision of state organs within the military chain-of-command to make use of an AI system, or the actions and omissions of the system's operator.
Fourth, it cannot be excluded that future AI systems exhibiting higher degrees of autonomy could be developed, and that such systems be conceptualized as independent and endowed with a degree of autonomous agency. This article does not address the controversial and speculative question of whether such highly autonomous systems should be developed and used in the military context or beyond, nor whether it is ethically appropriate to conceptualize advanced AI as autonomous agents, 72 but seeks to prospectively explore the possible implications of highly autonomous AI systems for attribution of conduct. Should future advanced AI be developed and conceptualized as autonomous, independent, perhaps sentient entities, which could evolve and behave beyond human control, the link to any human conduct could be too vague and weak to ground attribution of conduct. 73 However, it could still be argued that conduct involving such AI could be attributed directly to the state, without the intermediation of human conduct. In this construction, advanced AI could be conceptualized as itself being an agent of the state, 'acting on the instructions of, or under the direction or control of, that State', 74 with wrongful conduct attributed pursuant to Article 8 ARSIWA. Article 7 ARSIWA, which provides that the conduct of an organ or agent can be attributed 'even if it exceeds its authority or contravenes instructions', could also be relevant if AI were conceptualized as an autonomous agent. While relying on Article 8 ARSIWA makes it possible to solve the problem of attribution for any conduct emanating from AI systems considered to be acting on behalf of the state, it is problematic as it implies that AI systems can and should be conceptualized as independent agents endowed with a degree of subjectivity. This indirectly supports the argument that AI systems could have moral agency or legal personality, which is highly debatable. 75 In the context of broader public debates on retaining human control and human agency over AI technologies, 76 the argument should rather be to preserve the role of humans, including within the framework of state responsibility.
In conclusion, concepts of state responsibility and rules of attribution of conduct are amenable to the challenges of allocation of responsibility in relation to military AI. There is no 'responsibility gap' 77 as far as state responsibility is concerned, and wrongful conduct occurring in the use of military AI systems can be attributed to the state. Nonetheless, the characteristics of AI reconfigure the operation of attribution of conduct, as relevant human conduct may be relocated from operators to decision-makers.

State responsibility at the stage of development or acquisition of military artificial intelligence: Compliance-by-design
In addition to state responsibility for the actual use of military AI that results in violations of international law, responsibility can also be analysed at the earlier stages of the design, development, or acquisition of military AI. Prior to deployment and actual harm, states can incur responsibility if they develop or acquire AI technologies in violation of their international obligations. Pursuant to the second constitutive element of an internationally wrongful act, namely the breach of an international obligation, state responsibility can only arise in case of a violation of an international obligation binding on the state, stemming either from treaty obligations or from customary international law. While the breach of an applicable international obligation is always required, actual damage is not necessary. 78 Responsibility can thus arise even in the absence of any injury suffered. By contrast, damage that is caused by the state without involving a violation of international law does not engage international responsibility. The framework of state responsibility is therefore geared towards a return to legality and the preservation of the international legal order.
The constitutive nature of the element of breach thus means that responsibility can arise prior to the harmful use of military AI. Indeed, as international obligations apply during the development of military AI, responsibility can be engaged if such development involves a breach of international law. In this regard, a number of positive obligations are particularly relevant. In contrast to negative obligations not to engage in certain conduct (e.g., the obligation not to target civilians) that are more relevant at the stage of deployment, positive obligations prescribe a duty to actively take steps to secure certain rights and ensure compliance with the law.
The act of developing or purchasing AI technologies can itself qualify as an act of the state, and attribution of conduct at the stage of development or acquisition would not be subject to significant hurdles. When state organs or state-controlled entities engage in AI research and develop military AI technologies, this conduct is attributable to the state pursuant to Articles 4, 5, or 8 ARSIWA. 79 At the stage of development, state responsibility for an internationally wrongful act can therefore arise as soon as there is a breach of an applicable international obligation. The same applies to the acquisition from third parties of military technologies by state organs and entities, which also constitutes conduct attributable to the state.
Existing international legal standards apply to all activities of the state, and the novelty or complexity of military AI or other emerging technologies does not displace the applicability of international norms to state conduct involving AI. The ICJ clearly affirmed that 'the established principles and rules of humanitarian law ... applies to all forms of warfare and to all kinds of weapons, those of the past, those of the present and those of the future'. 80 The particular characteristics of AI technologies, such as autonomy and opacity, might require efforts to interpret how existing rules operate and can be implemented in the sphere of new technologies, but it is broadly agreed that the law remains applicable, 81 and that its violation will give rise to state responsibility. In that sense, the idea that AI technologies are unregulated and that law is unable to catch up with the fast development of new technologies is misleading. Rather than seeking to adopt new legal frameworks to address technological advancements, it should first be explored how existing norms can be applied to emerging technologies. 82 Responsibility can therefore arise in relation to the way a state undertakes the development of military AI. Obligations of diligence applicable at the stage of development of military AI include the duty to respect and ensure respect for IHL, 83 and positive obligations to take active steps to secure human rights within a state's jurisdiction. 84
The internal dimension of Common Article 1 notably refers to the training of armed forces to ensure they know and abide by IHL, 85 and involves a duty 'to take appropriate measures to prevent violations from happening in the first place'. 86 Applied to the development of military AI technologies, it implies a duty to ensure that AI can comply with IHL, to design and train algorithms in line with IHL standards, and to refrain from developing and adopting technology when it cannot be IHL-compliant. 87 In the military context, Article 36 of Additional Protocol I to the Geneva Conventions furthermore provides for a specific obligation to determine, when considering the development or acquisition of a new weapon, means or method of warfare, whether its employment would, in some or all circumstances, be prohibited by any applicable rule of international law. 88 In the context of military AI, the scope of Article 36 arguably encompasses applications of AI that do not directly fall under the category of 'weapons', as such military AI systems can qualify as 'means or methods of warfare'. 89 Equally relevant to the development phase of AI are obligations to protect and ensure respect for human rights, 90 which fully apply in the pre-deployment phase and can address dual-use AI technologies that are not initially developed for a military purpose.
The applicability of existing obligations at the stage of development of AI technologies therefore means that AI must be designed in full compliance with applicable international norms, and that failure to do so can engage the responsibility of the state. In other words, this article argues that states have a duty to integrate international obligations from the outset and throughout the process of developing, training, and testing military AI. In line with the responsible innovation and ethics-by-design approaches, which seek to identify and integrate ethical values in the design of technology, 91 but applied, beyond ethics, to binding legal standards, 92 design choices must reflect and incorporate the state's international obligations. Upholding state responsibility at the stage of AI development, prior to potentially harmful use, leads to a compliance-by-design approach which fulfils a preventive aim. 93 From this perspective, state responsibility proves to be a particularly useful doctrine for ensuring that military AI is developed and adopted in full compliance with the applicable international norms.
Integrating international law norms in the design of AI systems is not without difficulties, as certain principles such as proportionality and distinction may not be reducible to code. 94 Nonetheless, seeking to ensure compliance-by-design remains the applicable benchmark. The obligation to ensure compliance with international law by embedding norms in AI design can serve to identify the boundaries of whether and how technologies should be developed. Indeed, if it is found that a system cannot technically be made to comply with certain principles, then its development should be either halted or reframed in order to ensure sufficient human involvement to achieve compliance with these principles. Continuing to develop a certain technological system when it has been shown that it cannot comply with certain norms would result in a failure of diligence engaging state responsibility. 95 At the stage of procurement from third parties, there is similarly a duty to verify that AI technology has been designed and developed in line with the obligations of the state. Again, the complexity of technologies or issues of secrecy do not displace international obligations to ensure that new technologies can and will comply with the law. Wilful blindness is not an excuse for noncompliance, and the state has a duty to diligently seek information from the private or public actors from which it acquires technologies. 96 This also means that the state has an obligation to test systems acquired from third parties for compliance with IHL and other obligations.
In practice, the diligent development or acquisition and subsequent deployment of military AI in line with international obligations would involve processes of risk assessment, testing, auditing, certification, and continued compliance-monitoring mechanisms. 97 In order to operationalize frameworks of international responsibility in relation to military AI, further interdisciplinary research is needed to develop policy guidance and technical protocols for testing and certifying the compliance of AI systems with international law.
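Purely by way of illustration of what such testing and certification protocols might look like in technical terms, the following sketch shows a hypothetical pre-deployment compliance audit. Every name, criterion, and threshold here is an assumption invented for illustration; none reflects an established legal standard or an existing certification scheme, and actual protocols would require the interdisciplinary work described above.

```python
# Illustrative sketch only: a hypothetical pre-deployment compliance audit.
# All criteria, field names, and thresholds are invented assumptions and do
# not reflect any established legal or technical standard.
from dataclasses import dataclass

@dataclass
class TestReport:
    """Results recorded during pre-deployment testing of an AI system."""
    misidentification_rate: float   # rate of protected objects flagged as targets
    explainability_score: float     # proxy for reviewability of recommendations
    human_override_available: bool  # operator can reject every recommendation
    audit_log_enabled: bool         # decisions are logged for later review

def compliance_audit(report: TestReport) -> list[str]:
    """Return the list of failed checks; an empty list means the sketch's
    (hypothetical) certification criteria are met."""
    failures = []
    if report.misidentification_rate > 0.01:   # hypothetical threshold
        failures.append("distinction: misidentification rate too high")
    if report.explainability_score < 0.8:      # hypothetical threshold
        failures.append("reviewability: recommendations insufficiently explainable")
    if not report.human_override_available:
        failures.append("human control: no operator override")
    if not report.audit_log_enabled:
        failures.append("accountability: no audit logging")
    return failures

# A system passing all checks except audit logging would fail certification:
report = TestReport(0.005, 0.9, True, False)
print(compliance_audit(report))  # → ['accountability: no audit logging']
```

The design point of the sketch is continued compliance-monitoring: because such an audit is automated, it can be re-run whenever the system or its operational context changes, rather than being a one-off certification at acquisition.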
In view of the inherent unpredictability of certain AI technologies, as well as the risks of malfunction, a question that can be raised is whether a negligence model of responsibility is sufficient in the context of military AI, or whether the development and use of military AI should be subject to strict liability. 98 Under a strict liability model, actors can be held responsible even if due diligence was exercised, on the basis of the inherent high risk of certain otherwise lawful but hazardous activities. 99 The desirability of a regime of strict liability for military AI should be discussed and debated amongst states and other parties, for instance in the context of the

State responsibility in relation to the conduct of other states and private actors
The third dimension of state responsibility analysed in this article concerns responsibility arising in relation to the conduct of other actors. Next to responsibility for a state's own conduct, whether at the stage of deployment or development, explored in the previous two sections, state responsibility can further arise in relation to the conduct of other actors, namely other states or private actors. In such situations of derived responsibility, a state is not directly responsible for the conduct of others, but it can bear responsibility for its failure to abide by its own obligations that relate to the conduct of other actors.
First, with regard to responsibility in relation to the conduct of other states, there exist general and specific obligations not to blindly facilitate or knowingly foster violations of international law by other states. 100 In the ARSIWA, Article 16 provides for an obligation not to knowingly aid or assist another state in the commission of a wrongful act. 101 Similarly, the external dimension of the duty to ensure respect for IHL embedded in Common Article 1 to the Geneva Conventions arguably includes an obligation not to aid or assist other states in violations of IHL, as well as a positive obligation to seek to ensure that other parties comply with IHL. 102 These obligations are particularly relevant in the context of multinational operations, where several states engage together in military operations in which AI technologies might be used by some of the coalition partners. For instance, if a state provides another state with AI-based target acquisition support in the form of potential targets that need to be further verified and approved, and the receiving state engages in targeting on this basis without ensuring adequate human approval, the assisting state can bear derived responsibility in relation to the wrongful targeting. Given that Article 16 ARSIWA and Common Article 1 are subject to a criterion of knowledge, actual and constructive respectively, 103 the responsibility of the state providing targeting support arises if it becomes aware of the wrongful conduct of the supported state, for instance due to recurrent targeting mistakes. Another illustrative example would be a situation where one state repeatedly deploys an AI system that results in biased outcomes, with other coalition states allowing this conduct to continue despite becoming aware of the malperformance.
The prohibition of aid or assistance and the duty to ensure respect for IHL are also relevant in the context of the export of AI technologies by one state to another. A state not engaged in armed conflict could engage its derived responsibility if it knowingly transfers AI technologies, such as weapons, target acquisition software, or surveillance tools, to another state which uses them for international law violations. 104 In this respect, military AI technologies that can be used in weapons systems are also arguably subject to the specific rules of the Arms Trade Treaty (ATT), which apply to weapons as well as their parts and components. 105 Article 6 ATT provides for a negative obligation not to authorize any transfer of weapons, parts, or components if the state has knowledge that the items would be used in the commission of war crimes. For exports not prohibited as such under Article 6, Article 7 ATT imposes a positive obligation to assess the potential that the transferred items could be used to commit or facilitate serious violations of IHL or human rights.
Second, with regard to the conduct of private actors, a state can bear indirect responsibility if it fails to ensure that private actors within its jurisdiction operate with respect for international law. Under general international law, every state has a negative 'obligation not to allow knowingly its territory to be used for acts contrary to the rights of other States'. 106 In the field of human rights, the International Covenant on Civil and Political Rights (ICCPR) explicitly provides for a positive obligation to 'take the necessary steps' towards ensuring respect for human rights. 107 Again, the state is not directly responsible for the conduct of private actors, but 'may be responsible for the effects of the conduct of private parties, if it failed to take necessary measures to prevent those effects'. 108 The obligation to promote and protect human rights is one of due diligence, which 'requires States to take measures designed to ensure that individuals within their jurisdiction are not subjected to' human rights violations, and 'to take reasonable steps to avoid a risk of ill-treatment' by third parties. 109 The obligation of states to ensure respect for human rights by private actors is particularly relevant in the context of military AI, as one cannot disregard the major role of private companies in developing and selling AI that may lead to international law violations. 110 It is also important to take into account that many AI technologies with potential military applications are of a dual-use nature, 111 so that efforts to protect human rights with regard to military AI must also address companies not directly involved in defence and security applications. 112 One of the main tools for states to ensure that private actors respect human rights is domestic regulation. 113
When it comes to new technologies such as AI, states must therefore put in place new regulations, if and when necessary, to ensure that technologies developed by private actors do not lead to human rights infringements. A recent report on 'Responsibility and AI' from the Council of Europe made clear that 'states are obliged under the ECHR to introduce national legislation and other policies necessary to ensure that ECHR rights are duly respected, including protection against interference by others (including tech firms)', and stated that '[the ECHR framework] offers solid foundations for imposing legally enforceable and effective mechanisms to ensure accountability for human rights violations, well beyond those that the contemporary rhetoric of "AI ethics" in the form of voluntary self-regulation by the tech industry can realistically be expected to deliver'. 114 In the same vein, the 2018 Toronto Declaration, whose signatories include Amnesty International and Human Rights Watch, provides that: 'States should put in place regulation compliant with human rights law for oversight of the use of machine learning by the private sector in contexts that present risk of discriminatory or other rights-harming outcomes.' 115 In practice, states must develop clear legal standards for the private sector, which translate general human rights duties to the context of AI and apply at the stage of design and development. 116 In order to operationalize AI regulation in relation to human rights and international law violations, technical standards and processes will likely be needed. For instance, testing and monitoring schemes could allow the screening and certification of AI systems developed by private actors.
In view of the transnational nature of many companies involved in developing military AI, there is a risk of avoidance and buck-passing between states claiming not to be able to regulate such companies. However, regulatory obligations stemming from human rights law are limited to actors within the jurisdiction of the state (for instance, those based in its territory), and are subject to a standard of reasonable diligence. The transnational nature of technology companies is thus no excuse to avoid regulating their conduct. In order to implement regulatory obligations in the technology sector more effectively, co-ordinated efforts of states at the EU or UN level might nonetheless be useful.

Concluding remarks
This contribution has provided a comprehensive overview of the situations in which state responsibility in relation to military AI can arise. State responsibility can be engaged for the wrongful use of AI-enabled technologies on the battlefield, for negligent development or procurement of AI technologies (including in cases of failure to integrate international norms in AI design and development), and for failure to ensure respect for international law by other actors developing or deploying AI. At the stage of deployment, the article analysed the human dimension of attribution of conduct, arguing that, due to the characteristics of AI, the actions and omissions of direct human operators do not always provide a sufficient ground for attribution. Nonetheless, attribution of conduct involving AI systems can be grounded in human conduct and human decision-making by other organs and agents (i.e., developers, political and military decision-makers). At the stage of development, it argued that existing obligations prescribe a duty to ensure compliance-by-design, that is, an obligation to seek to integrate applicable norms in the design of AI systems. This obligation also comes into play at the stage of procurement, where states must verify compliance. Regarding derived responsibility, the article discussed some implications of using AI for responsibility in multinational military operations, and analysed state obligations to ensure respect for international law in terms of the regulation of private actors.
The article demonstrated that, overall, the framework of state responsibility appears amenable to the specific challenges posed by AI technologies. Further, it argued that state responsibility has a useful role to play in AI regulation and accountability. In order to bridge potential responsibility gaps in relation to military AI, state responsibility under international law has a complementary function next to other responsibility frameworks which, together, have the potential to comprehensively ensure accountability at all levels. Seeking to hold states accountable in relation to AI further presents unique advantages in view of the primary role of states with regard to the deployment of military technologies and the regulation of private actors. In particular, state responsibility proves to be a useful doctrine for ensuring that military AI is developed and adopted in full compliance with applicable international norms. By building on approaches found in the field of ethics of technology, such as value-sensitive design and human-centred innovation, the article also contributed to bridging ethical, technical, and legal approaches to AI.
In order to operationalize the framework of responsibility outlined in this article, further interdisciplinary research is however needed on engineering methods that would strengthen the capacity of states and corporations to ensure international legal compliance in the design and deployment of AI systems, and on governance approaches and policy options in relation to military AI. This will allow for the translation of legal principles into military and policy guidance as well as technical standards.