Introduction
Artificial intelligence (AI) and autonomous systems are increasingly playing a crucial role in warfare.Footnote 1 This is evident in two ongoing armed conflicts. The international armed conflict between Russia and Ukraine has arguably been the first armed conflict in which both sides have actively relied on AI to gather information and enable weapons systems with different degrees of autonomy.Footnote 2 In Gaza, Israel has reportedly been relying on AI and autonomy to produce target recommendations, as in the case of systems such as the Gospel or Lavender.Footnote 3 AI and autonomy, however, find application not only in the traditional military operational domains but also in cyberspace.Footnote 4 As observed by the United Nations (UN) Secretary-General, António Guterres, in 2023, “AI-enabled cyberattacks are already targeting critical infrastructure and our own peacekeeping and humanitarian operations, causing great human suffering”.Footnote 5
This article focuses on the use of “autonomous cyber capabilities” (ACC) – defined for the present purposes as software agents that are programmed to carry out specific tasks through cyberspace without real-time human control or oversight – in armed conflicts. It will rely solely on publicly available and unclassified information, and therefore certain assumptions must be made. Although to date there is no public report of ACC being used in an armed conflict, States are increasingly researching and developing ACC to carry out both offensive and defensive cyber operations,Footnote 6 and it is therefore expected that ACC will eventually be deployed in such contexts. The prospect of employing such technologies in warfare would indeed be particularly appealing for States: not only might ACC contribute to further strengthening networks’ robustness, resilience and response against malicious cyber operations, but they would also provide strong tactical and strategic advantages in the conduct of hostilities.Footnote 7 At the same time, however, the employment of ACC in armed conflict raises important concerns, especially in relation to the absence of real-time human control and to ACC’s potential over-reliance on AI.Footnote 8 The lack of real-time human control undermines the exercise of context-specific judgements, which are at the core of several international humanitarian law (IHL) provisions,Footnote 9 and AI often brings with it a certain degree of unpredictability, unreliability and unexplainability.Footnote 10 While AI is not a prerequisite for the functioning of ACC per se, it may further enable such systems’ adaptability and learning capacity;Footnote 11 thus, States will likely rely on AI to increase ACC’s flexibility. Where AI-enabled ACC exhibit such characteristics of unpredictability, unreliability and unexplainability, this would raise significant risks of unintended violations of IHL.Footnote 12
On the basis of these assumptions, this article explores whether due diligence offers a valuable framework to mitigate the risk of unintended violations of IHL resulting from the potential use of ACC in armed conflict. Originally emerging in nineteenth- and twentieth-century State practice and arbitral and judicial decisions in relation to the law of neutrality and the protection of aliens and their property, due diligence has often been invoked to establish the legal responsibility of a State in connection with the behaviour of private individuals that cannot be directly attributed to the State.Footnote 13 Nowadays, due diligence is best understood as “a duty or standard of care”Footnote 14 that should be applied to a State’s actions (or activities subject to its jurisdiction or control) which harm other actors or the public interest. In other words, it is an obligation of conduct, which requires States to make every effort to obtain a specific result and manage a certain risk.Footnote 15 It finds application in several regimes of international law, including IHL,Footnote 16 and has also proved particularly relevant with respect to new technologies, including cyber operationsFootnote 17 and AI.Footnote 18
This article will rely on due diligence to explore whether and to what extent States have the obligation under IHL to take all appropriate measures to mitigate the risks associated with the use of ACC in armed conflict, and which diligent measures States shall undertake in this regard. It will first provide a working definition of ACC and discuss their potential use in armed conflict, together with the main risks they entail. Although these aspects alone would merit an in-depth analysis, this article addresses them only briefly, in order to focus on the relevant States’ due diligence obligations under IHL. Notably, it will consider the obligations of conduct that emerge from the duties to respect and ensure respect for IHL, to conduct legal review of new weapons, means and methods of warfare, and to undertake all feasible precautions both in attack and against the effects of attack, and will discuss their applicability to ACC. In the absence of relevant publicly reported State practice on the matter, examples of diligent measures that States shall adopt when researching, developing, programming and deploying ACC will be provided.
The use of ACC in armed conflict and the associated risk of unintended violations of IHL
Defining ACC and their emerging role in armed conflict
Notwithstanding the increasing interest in using AI applications in the cyber domain to render cyber operations autonomous and adaptable, there is still no shared definition of “autonomous cyber capabilities”. For the purpose of this article, the term “ACC” refers to software agents that are programmed to carry out specific tasks through cyberspace without real-time human control or oversight.Footnote 19 In other words, ACC can be used to conduct cyber operations that are designed to operate through cyberspace in pursuit of a predetermined goal without requiring real-time intervention by a human operator.Footnote 20 They may follow a set of pre-written instructions (in the form of “if <x happens> then <do A> else <do B>”),Footnote 21 or may rely on sophisticated AI techniques such as machine learning.Footnote 22 While the former are programmed to operate in a predictable and fully observable environment where all possible events are known in advance,Footnote 23 the latter are able to adapt to unexpected changes in the operating environment – which is particularly relevant in the context of armed conflict and cyberspace.Footnote 24
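To make this distinction concrete, the following minimal Python sketch shows a purely rule-based defensive agent of the “if <x happens> then <do A> else <do B>” kind; the event type, signatures and responses are hypothetical and purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class NetworkEvent:
    """A simplified observation from the monitored environment (hypothetical)."""
    source: str
    signature: str

# A purely rule-based agent: every condition/response pair is enumerated in
# advance, so behaviour is fully predictable in a fully observable environment,
# but the agent cannot react sensibly to events that were never anticipated.
KNOWN_MALICIOUS_SIGNATURES = {"worm-x", "trojan-y"}  # hypothetical signatures

def rule_based_agent(event: NetworkEvent) -> str:
    # "if <x happens> then <do A> else <do B>"
    if event.signature in KNOWN_MALICIOUS_SIGNATURES:
        return "quarantine"       # do A
    return "log_and_continue"     # do B

# An AI-enabled agent would instead replace the fixed rule set with a model
# trained on data (e.g., a classifier over traffic features), gaining
# adaptability at the cost of predictability and explainability.
```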
To date there have been no public reports of ACC being used in armed conflict, yet, as underlined by the International Committee of the Red Cross (ICRC), “these technologies are expected to change the nature of both capabilities to defend against cyber attacks and capabilities to attack”.Footnote 25 The prospect of using them in the conduct of hostilities to carry out both defensive and offensive autonomous cyber attacks is indeed particularly appealing for States.Footnote 26 On the one hand, ACC may be used to passively and/or actively defend a network or a computer system from malicious cyber operations without real-time human control, strengthening the network’s robustness, resilience and response;Footnote 27 for example, an ACC may be programmed to detect malicious cyber threats and autonomously execute active defensive countermeasures against the adversary’s system.Footnote 28 Such systems are already being researched and developed: Mayhem, the Cyber Reasoning System that won the 2016 US Defense Advanced Research Projects Agency Cyber Grand Challenge, was able to autonomously identify and patch vulnerabilities in flawed code, as well as to detect and exploit adversary vulnerabilities.Footnote 29 These defensive systems could be designed to hack back and incapacitate (or even destroy) the adversary’s system from which the malicious cyber operation originates.Footnote 30
On the other hand, ACC may also be designed to launch particularly sophisticated cyber attacks without human intervention.Footnote 31 One of the first known offensive ACC was reportedly Stuxnet, the malware that targeted Iran’s Natanz nuclear enrichment facility in 2009.Footnote 32 It was programmed to autonomously spread throughout Natanz’s network, identify the software controlling the facility’s centrifuges and manipulate their operation by altering their settings.Footnote 33 While the Stuxnet operation did not occur during an armed conflict and experts are divided on whether it would have triggered the applicability of IHL,Footnote 34 it provides a good example of ACC causing physical damage.Footnote 35 Another recent example of offensive ACC is DeepLocker, a malware developed by IBM Research that relies on AI techniques to identify pre-selected targets on the basis of specific trigger conditions (e.g., facial and voice recognition, geolocation) and autonomously launch a cyber operation without real-time human control.Footnote 36 Similar systems could be used in the conduct of hostilities to execute cyber attacks against military objectives, such as command and control facilities or air defence systems, or against individuals who qualify as lawful targets at the time of the attack, such as military commanders.
In light of the tactical and strategic advantages connected to ACC, therefore, there is little doubt that these technologies will be used in future armed conflicts.Footnote 37 Consequently, it is of crucial importance to verify whether and to what extent ACC can be developed and used, in all or some circumstances, in compliance with IHL provisions.Footnote 38 Not only must they be programmed in such a way that they are not indiscriminateFootnote 39 and do not cause superfluous injuries and unnecessary suffering,Footnote 40 but they must also be used in compliance with the principles of distinction,Footnote 41 proportionalityFootnote 42 and precaution.Footnote 43
ACC’s technical features undermining IHL compliance
ACC present some technical features that raise important concerns vis-à-vis IHL and that warrant brief consideration.Footnote 44 Firstly, as noted above, ACC’s lack of real-time human control or oversight might undermine their use in compliance with several IHL provisions. Indeed, as pointed out by Capone,
many key IHL rules require the application of evaluative decisions and value judgements, taken on the basis of the “reasonable commander standard”, such as the presumption of civilian status in case of “doubt”, the selection of the precautionary measures to implement during an attack, or the assessment of proportionality, which requires the attacker to determine what constitutes “excessive” collateral damage in relation to anticipated concrete and direct military advantage.Footnote 45
At the current state of technological development, ACC seem unable to perform similar subjective evaluations without human intervention.Footnote 46 One way around this problem might be to carry out these assessments ex ante and to programme ACC accordingly to carry out specific attacks.Footnote 47 ACC may, for instance, be instructed to target a military objective (or a list of military objectives) that has already been validated by the military commander before engagement. Likewise, the proportionality assessment could be carried out by the military commander immediately before the attack on the basis of the information available at the time and programmed into the ACC.Footnote 48 Stuxnet, for example, was programmed to target a specific piece of software (Siemens Step 7) and would not activate unless such software was found on the infected computer. In addition, it had in place several safety measures aimed at reducing collateral damage, including a self-destruct mechanism.Footnote 49 Had the attack occurred in the context of an armed conflict, at least based on the information currently available in the public domain, Stuxnet would probably have adhered to the principles of distinction and proportionality.Footnote 50 Yet, as will be discussed further below, even in pre-planned attacks, real-time human control may still be necessary to ensure compliance with IHL, especially when ACC are used in complex and dynamic environments where ex ante evaluations may no longer be valid.Footnote 51
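By way of illustration, the following Python sketch shows how such ex ante determinations might be encoded as hard preconditions for engagement, in the spirit of Stuxnet’s software check; all identifiers, fingerprints and validity windows are hypothetical assumptions, not a description of any actual system.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical parameters fixed by the military commander before deployment.
VALIDATED_TARGETS = {"plc-controller-001"}     # objectives validated ex ante
REQUIRED_SOFTWARE_FINGERPRINT = "step7-v5.5"   # activate only on this system
PROPORTIONALITY_CLEARED_AT = datetime(2025, 1, 1, tzinfo=timezone.utc)
ASSESSMENT_VALIDITY = timedelta(hours=24)      # how long the assessment holds

def may_engage(target_id: str, host_fingerprint: str, now: datetime) -> bool:
    """Engage only if every ex ante condition still holds; otherwise abort."""
    if target_id not in VALIDATED_TARGETS:
        return False  # the target was never validated by the commander
    if host_fingerprint != REQUIRED_SOFTWARE_FINGERPRINT:
        return False  # wrong system: do not activate (Stuxnet-style check)
    if now - PROPORTIONALITY_CLEARED_AT > ASSESSMENT_VALIDITY:
        return False  # the ex ante proportionality assessment may be stale
    return True
```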
Secondly, again as noted above, ACC that rely on AI techniques to adapt to changes in the operating environment risk being highly unpredictable, unreliable and unexplainable.Footnote 52 As mentioned, although AI is not necessarily a prerequisite for the functioning of ACC, it may further enable their adaptability and their ability to learn.Footnote 53 AI-enabled ACC may indeed be programmed to learn from the environment in which they are operating and to make independent decisions or adjust their performance to the circumstances of the case, to the extent that they have been described as “independent in their functioning”.Footnote 54 This characteristic confers on them high flexibility, allowing them to operate even in hostile and dynamic environments.Footnote 55 At the same time, however, AI-enabled systems have often been described as unpredictable and unreliable, especially if they rely on machine learning,Footnote 56 and it is indeed difficult to predict what these systems will learn from the operating environment and therefore how they will respond to a given input.Footnote 57 Complicating matters further, AI-enabled systems often produce outputs that are not explainable (the “black box” phenomenon), making it difficult (if not impossible) for their users to understand how and why the system reaches that specific output from a given input.Footnote 58 Where AI-enabled ACC exhibit similar behaviour, this would raise important concerns as to their use in compliance with IHL. In addition, the fact that ACC operate through the cyber domain, where most infrastructures are interconnected and dual-use,Footnote 59 may further exacerbate such risks. Were ACC to be programmed to self-replicate and self-propagate in order to reach their target more effectively, they could spread uncontrollably, potentially disrupting, disabling or physically damaging essential civilian services and infrastructures. This could increase the risk of indiscriminate attacks and could even lead to an escalation of the conflict.Footnote 60
From this brief overview, it is clear that the use of ACC may result in unintended violations of IHL, posing significant risks of civilian harm. The next section will therefore explore whether and to what extent States have due diligence obligations under IHL to address and mitigate such risks.
Due diligence as a possible framework to mitigate ACC’s associated risks
The notion of due diligence first developed in the nineteenth century through State practice and arbitral decisions in relation to States’ neutralityFootnote 61 and the protection of aliens and their property.Footnote 62 In the twentieth century, international judicial decisions further extended the concept of due diligence, grounding it in the obligation of States not to allow their territory to be used for acts contrary to the rights of other States.Footnote 63 Nowadays, due diligence is best understood as a duty or standard of care that should be applied to assess States’ compliance with obligations of conduct.Footnote 64 It finds application in several regimes of international law, including IHL, and is generally invoked to manage risks so as to prevent violations of other obligations (often referred to as obligations of result)Footnote 65 that might harm or damage other actors or the public interest.Footnote 66 Due diligence requires States to “deploy adequate means, to exercise best possible efforts, to do the utmost, to obtain [a certain] result”Footnote 67 – yet, it does not prescribe how this result must be obtained, leaving it up to States to decide which diligent measures shall be adopted, on the basis of their available means. This flexibility allows States to adjust their diligent measures based on an assessment of the risk they need to address.Footnote 68
In situations of armed conflict, where the risks inherent to the conduct of hostilities are high, due diligence has proved to be particularly useful. It is indeed a chapeau obligation for several IHL norms of both a treaty and customary nature.Footnote 69 For example, the overarching duty to ensure respect for IHL in all circumstances,Footnote 70 along with other related norms (such as disseminating IHL,Footnote 71 repressing and suppressing breaches of IHLFootnote 72 and conducting legal review of all new weapons, means and methods of warfareFootnote 73), entails the exercise of due diligence in times of peace and war. Due diligence is also reflected in numerous norms pertaining to the conduct of hostilities, such as the duty to take all feasible precautions in attackFootnote 74 and against the effects of attack.Footnote 75
The following analysis will focus, in particular, on those IHL obligations that envisage due diligence and are expected to be most relevant for mitigating the risks associated with ACC: the duties to respect and ensure respect for IHL, to conduct legal review of new weapons, means and methods of warfare, and to undertake all feasible precautions both in attack and against the effects of attack. After providing a brief overview of such provisions and discussing their application to ACC, the article will offer examples of diligent measures that States shall undertake to manage the risks associated with ACC.
The duty to respect and ensure respect for IHL
The first provision that warrants close examination is the duty “to respect and to ensure respect for” IHL both in times of peace and in times of war, enshrined in Article 1 common to the four Geneva Conventions (common Article 1) and Article 1(2) of Additional Protocol I (AP I). This is one of the broadest and most important positive IHL obligations of conductFootnote 76 and is widely considered as reflective of customary international law.Footnote 77
Significantly, the first prong of the obligation (the obligation “to respect”) is a reaffirmation of the pacta sunt servanda formula.Footnote 78 It requires States to refrain from committing or encouraging IHL violations,Footnote 79 and to take positive steps to ensure that armed forces, as well as organs and individuals whose acts are attributable to the State, act in compliance with IHL in all circumstances.Footnote 80 It envisages obligations of result rather than of conduct.Footnote 81 That said, the duty to respect IHL is closely related to other autonomous IHL obligations of conduct, such as the duty to disseminate IHL and train armed forces.Footnote 82
The second prong of the obligation (the duty “to ensure respect”) covers conduct that is not attributable to the State under international law – namely, that of the whole population over which a State exercises authority (internal application),Footnote 83 and that of third States (external application).Footnote 84 This is where due diligence comes into play under common Article 1, as States are required to take all feasible measures necessary to prevent and repress IHL breaches by actors whose conduct is not attributable to the State. The nature of such measures depends on the specific circumstances of the case, including the gravity of the breach, the imminence of further violations and the available means.Footnote 85 With respect to private persons, such measures include, for example, the dissemination of IHL among the civilian population and the repression and suppression of IHL breaches at the domestic level.Footnote 86 With respect to third States, such measures may entail, inter alia, addressing questions of compliance within the context of diplomatic dialogue (including exerting diplomatic pressure to bring violations to an end); intervening (also directly) to prevent or bring to an end an IHL violation by a coalition partner; offering legal assistance to the parties to a conflict (such as instructions or training); conditioning, limiting or refusing direct or indirect arms transfers and other forms of support; or referring a situation to the International Humanitarian Fact-Finding Commission.Footnote 87
The duty to respect and ensure respect for IHL is relevant to new technologies, including cyber operationsFootnote 88 and AI-enabled technologiesFootnote 89 – and the same applies to ACC. Accordingly, States must refrain from committing violations of IHL or encouraging others to breach IHL by means of ACC, and must also take all feasible measures to ensure that ACC are developed and used in compliance with IHL by their armed forces, as well as by private persons (including private technology companies and civilian hackers within their jurisdiction)Footnote 90 and third States.Footnote 91 These measures must be adopted both in times of peace and in times of war, and shall concern the entire life cycle of ACC, from their development or procurement to their deployment. It follows that States shall ensure that ACC are developed in a way that allows for their use in compliance with IHL, and must refrain from developing, procuring or adopting technologies that cannot comply with IHL.Footnote 92 In addition, States shall also educate and train military commanders and human operators who programme and deploy ACC in armed conflict,Footnote 93 and must make specialized legal and technical advisers available to their armed forces when developing, programming or deploying ACC.Footnote 94
With respect to private individuals, States have at their disposal various diligent measures to ensure that the civilian population over which they exercise authority – including both private technology companies developing ACC and civilian hackers deploying ACC – respects IHL. Firstly, in line with the obligation to disseminate IHL, States should, as necessary and feasible, ensure that employees of private technology companies, as well as civilian hackers, are made aware of the relevant IHL rules and require them to comply with the applicable legal obligations.Footnote 95 Depending on the degree of their influence over private technology companies, States may also implement ad hoc regulations to prevent the development or transfer of ACC that cannot be used in compliance with IHL.Footnote 96 Secondly, States must take all necessary measures to repress and suppress any breaches of IHL committed by individuals through ACC, including by establishing effective penal sanctions at the domestic level and by searching for alleged perpetrators and prosecuting them before national courts or extraditing them to another State.Footnote 97
Finally, with respect to third States to which a State has supplied ACC, the latter State shall monitor how these systems are used by the third StateFootnote 98 and, where they are used in violation of IHL, shall condition, limit or cease any direct or indirect transfer of ACC, as well as any other form of support (including financing, training, sharing of information and embedding of military advisers), and adopt risk mitigation measures.Footnote 99 States should also condemn breaches of IHL by other States deploying ACC in armed conflict – for instance, by taking public positions or through diplomatic or adjudicative means – and demand their cessation.Footnote 100
The duty to conduct legal review of new weapons, means and methods of warfare
A second important due diligence obligation under IHL concerns determining whether the employment of a new weapon, means or method of warfare would be prohibited by IHL in some or all circumstances.Footnote 101 For States Parties to AP I, this obligation is clearly established: in accordance with Article 36, when studying, developing, acquiring or adopting new weapons, means or methods of warfare, States Parties must determine whether their employment would, in some or all circumstances, be prohibited by international law.Footnote 102 The purpose of this duty is to prevent the use of new weapons that would violate IHL in all circumstances and to impose restrictions on the use of new weapons that would violate IHL in some circumstances, by determining their lawfulness before they are developed, acquired or included in a State’s arsenal.Footnote 103 It also functions as an important “safety net” vis-à-vis the rapid developments that take place in military technologies,Footnote 104 applying to all new weapons, means and methods of warfare as well as to existing weapons that a State intends to use for the first time and to modifications that alter the function of a weapon that has already been reviewed.Footnote 105
For States that have not ratified AP I, however, the scope of this obligation is less clear. Although it remains debated whether Article 36 is reflective of customary international law, it has been suggested that States are still obliged to determine whether their weapons, means and methods of warfare can be employed in compliance with IHL.Footnote 106 According to the ICRC in particular, the fact that all States are under an obligation to ensure respect for IHL (as mentioned above) also includes an obligation to take all feasible measures to prevent the use of weapons, means and methods that would violate IHL under all circumstances, and to restrict the use of those that would do so under certain circumstances.Footnote 107
ACC fall within the material scope of Article 36 when they qualify as new weapons. While there is no agreed definition of “cyber weapons”, this article considers those ACC that are designed, used or intended to be used to conduct an attack within the meaning of Article 49 of AP I as falling within this category. Thus, any ACC that cause, or are reasonably expected to cause, a certain degree of violence against the adversary, whether in offence or defence, qualify as weapons for the purpose of Article 36.Footnote 108 Although only a few States have publicly undertaken legal reviews of cyber capabilities so far, the fulfilment of this due diligence obligation is crucial to determining whether ACC can be lawfully used in the conduct of hostilities.Footnote 109 ACC may take different forms depending on the way they are programmed, the tasks they need to perform and the environment in which they operate, so conducting a legal review on each individual ACC would allow States to determine whether they are unlawful per se – i.e., whether they cause unnecessary suffering and superfluous injury or are indiscriminate in nature – or whether they can be deployed in compliance with IHL under certain circumstances. Significantly, while States are not expected to foresee all possible (mis)uses of ACC in the conduct of hostilities, they are required by Article 36 to assess their legality in normal and expected uses.
The legal review of ACC is nevertheless likely to be particularly challenging in light of the autonomous nature and cyber dimension of these technologies.Footnote 110 Where AI-enabled ACC exhibit levels of unpredictability, unreliability and unexplainability, the legal assessment is necessarily more demanding;Footnote 111 as stressed by the ICRC with specific regard to autonomous weapons systems (AWS), the legal review requires that the weapon system be sufficiently understandable, predictable and explainable, so as to allow reviewers to anticipate its operation in its normal or expected circumstances and manner of use.Footnote 112 This complexity is further compounded by the dynamic character of the software, which may be subject to continuous updates, especially in the case of self-learning or self-adaptive systems.Footnote 113 To address these challenges, States will need to adopt a more tailored approach to legal reviews – one that takes into account the specific characteristics of ACC, as well as their training, testing and performance in normal and expected uses.Footnote 114 In this respect, while States are free to decide how to fulfil their weapon review obligation, Tattersall and Copeland have developed an approach that provides useful guidance on the matter. Building upon the ICRC’s Guide to the Legal Review of New Weapons, Means and Methods of Warfare, their approach comprises nine steps.Footnote 115 First, it is necessary to assess the design, technical and performance characteristics of the ACC, including the code or algorithm specification (predictability, reliability, explainability etc.), the adaptability of the software agent, the existence of digital and procedural safeguards, the modes and levels of autonomy, and the interactions with the environment and with human operators. It is indeed necessary to understand how a specific ACC works from a technical standpoint in order to assess its legal implications. The reviewers must then determine the “normal or expected use” of the examined ACC, as well as whether it amounts to a “new weapon, means or method” that requires review under Article 36 of AP I. Once it has been confirmed that the specific ACC is indeed a new weapon of warfare that needs to be legally reviewed, it is imperative to assess whether its use can comply with both specific and general prohibitions or restrictions on weapons, means and methods of warfare (under both treaty and customary law). In the unlikely case that the ACC’s normal or expected use is not covered by existing rules of IHL, it must be assessed vis-à-vis the principles of humanity and the dictates of the public conscience (the Martens Clause). Interestingly, Tattersall and Copeland also consider in their analysis other relevant international law provisions (such as the law of neutrality and international human rights law), as well as domestic law. The last step consists in the adoption of the review’s conclusions and the associated legal advice.
Scholars have suggested further considerations with respect to AI-enabled technologies that are relevant for ACC relying on AI. Lewis, for instance, has elaborated a list of sixteen elements or properties of interest or concern that should be considered in the legal review of weapons, means and methods of warfare involving AI-related techniques or tools: these are (i) legal agency, (ii) attributability, (iii) explainability, (iv) reconstructability, (v) proxies, (vi) human intent and human knowledge, (vii) normative inversion, (viii) value decisions and normative judgements, (ix) ongoing monitoring, (x) deactivation and/or additional review, (xi) critical safety features, (xii) improvisation, (xiii) representation, (xiv) biases, (xv) dependencies, and (xvi) predictive maintenance.Footnote 116 Other scholars, like Coco and Dias, have stressed the importance of involving in the legal review impartial experts in international law, technology and other relevant areas.Footnote 117 Taken together, all of these elements can support States in adopting a tailored approach to the legal review of ACC, suited to the ACC’s specific technical and operational characteristics.
The duty to take all feasible precautions in attack
A third relevant IHL due diligence obligation pertains to the conduct of hostilities. Due diligence plays a central role in all the principles regulating the conduct of hostilities, and this is particularly evident in the rules on precautions in attack under Article 57 of AP I.Footnote 118 According to Article 57(1) of AP I, “[i]n the conduct of military operations, constant care shall be taken to spare the civilian population, civilians and civilian objects”.Footnote 119 In other words, this provision recognizes that there is a general and flexible duty of care that must be exercised during military operations – i.e., during “any movements, manoeuvres or other activities whatsoever carried out by the armed forces with a view to combat”Footnote 120 – to protect the civilian population, civilians and civilian objects. This is, as suggested by Lubin, “an expansive definition that captures all military activities with a general nexus to combat”.Footnote 121 While it is the military commander who decides which measures fall within this obligation, it has been pointed out that “the higher the risk for the civilian population in any given military operation, the more will be required in terms of care”.Footnote 122
Article 57(2) complements this first “umbrella obligation” of care with additional specific duties aimed at limiting the negative effects of an attack upon civilians and civilian objects.Footnote 123 Notably, those who plan or decide upon an attack must (i) do everything feasible to verify ex ante that the objectives to be attacked are legitimate; (ii) take all feasible precautions in the choice of means and methods of attack in order to avoid, or at least minimize, incidental loss of civilian life, injury to civilians or damage to civilian objects; and (iii) refrain from launching an attack that is expected to be disproportionate.Footnote 124 If it becomes evident that the attack is in violation of the principles of distinction or proportionality, then the attack shall be cancelled or suspended.Footnote 125 The key aspect of these specific obligations of conduct is that the required precautions must be feasible – i.e., they must be “practicable or practically possible taking into account all circumstances ruling at the time, including humanitarian and military considerations”.Footnote 126
The principle of precautions in attack plays a pivotal role in ensuring that ACC are used in compliance with IHL during military operations in general, and during an attack in particular. Firstly, it requires States to exercise constant care to spare the civilian population and civilian objects in the conduct of military operations. It implies a legal obligation for military commanders to consider any adverse impacts that autonomous cyber operations (including, but not limited to, attacks) may have on the civilian population and to take measures to avoid them. Harrison Dinniss, for example, considers the case of an ACC programmed to poison the dataset on which an adversary’s own AI system is trained. Where this operation results in that system becoming indiscriminate or disproportionate, or otherwise violating other rules of IHL, it would be impermissible.Footnote 127 While the exact application of this principle in a specific military operation must be left to the military commander, the International Group of Experts working on the Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations (Tallinn Manual 2.0) considered that this obligation requires situational awareness at all times throughout all phases of military operations.Footnote 128
It follows that military commanders need to collect all reasonably available intelligence in order to fully understand the impact of military operations.Footnote 129 When preparing an attack, accurate intelligence-gathering is indeed necessary to give ACC precise instructions and to feed the algorithm with more reliable data.Footnote 130 In this sense, it may be necessary to map “the enemy’s network with sufficient accuracy through network mapping, footprinting, and other cyber exploitation operations”.Footnote 131 Likewise, it may also be necessary to map civilian networks and infrastructures, thereby gathering information about the civilian environment so that the full impact of military operations can be anticipated and mitigated as much as possible.Footnote 132 Although it may in principle be possible to partially conduct intelligence-gathering by means of ACC (e.g., by instructing ACC to detect vulnerabilities in the target’s network),Footnote 133 human operators must, where possible, complement technical information with additional intelligence acquired outside cyberspace;Footnote 134 they must take informed decisions on the basis of their expertise, without falling into the trap of “automation bias”Footnote 135 or “overload of information”, which “may trigger a problem of ‘seeing too much’, which is as grave as ‘seeing too little’”.Footnote 136 To this end, military commanders shall have the necessary technical expertise – or at least have technical experts available – to understand the acquired intelligenceFootnote 137 and determine whether appropriate precautionary measures have been taken.Footnote 138 Finally, the duty of care also appears to require that military commanders maintain oversight of ACC and exercise human control over them during deployment when there is a risk of unintended harmful consequences for the civilian population, as will be discussed further below.Footnote 139
Secondly, this obligation of conduct requires those who plan or decide upon an autonomous cyber attack to adopt all feasible measures to ensure that the deployed ACC (i) correctly identify the target and (ii) direct the attack against the selected target while avoiding or in any event minimizing incidental collateral damage. The key issue is to determine what constitutes “feasible precautions” in the context of autonomous cyber attacks, a determination that depends on several factors, including the circumstances of the attack and of the operating environment. For example, embedding operational constraints in the software agent may contribute to mitigating its potential harm and to avoiding, to the maximum extent possible, unintended violations of IHL.Footnote 140 For the purpose of this analysis, “operational constraints” refer to technical or functional limitations embedded ex ante in the system’s design or parameters of use so as to restrict the scope, duration or effect of an ACC’s behaviour, thereby ensuring greater predictability and control over the attack.Footnote 141 Such constraints can take various forms and must be adopted on an ad hoc basis according to the specific operational needs of the mission and the technical characteristics of the ACC. In this regard, Perez contends that some of the operational constraints suggested in the debate on AWS may be easily extended to ACC. These measures relate inter alia to the tasks assigned to the system, controls on the environment (e.g., target profiles), parameters of use (e.g., deactivation, fail-safe mechanisms) and means of interaction (e.g., aborting the task, limiting self-learning capacities).Footnote 142
Further operational constraints may be embedded in ACC that rely on AI. For instance, Coco and Dias suggest incorporating the following constraints into AI-enabled systems:
[D]evise a deactivation threshold triggered by the lapse of a certain amount of time or by the degradation of a critical safety feature; build in ‘fail-safe’ mechanisms to allow human operators to safely take over or override the system; train the machine to prioritize specific data that is deemed to be particularly reliable; set minimum levels of accuracy before any decision is made and acted upon.Footnote 143
Similar constraints were embedded in Stuxnet – not only was it programmed to target the specific software used at the Natanz uranium enrichment facility, but it also activated only in the presence of that software (without compromising other computers) and deactivated once the expiration date (24 June 2012) was reached.Footnote 144 Although the Stuxnet operation did not occur in the context of an armed conflict and therefore IHL did not apply, based on publicly available information, these technical constraints would probably have been consistent with what would have been required by the principle of precautions in attack.
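As a hedged illustration of how constraints of this kind might be embedded in software, consider the following Python sketch; the class, thresholds and override flag are hypothetical, and merely make concrete the logic of a deactivation threshold, a human “fail-safe” override and a minimum accuracy floor.

```python
import time

class OperationalConstraints:
    """Guard rails checked before any action is taken (illustrative only)."""

    def __init__(self, expires_at: float, min_confidence: float):
        self.expires_at = expires_at          # deactivation threshold (epoch seconds)
        self.min_confidence = min_confidence  # minimum accuracy before acting
        self.operator_override = False        # "fail-safe" flag a human can set

    def permits(self, target_confidence: float) -> bool:
        if self.operator_override:
            return False  # a human operator has safely taken over the system
        if time.time() >= self.expires_at:
            return False  # lapse of time: self-deactivate, like Stuxnet's kill date
        if target_confidence < self.min_confidence:
            return False  # below the accuracy floor: suspend rather than engage
        return True

# Example: constraints valid for one hour, requiring 99% target confidence.
constraints = OperationalConstraints(expires_at=time.time() + 3600, min_confidence=0.99)
```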
Operational constraints alone, however, might not be sufficient to mitigate ACC’s risks and ensure compliance with IHL. While this article does not categorically exclude the possibility that ACC could be used in compliance with IHL even in the absence of real-time human control, relying solely on stringent operational constraints, it recognizes that this would be confined to a very narrow set of circumstances. Consider, for example, an autonomous cyber attack against a predetermined military target that is easily identifiable through technical data and completely disconnected from civilian infrastructure, carried out in a static and predictable environment in which the presence of civilians or civilian objects, as well as of any adverse effects on them, can be excluded with certainty in advance. Such an autonomous cyber attack could, in principle, comply with IHL without real-time human control, provided that the ACC is not unpredictable, unreliable or unexplainable; that the military target has been validated prior to engagement; that any expected collateral damage is not excessive in relation to the anticipated military advantage; and that the ACC is programmed to suspend or abort the attack in case of doubt. According to some authors, Stuxnet would have fallen within this category had it occurred in the context of an armed conflict.Footnote 145 In most cases, however, such scenarios are unlikely to materialize, given that most current cyber infrastructures can serve both civilian and military purposes. In addition, the increasing reliance on AI might negatively affect the predictability, reliability and explainability of ACC. As a general rule, therefore, a certain degree of human control shall be retained over ACC during engagement.Footnote 146
While it is acknowledged that the intrinsic characteristics of cyberspace may pose several challenges to the exercise of real-time human control over ACC during an attack due to the very high speed at which cyber operations occur and the significant volume of data involved,Footnote 147 it is nonetheless contended that human control remains essential. Indeed, under most circumstances, such control may still constitute the only available precautionary measure that a State must undertake to ensure the use of ACC in compliance with IHL provisions. Starting from this assumption, the crucial point is to determine when real-time human control qualifies as an effective measure for reducing the risk of harm; only under these circumstances can it be considered a measure of due diligence.Footnote 148 In this respect, it may be useful to briefly examine the notion of human control elaborated within the framework of the UN Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE LAWS).Footnote 149
The notion of human control has played a pivotal role in the debate over AWS. In an effort to articulate the degree of human involvement that should be preserved, a range of formulations have been proposed, including “sufficient”, “appropriate” and “meaningful” human control. Among these, the notion of “meaningful human control” (MHC) has gained significant traction. Introduced by the non-governmental organization Article 36 in the first CCW Expert Meeting on LAWS, MHC was initially conceived as a “one-size-fits-all” model of human control that must be exercised at all times over any individual attack carried out by means of AWS.Footnote 150 This interpretation of MHC, however, soon proved to be conceptually contested and difficult to operationalize, as it failed to adequately account for the diversity of technological capabilities, operational environments and levels of autonomy involved.Footnote 151 As the debate developed, the focus gradually shifted toward the notion of “context-appropriate human judgement and control”, which has since become an established concept within the GGE LAWS discussions.Footnote 152 This shift in terminology reflects a growing recognition that the form and degree of human involvement may and should vary depending on operational context, system design and level of risk involved.
This article takes the same approach, arguing that different degrees of real-time human control may be exercised over ACC, to be determined on an ad hoc basis by taking into consideration the characteristics and design of ACC, the circumstances of their deployment and the operational environment in which they function. With respect to the characteristics and design of these systems, AI-enabled ACC are likely to require a higher degree of human control than ACC operating on the basis of pre-imparted instructions. While ACC that follow a set of instructions in the form of “if <x happens> then <do A> else <do B>” are particularly predictable, AI-enabled ACC can adapt their behaviour by learning from experience. As a result, their conduct may become less predictable, thereby increasing the risk of unintended violations of IHL. For this reason, AI-enabled ACC require continuous human oversight, both to monitor their behaviour and to ensure timely intervention – including, where necessary, the suspension or cancellation of an attack – whenever their functioning gives rise to legal concerns. The human operator would thus serve as a “fail-safe mechanism”, ensuring that the ACC operates as intended and preventing any unintended breaches of IHL.Footnote 153

Similarly, different circumstances of deployment require different degrees of human control. In pre-planned autonomous cyber attacks against validated targets that are not expected to cause excessive collateral damage, real-time human control must be retained to verify the lawfulness of the target in case of doubt, to update the proportionality assessment as necessary and to suspend or cancel the attack if circumstances require. In autonomous cyber attacks that are not planned in advance, as in the case of defensive ACC that autonomously react to an incoming attack, real-time human control must be retained at all times to make context-specific judgements in compliance with the IHL principles of distinction and proportionality, to implement any other feasible precautions deemed necessary and to suspend or cancel any attacks that could result in a breach of IHL.

Finally, with respect to the environment, ACC operating in complex and dynamic scenarios will require a higher degree of real-time human control than those operating in static and predictable environments in which human operators already have adequate situational awareness. The legal assessments must in fact be continuously adjusted to reflect changes in the operating environment.Footnote 154
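The ad hoc determination just described could, in simplified terms, be thought of along the lines of the following Python sketch; the control levels and the mapping are illustrative assumptions rather than established doctrine, and any real determination would rest with the commander and legal advisers.

```python
from enum import Enum, auto

class ControlLevel(Enum):
    CONTINUOUS = auto()  # oversight at all times, with immediate power to intervene
    SUSPENSIVE = auto()  # operator resolves doubt, updates assessments, can suspend or cancel

def required_control(ai_enabled: bool, preplanned: bool, dynamic_env: bool) -> ControlLevel:
    """Select a degree of real-time human control on an ad hoc basis (illustrative)."""
    if ai_enabled or dynamic_env or not preplanned:
        # Adaptive systems, unplanned engagements and dynamic environments call
        # for continuous monitoring and timely intervention by a human operator.
        return ControlLevel.CONTINUOUS
    # Pre-planned attack on a validated target in a static, predictable
    # environment: control is still retained to verify lawfulness in case of
    # doubt and to suspend or cancel the attack if circumstances change.
    return ControlLevel.SUSPENSIVE
```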
The duty to take all feasible precautions against the effects of attacks
Under IHL, States must take all feasible precautions against the effects of attacks (also known as “passive precautions”Footnote 155). According to Article 58 of AP I, notably, parties to an armed conflict are obliged to take all feasible precautions to protect the civilian population and civilian objects under their control against the effects of attacks.Footnote 156 This due diligence obligation requires parties to a conflict, to the maximum extent feasible, to endeavour to remove civilians and civilian objects under their control from the vicinity of military objectives,Footnote 157 to avoid locating military objectives within or near densely populated areasFootnote 158 and to protect civilians and civilian objects from dangers resulting from military operations.Footnote 159
Passive precautions are particularly important in the cyber domain – where, as noted above, most of the current infrastructures can serve both civilian and military purposes – and will play an even more crucial role in the event of an autonomous cyber attack. The segregation of military from civilian cyber infrastructure, networks and data may indeed contribute to limiting the uncontrolled spread of ACC. While complete segregation might not be currently feasible, given the interconnectedness of military and civilian infrastructures, States shall still pursue this endeavour to the maximum extent possible.Footnote 160 For example, States should disconnect their military networks from the internet, not only for their own defence but also to protect civilian networks and infrastructures.Footnote 161 If an ACC is programmed to attack an air-gapped military network – that is, a network that is physically separated and isolated from other networks and systems that are not secure, such as the internetFootnote 162 – the chances that it will proliferate outside the intended network and target civilian infrastructures will be reduced.
Besides segregation, States must also adopt, to the maximum extent feasible, any other necessary precautions to protect civilians and civilian objects under their control from any dangers resulting from military cyber operations, including autonomous cyber operations.Footnote 163 Such “other necessary precautions” include precautions taken in advance of a cyber attack, such as building strong cyber resilience cultures across society by increasing public awareness of the risks associated with autonomous cyber operations;Footnote 164 engaging with the private sector, which owns most cyber infrastructures;Footnote 165 adopting cyber hygiene measures, as well as passive defensive measures (e.g., antivirus software), to enhance the protection of civilian infrastructures from intrusive autonomous cyber operations;Footnote 166 executing regular back-ups to facilitate data recovery;Footnote 167 warning about impending or ongoing autonomous cyber operations; and providing technical assistance to those targeted by means of ACC.Footnote 168
Finally, while protective emblems are not per se a form of passive precaution, actions related to their adoption might be. In light of the discussions on the possible creation of a digital marker for protected objects,Footnote 169 it is suggested that the parties to an armed conflict could, in the future, use digital emblems to signal specific categories of persons and objects that enjoy special protection under IHL.Footnote 170 Such digital emblems could be used to identify digital components (such as assets, services and data) of protected entities, signalling that under IHL, they cannot be targeted. While no digital emblem has been created so far, the ICRC is currently leading the development of technical standards on which a digital emblem could rely. It is working, together with other partners, on a prototype design known as Authentic Digital Emblem (ADEM).Footnote 171 Through ADEM, “digital emblems are machine-readable and cryptographically secured claims of protection”.Footnote 172 Thus, they could conceivably be recognized by ACC, which could be programmed accordingly in order to spare protected persons and objects.
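To give a sense of how a machine-readable emblem might be honoured in practice, the sketch below verifies a cryptographically signed claim of protection before a host is spared. It uses the third-party Python cryptography library and an Ed25519 signature purely as an assumed example; ADEM’s actual formats, protocols and trust model are still under development and are not represented here.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def bears_valid_emblem(claim: bytes, signature: bytes,
                       authority_key: Ed25519PublicKey) -> bool:
    """Return True if the host presents a claim signed by a trusted authority.

    Illustrative only: a valid emblem means the host must be spared, while an
    invalid or missing emblem does not by itself make the host targetable –
    all other IHL rules and precautions continue to apply.
    """
    try:
        authority_key.verify(signature, claim)
        return True   # cryptographically valid claim of protection: do not target
    except InvalidSignature:
        return False  # unverifiable claim: fall back to ordinary precautions
```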
Conclusion
Although to date no ACC has been reported to have been used in an armed conflict, there is little doubt that ACC will play a central role in future wars. The strong tactical and strategic advantages they offer both in defence and offence will be particularly appealing for States. Yet ACC’s lack of real-time human control and the risk of unpredictable, unreliable and unexplainable behaviours, combined with the fact that ACC operate through cyberspace, where most infrastructures are interconnected, raise important concerns about unintended violations of IHL.
This article contributes to further strengthening the protection that IHL already affords against the dangers arising from the use of information and communications technologies during armed conflicts, by providing some guidance on how to prevent the use of ACC from resulting in unintended violations of IHL. In the absence of an ad hoc treaty regulating the use of ACC in armed conflict, this article contends that due diligence, being a chapeau obligation for several norms of IHL, already offers a valuable framework for mitigating the risks posed by these technologies. In particular, it analyses four IHL obligations of conduct that are relevant when it comes to ACC: the duty to respect and ensure respect for IHL, the duty to conduct legal review of new weapons, means and methods of warfare, the duty to take all feasible precautions in attack, and the duty to take all feasible precautions against the effects of attacks. Based on these provisions, States have the positive obligation to take all appropriate measures to develop and use ACC in compliance with IHL.
In the absence of publicly reported State practice on the matter, this article puts forth some possible diligent measures that States shall adopt both in times of peace and in times of war. States shall ensure that ACC are developed in compliance with IHL standards, involving both legal and technical experts in the process. Legal reviews need to include new standards that take into account the predictability, reliability and explainability of the software agent, as well as its training and performance in normal and expected uses. Individuals involved in the development, review and deployment of ACC must receive specific training in light of ACC’s technical features and legal implications. Before and during deployment, States must exercise particular care in the use of ACC. Military commanders shall have the legal and technical expertise to determine whether all feasible precautions have been taken, in order to prevent ACC from having an excessive detrimental impact on the civilian population, civilians and civilian objects. Military commanders must perform careful intelligence-gathering, ensure that the software agents are programmed according to precise instructions, embed appropriate operational constraints and retain a certain degree of real-time human control over ACC to prevent unintended violations of IHL. States must also take all feasible precautions against the effects of autonomous cyber attacks by segregating their military networks to the maximum extent possible and adopting measures to protect the civilian population and minimize civilian harm. While additional diligent measures may emerge over time, including in light of States’ growing capabilities,Footnote 173 failure to comply with these obligations of conduct triggers the international responsibility of the State.Footnote 174
In conclusion, due diligence proves to be particularly valuable in ensuring that international law keeps pace with rapid technological development.Footnote 175 This is even more evident in cyberspace, where States are more inclined to discuss non-binding norms than legally binding instruments.Footnote 176 At the same time, however, a clear and common understanding of what due diligence obligations require of States that develop and deploy ACC in armed conflicts is essential. To this end, States should include in their discussions on the military uses of AI considerations regarding which concrete diligent measures should be adopted to mitigate the human cost of ACC, and should share best practices and expertise.