
The erosion of human(e) judgement in targeting? Quantification logics, AI-enabled decision support systems and proportionality assessments in IHL

Published online by Cambridge University Press:  13 January 2026

Jessica Dorsey*
Affiliation:
Assistant Professor of International Law, Utrecht University School of Law, Utrecht, the Netherlands

Abstract

This article examines the growing use of artificial intelligence (AI)-enabled decision support systems in targeting operations and their implications for proportionality assessments under international humanitarian law (IHL). Emphasizing the primacy of the duty of constant care and precautions in attack as obligations that must be exhausted before and during proportionality assessments, the article advocates for a fuller understanding of civilian harm. It traces the historical trajectory of “quantification logics” in targeting, from the Vietnam War to contemporary AI integration, and analyzes how such systems may reshape decision spaces, cognitive processes and accountability within the context of armed conflict. Specifically, the article argues that over-reliance on computational models risks displacing the contextual, qualitative judgement essential to lawful proportionality determinations, potentially normalizing civilian harm. It concludes with recommendations to preserve the space that human reasoning occupies as central to IHL compliance in targeting operations.

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press on behalf of International Committee of the Red Cross.

Introduction

Within militaries, the rapid adoption of artificial intelligence (AI)-enabled decision support systems (AI-DSS) – tools that use AI techniques to gather and analyze data, provide insight into the operational environment and offer actionable recommendationsFootnote 1 – has begun reshaping decision-making, particularly within targeting operations, such as within the Joint Targeting Cycle (JTC) and similar procedures.Footnote 2 AI-DSS are intended to assist military decision-makers in evaluating factors relevant to legal compliance, including the duty to take precautions and the requirement to ensure proportionality in attacks,Footnote 3 and proponents argue that these systems can enhance efficiency by accelerating the observe–orient–decide–act loop.Footnote 4 The present author acknowledges that such potential benefits may exist in the way certain systems are designed or developed; however, this article focuses instead on the realities that these tools reveal in their use and practice. It examines the risks introduced by their underlying computational quantification logicsFootnote 5 (defined here as the translation of complex legal and ethical judgements into numerical or statistical models), particularly in relation to human cognition, accountability, and adherence to international humanitarian law (IHL).

Tensions exist between these logics and what Woodcock refers to as “choice architectures” – the way in which choices are presented through AI-DSS, shaping how humans make decisions – and these tensions carry the potential to undermine the human judgement required for IHL determinations more broadly, including proportionality assessments.Footnote 6 This article offers a novel contribution to the literature by tracing the longer trajectory through which datafication, or the turn toward reliance on quantification metrics and algorithmic modalities, has come to shape military decision-making architectures, with a particular focus on proportionality assessments within the JTC. By situating AI-DSS within this broader historical and epistemic shift, the article argues that their increasing integration risks reinforcing existing quantification logics, encouraging decision-makers to translate complex elements of proportionality analysis into computational terms. This, in turn, threatens to sideline the judgement, ethical deliberation and legal reasoningFootnote 7 necessary for context-appropriate human judgement and control as well as the preservation of human responsibility and accountability.Footnote 8

Central to this argument is an analysis of how use of AI-DSS can influence a commander’s ability to reasonably or responsibly assess proportionality. If AI-DSS increasingly guide or dictate (parts of) these assessments, the human capacity for contextual judgement and reasoning may diminish through various cognitive biases and shifts, leading to decisions that may be algorithmically justified but legally non-compliant. At the speed and scale introduced by AI-DSS, this could also normalize or even increase civilian harm.Footnote 9

Technological advancements in warfare, particularly those related to datafication, automation and AI, have fuelled the belief that armed conflict can be made more precise, efficient and ethical through quantifiable metrics and algorithmic decision-making.Footnote 10 This belief reveals an embedded quantification fallacy: the mistaken assumption that the moral and legal complexities of the proverbial Clausewitzean “fog of war” can be distilled down into measurable inputs and outputs suitable for rapid computation and automated solutions.Footnote 11 Falling victim to this fallacy is particularly consequential in assessing proportionality in IHL, a rule that prohibits attacks expected to cause incidental civilian harm that would be excessive in relation to their anticipated concrete and direct military advantage.Footnote 12

This article posits that the increasing reliance on quantitative tools like AI-DSS within targeting operations risks displacing the contextual qualitative human judgement that is essential to assessing proportionality. These assessments are not reducible to a technical or mathematical formula but rather reflect a normative standard that demands subjective, context-sensitive judgement from a reasonable military commander acting in good faith. By tracing the historical trajectory of this quantification impulse from the Vietnam War to modern-day conflicts, this article demonstrates how the pursuit of precision and speed in warfare through datafied and algorithmic means risks shifting cognitive and normative frameworks, distorting legal and moral reasoning, displacing human judgement, and introducing new cognitive and accountability risks that stakeholders must be aware of and work to mitigate.Footnote 13

The article is structured in four parts. The first part situates proportionality within IHL, emphasizing the need for a comprehensive understanding of civilian harm and what this article terms the primacy of the duty of constant care and precaution, and framing proportionality solely as a final legal safeguard. It also outlines targeting operations illustrated through the lens of the JTC and locates proportionality decision-making within it. The second part traces the evolution of “quantification logics” in targeting, from the Vietnam War to the modern era and the growing integration of AI-DSS within the JTC. The third part examines the cognitive impacts that these logics, systems and approaches may have on proportionality assessments. The fourth part concludes with concrete recommendations to reaffirm the indispensability of human reasoning in proportionality assessments under IHL.

Situating proportionality assessments within IHL

The legal framework of IHL is designed to balance military necessity with humanitarian considerations that reduce human suffering during armed conflicts and protect civilians from the impact and effects of hostilities.Footnote 14 It does so through articulations of rules related to distinction and proportionality, which operate within the broader framework of the overarching duty of constant care “to spare the civilian population, civilians and civilian objects” throughout the duration of hostilities and requirements to take precautions in attack in order to avoid or minimize harm to civilians.Footnote 15 Given growing evidence that AI-DSS can reshape cognitive processes in the JTC, and recognizing that commanders must make highly consequential decisions that balance expected military advantage against potential harm to civilians and civilian objects, this article examines how the IHL rule of proportionality is operationalized within the JTC and how the quantification logics of AI-DSS may influence that operational thinking.

Codified as a prohibition on indiscriminate attacks against the civilian population in Article 51(5)(b) of Additional Protocol I (AP I) and as a set of obligations related to taking precautions in attack in Article 57(2)(a)(iii) of that same instrument, the rule of proportionality requires that “those who plan or decide upon an attack” must take measures to ensure that they do not carry out attacks “which may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated”.Footnote 16

Toward a comprehensive understanding of civilian harm

As outlined above, the protection of civilians and civilian objects from the effects of hostilities lies at the centre of IHL principles and military doctrine related to distinction, precaution and proportionality.Footnote 17 Meeting these obligations throughout the JTC requires military decision-makers to understand not only what constitutes civilian harm but also its main causes.Footnote 18 In terms of legal elements of proportionality assessments, Van den Boogaard discusses the notion of “excessive” in his treatise on the subject.Footnote 19 More recently, Daniele and Sari have zoomed in on a debate about the meaning of “incidental”.Footnote 20 Gillard is one of very few examples in the literature or military manuals examining the substance of the incidental side of the assessment.Footnote 21 Her paper places those discussions within a broader, less explored understanding of this aspect of assessment, one that encompasses “loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof”, commonly referred to as civilian harm. This approach has gained operational traction among several militaries in recent years. Drawing on documented cases, civil society organizations and scholars have also urged a broader, more nuanced interpretation of this clause that extends beyond immediate casualties and understands “injury to civilians” to encompass long-term, indirect and systemic effects.Footnote 22 Militaries, too, have begun to recognize the strategic, operational and tactical value of this wider approach, seeing it as integral to the legitimacy of their operations.Footnote 23 These shifts are shaping doctrine on civilian harm mitigation and response (CHMR), a key element of civilian protection in armed conflict that applies in both asymmetric warfare, where civilians often live in or near conflict zones, and large-scale combat operations conducted in proximity to more dispersed civilian populations.Footnote 24 CHMR involves actions to “prevent, deter, pre-empt, and respond to situations where civilians are targets of violence or under threat of violence”.Footnote 25 Beyond being a legal, strategic and ethical imperative, CHMR strengthens operational legitimacy and can improve operational outcomes.Footnote 26 Many military professionals now argue that embedding CHMR into operations enhances rather than hinders mission effectiveness, with some describing it as a “fourth strategic offset” in modern warfare.Footnote 27

This article links CHMR approaches with the legal obligations of proportionality assessments in the JTC, offering an integrated perspective on both. Central to the CHMR shift, and adopted here, is a broader, more nuanced view of civilian harm and what this means in terms of legal understandings. Fully understanding civilian harm, both its forms and causes, is essential to complying with the duty of constant careFootnote 28 and applying the rules of precautions in attack and proportionality in the JTC.

Historically, targeting processes have focused on analyzing effects on adversaries, with far fewer resources dedicated to assessing effects on civilians, civilian objects and the civilian environment.Footnote 29 While militaries are beginning to acknowledge the policy value of better understanding the civilian environment, this article argues that targeting law demands this as well – for example, aligning ex ante civilian harm estimates (created from collateral damage estimation methods or a combination of intelligence sources) with ex post assessments, including battle damage assessments and after-action reports. This iterative process allows lessons from previous strikes to inform future decision-making. Harmonizing the metrics used in pre- and post-strike analysis so that the same elements of civilian harm are measured before and after an attack would significantly strengthen compliance with IHL. Indeed, the precautionary principle arguably requires such an approach.Footnote 30 This perspective is set against the backdrop of increasing use of AI-DSS, where the extension of quantification logics increasingly shapes how civilian harm is understood and potentially programmed into certain AI-DSS and, therefore, how proportionality is evaluated and operationalized in targeting decisions.
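
To illustrate what harmonizing pre- and post-strike metrics could mean in practical terms, the following minimal sketch (in Python) records the same hypothetical harm categories ex ante and ex post and computes the gap between them. The field names and figures are invented for illustration only; they do not reflect any actual CDEM or battle damage assessment schema, and they do not capture the full breadth of civilian harm discussed above.

```python
from dataclasses import dataclass, fields

# Hypothetical, illustrative harm categories only. Real CDEM/BDA schemas are not
# public, and civilian harm is not reducible to a handful of numeric fields.
@dataclass
class CivilianHarmRecord:
    fatalities: int
    injuries: int
    homes_damaged: int
    persons_displaced: int
    critical_infrastructure_hits: int

def compare_estimate_to_assessment(ex_ante, ex_post):
    """Return the gap between pre-strike estimates and post-strike findings
    for each harm category, so the same metrics are tracked before and after."""
    return {f.name: getattr(ex_post, f.name) - getattr(ex_ante, f.name)
            for f in fields(CivilianHarmRecord)}

# Invented example: a strike estimated to cause no displacement in fact displaced 40 people.
estimate = CivilianHarmRecord(2, 5, 1, 0, 0)
assessment = CivilianHarmRecord(3, 9, 4, 40, 1)
print(compare_estimate_to_assessment(estimate, assessment))
# {'fatalities': 1, 'injuries': 4, 'homes_damaged': 3, 'persons_displaced': 40,
#  'critical_infrastructure_hits': 1}
```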

The primacy of the duty of constant care and taking precautions in attack

Building on this discussion of civilian harm, the next question is how to avoid or in any case minimize it in armed conflict. This article adopts and extends a view, shared by certain scholars and practitioners, of assessing proportionality as a final legal safeguard, one that becomes relevant for consideration only after the duty of constant care has been complied with across the conduct of hostilities, all feasible precautions in attack have been taken, and some incidental harm remains unavoidable.Footnote 31 A complementary operational perspective confirms that only after precautions have been taken can it become clear what kind of civilian harm may occur, which can then be fed into any subsequent proportionality assessment.Footnote 32 To be clear, precautions must be continually reassessed as circumstances evolve (also within proportionality assessments), so proportionality as one of the core co-equal principles alongside distinction and precaution is never a singular, isolated step. Rather, in terms of the functionality of the proportionality rule outlined in Article 51(5)(b) of AP I, this article advances the argument that a proportionality assessment becomes relevant not as a first consideration in targeting but rather only once it is evident that, despite all feasible precautions having been taken, some incidental civilian harm is still expected to occur. Corn offers a useful visual illustrating how these concepts interact in targeting decisions, as shown in Figure 1.

Figure 1. Targeting matrix, denoting the position of proportionality assessments at the bottom of the chart only after all feasible precautions have been taken. Source: Geoffrey S. Corn et al., The Law of Armed Conflict: An Operational Approach, 3rd ed., Wolters Kluwer, New York, 2026 (forthcoming).

As Figure 1 indicates, targeting is iterative and non-linear. Once targeting procedures begin, the law requires precautions to be taken first and often. However, certain interpretations of the law, shaped by the quantification logics embedded into military decision-making processes (MDMPs)Footnote 33 over the past twenty-five years, have had the (perhaps unintentional) effect of elevating proportionality to the central evaluative standard, often at the expense of the separate and prior obligation to take precautions in attack as well as the overarching duty of constant care to spare civilians from the effects of conflict.Footnote 34 As this author and Marta Bo have argued elsewhere, this risks contributing to a normative shift in interpreting legal obligations and the erosion of core protective mechanisms embedded in IHL and international law more broadly, diluting the emphasis on positive obligations to take measures to mitigate civilian harm in favour of manufacturing space for proactive justifications for its occurrence.Footnote 35 By leveraging quantitative modalities and the increased speed and scale they provide, integrating AI-DSS into the JTC may further exacerbate this erosion.Footnote 36

Although proportionality assessments have been described as equationsFootnote 37 or calculations,Footnote 38 they are not mathematical problems to be solved.Footnote 39 Within various phases of the JTC, outlined below, military decision-makers are required to undertake complicated analyses based on vast amounts of intelligence and information in order to assess and avoid or in any case minimize the potential for civilian harm before approving or effectuating strikes. In so doing, they must continuously qualitatively balance the military necessity of strikes against humanitarian concerns, take all feasible precautions to avoid or at least minimize civilian harm, and make ethical and legal judgements about what constitutes “excessive” harm in real time, often under intense pressure and time constraints.Footnote 40 This article’s premise aligns with Wright’s conclusion that there is “no bright-line rule” for determining what constitutes excessive civilian harm in relation to anticipated military advantage.Footnote 41

Moreover, pure “objectivity” in these assessments is not required, possible, or necessarily desirable. The International Committee of the Red Cross (ICRC) Commentary on the Additional Protocols explains that the proportionality test is inherently subjective, granting commanders a “fairly broad margin of judgement”, and emphasizes that it should be guided primarily by “common sense and good faith”.Footnote 42 As Cohen and Zlotogorski observe, proportionality is so context-dependent that no uniform standard can be applied; each situation must be assessed individually.Footnote 43 This means that, in practice, a military decision-maker must carry out a case-by-case, context-sensitive evaluation. Such assessments rely on legal reasoning, ethical judgement and situational awareness.Footnote 44 Years of experience and levels of IHL training play a significant role in leading to reasonableness of decision-making, as does access to well-trained legal advisers.Footnote 45

The standard of the “reasonable military commander” reflects the way in which the broader legal concept of reasonableness is enshrined in IHL.Footnote 46 Henderson and Reece offer a deeper examination of the historical and judicial development of the term, according to which commanders, drawing on their experience and training, make proportionality judgements based on the information reasonably available to them at the time.Footnote 47 However, this subjectivity is not without limits; it must align with what a hypothetical reasonable commander would have concluded under similar circumstances, grounding the assessment in both legal norms and practical military realities. This leads Henderson and Reece to conclude that the reasonable military commander standard reflects an “objective but qualified” approach.Footnote 48 The standard acknowledges the complexities and uncertainties inherent in real-time decision-making during armed conflict, embedding a degree of subjectivity that reflects the commander’s training, situational awareness, experience, rationality, honesty and good-faith intent in interpreting ways to avoid or in any case minimize harm to civilians while allowing for effective achievement of concrete and direct military advantage as a result of targeting decisions.Footnote 49 Other factors, including checklists or auxiliary means of intelligence setting out requirements of information necessary to make decisions within the MDMP, can help to ensure a higher degree of objectivity in reaching the standard of reasonableness for military commanders, but they cannot be a substitute for context-appropriate human judgement and control.Footnote 50

In other words, once it must be assessed, proportionality must be evaluated through a comprehensive harm lens – one that accounts not only for civilian fatalities but also for “injury to civilians”, including mental harm, displacement, loss of infrastructure and livelihood, and damage to cultural or environmental assets. These considerations are not reducible to mere data points,Footnote 51 and proportionality cannot be determined through fixed arithmetic or algorithmic values.Footnote 52 To operationalize this understanding, military training on the application of proportionality must emphasize analogical reasoning (encapsulated in the notion of the “reasonable military commander”) over rigid computational thresholds. Rather than quantifying civilian harm through abstract metrics, training modules should be built on comparison-based exercises drawn from past cases that emphasize the integrated approach outlined above: setting ex ante civilian harm expectations against actual ex post effects, informed by experiential knowledge and moral reasoning.Footnote 53 All of these are areas in which human judgement excels. Some might argue that large language models (LLMs), with their strong pattern recognition abilities, could perform these tasks better than humans,Footnote 54 but studies show that these models often miss subtle contextual meanings and that their performance declines further when they are compressed or made more efficient.Footnote 55 This suggests that context-sensitive human judgement and control cannot technically be replaced by LLMs or AI-enabled systems. Moreover, even using them merely to assist humans carries risks, some of which are outlined below. Militaries working to integrate these systems within targeting operations should be aware of these risks and work to eliminate or minimize them; specifically in this instance, introducing LLMs to targeting cycles could create situations of over-reliance, leading to cognitive shifting, where human judgement deteriorates due to excessive trust in AI outputs. This too is discussed in more detail below.

While proportionality assessments are challenging, particularly for commanders under pressure, this difficulty reflects the high-stakes nature of the decisions involved. As Gillard observes, setting precise qualitative parameters may be difficult, but the determination is not impossible in practice.Footnote 56 Proportionality must be understood as a case-by-case, context-specific balancing exercise, not a fixed metric reducible to algorithmic modelling.Footnote 57 In the context of targeting operations, examined in the next section, this lens helps to reveal how such balancing is to be operationalized when commanders must weigh proportionality.

The JTC and proportionality assessments

Targeting procedures provide a structured methodology through which military forces identify, analyze and engage targets while complying with operational, legal and ethical obligations.Footnote 58 While the precise formulations vary, many States and alliances use comparable targeting models. For example, both the United States and the North Atlantic Treaty Organization (NATO) employ a JTC consisting of six non-linear phases, and while the terminology and sequencing occasionally differ (NATO’s phase 1, for instance, explicitly incorporates the commander’s intent, objectives and targeting guidance), the underlying logic and functions of each step largely overlap. Likewise, other conceptual frameworks (such as “find, fix, track, target, engage, and assess”Footnote 59) capture much of the same process. Because the US JTC is publicly articulated, doctrinally detailed and broadly mirrored in NATO doctrine and practice, this section uses it as a representative model to illuminate where and how key decisions, especially those related to proportionality assessments, are made within targeting procedures. That cycle generally includes the following six phases (see Figure 2):

  1. End-state and commander’s objectives: Defining strategic military goals and desired outcomes.

  2. Target development and prioritization: Identifying, verifying/validating and prioritizing targets based on intelligence and mission goals.

  3. Capabilities analysis: Assessing the available strike options and their effectiveness.

  4. Force assignment: Allocating specific military assets (e.g., air strikes, artillery, cyber operations) to engage the target.

  5. Mission execution: Carrying out the targeting operation while ensuring compliance with relevant laws and the rules of engagement.

  6. Assessment: Evaluating the effectiveness of the operation and adjusting for future operations if necessary.

Figure 2. Joint Targeting Cycle. Source: US Department of Defense (DoD), Joint Targeting, Joint Publication 3-60, 28 September 2018, Fig. II-2, available at: www.esd.whs.mil/Portals/54/Documents/FOID/Reading%20Room/Joint_Staff/21-F-0520_JP_3-60_9-28-2018.pdf.

AI-DSS are reportedly increasingly being integrated and used at multiple phases of this JTC, particularly in target development and prioritization (phase 2), capabilities analysis (phase 3) and mission execution (phase 5).Footnote 60 However, use of these systems raises concerns about whether human decision-makers can retain cognitive autonomy over targeting decisions when AI-DSS have been integrated, specifically within phase 4, or whether they will become overly reliant on algorithmic outputs because of the quantification logics embedded within and extended by these systems.
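
As a purely illustrative aid, the following sketch (in Python) encodes the six phases listed above and flags those where AI-DSS integration is reported; the mapping simply restates the claims in the preceding paragraph and is not drawn from doctrine or from any actual system.

```python
from enum import Enum

class JTCPhase(Enum):
    """The six phases of the Joint Targeting Cycle as described above (JP 3-60)."""
    END_STATE_AND_OBJECTIVES = 1
    TARGET_DEVELOPMENT_AND_PRIORITIZATION = 2
    CAPABILITIES_ANALYSIS = 3
    FORCE_ASSIGNMENT = 4
    MISSION_EXECUTION = 5
    ASSESSMENT = 6

# Phases in which AI-DSS use is reported in the literature cited in this article;
# an illustrative mapping only, not a doctrinal claim.
AI_DSS_REPORTED_PHASES = {
    JTCPhase.TARGET_DEVELOPMENT_AND_PRIORITIZATION,
    JTCPhase.CAPABILITIES_ANALYSIS,
    JTCPhase.MISSION_EXECUTION,
}

for phase in JTCPhase:
    marker = "AI-DSS reported" if phase in AI_DSS_REPORTED_PHASES else "primarily human-led"
    print(f"Phase {phase.value}: {phase.name.replace('_', ' ').title()} ({marker})")
```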

The evolution of quantification logics within targeting

The use of AI-DSS marks the latest step in a broader trend in military decision-making toward quantification and data-driven reasoning, especially in relation to how proportionality is assessed. As Gunneflo and Noll argue, this shift is underpinned by a cost-benefit rationality that frames targeting decisions as optimization problems, where the objective is to minimize costs (civilian harm) while maximizing benefits (military advantage).Footnote 61 Rather than viewing proportionality as a subjective, context-sensitive legal judgement, this approach seeks to render it computationally manageable, thereby reinforcing the flawed belief that ethical and legal dilemmas can be resolved through quantifiable metrics.Footnote 62 Though military AI-DSS have only recently gained public attention with their use in Ukraine and Gaza, the elevation of computational models in MDMPs has clear historical precedents. The use of such quantification logics on the battlefield can be traced over several decades and is rooted in the belief that data-driven insights can improve or even perfect decision-making by reducing or eliminating human error and increasing speed and efficiency.Footnote 63 That is to say, AI-DSS technologies did not introduce this logic; rather, they reinforce and expand it, operating at speed and scale within a framework shaped by pre-existing policies, tools and practices. The next section briefly traces the origins and continuations of this framework. This is crucial for understanding how today’s AI-DSS operate within, rather than outside of, entrenched military logics of quantification and optimization. By situating these systems within their broader lineage, we can better identify the assumptions they perpetuate and the constraints they impose on legal and ethical judgement.

Vietnam and Cold War era: The “sci-tech” lens and the McNamara fallacy

The roots of quantification in warfare can be traced back to the Vietnam and Cold War era, a pivotal period in the rise of computational logics.Footnote 64 Emerging data-processing technologies employed by the US military began to shape how complex social and military realities were perceived and acted upon. Advances in systems analysis, statistical modelling and computer-assisted decision-making converged with military strategy, embedding numerical metrics at the heart of operational planning. Schwarz describes this as viewing operations through the “scientific-technological lens”, a method that prioritizes and privileges the quantifiable aspects of war.Footnote 65 This shift reflects a broader faith in data, where the perceived objectivity of quantification overshadows the complex qualitative dimensions of human conflict.Footnote 66

Specifically, the Vietnam War marked a key shift for the United States toward quantification in military decision-making. The introduction of the Hamlet Evaluation System (HES), a data-driven tool measuring pacification progress through metrics like enemy presence and government control, is one example of this.Footnote 67 Though statistically innovative, the HES was undermined by subjective inputs and data manipulation, failing to capture on-the-ground realities. This represented an example of what would become known as the McNamara or quantification fallacy, a critique of the over-reliance on quantifiable data in decision-making processes.Footnote 68 The fallacy arises when decision-makers prioritize measurable variables while disregarding critical qualitative or human factors simply because they are not easily captured through measurable elements. At its core, the fallacy assumes that what cannot be quantifiably measured is irrelevant, which results in flawed reasoning. Particularly in the context of armed conflict, where the environment is highly complex and dynamic, this can produce grave operational, legal and ethical consequences, including increased risks of civilian harm.Footnote 69

In this era, the United States also introduced the term “collateral damage” as a euphemism for civilian harm.Footnote 70 As nuclear tensions rose, so did the use of collateral damage estimation methodologies (CDEMs), or analyses conducted by the military to estimate potential harm to civilians and damage to civilian property during an operation. Initially focused on infrastructure, CDEMs soon shifted toward estimating civilian deaths.Footnote 71 These estimates later formed the basis for non-combatant casualty cut-off values (NCVs), discussed below, which set quantitative thresholds that cognitively influence the interpretation of “acceptable” levels of civilian harm.

Network-centric warfare: Connecting everything to everything else

Building on the United States’ legacy of Vietnam-era quantification, and despite the limitations of those approaches, this evolution in military decision-making continued into the late twentieth century, culminating in the rise of network-centric warfare (NCW), which further embedded data-driven approaches at the heart of operational doctrine.Footnote 72 NCW aims to turn information superiority into combat power by linking sensors, decision-makers and shooters to enable shared awareness, faster decisions, higher operational tempo, and greater lethality, survivability and self-synchronization.Footnote 73 It also seeks to redefine effectiveness through speed, precision and data integration, enabling commanders to act pre-emptively with optimized force configurations.Footnote 74 Within this doctrine, the risk is that legal assessments undertaken by commanders will shift from qualitative, context-sensitive judgements to decision variables optimized by predictive modelling and surveillance data, where speed becomes a critical factor, as explored further in the discussion on AI-DSS below.

The “Global War on Terror”: Predictive analytics, CDEMs and NCVs

With NCW as the foundation, the post-9/11 shift in US military doctrine (mirrored by allied forces during coalition operations) toward counterterrorism and counter-insurgency, combined with rapid technological advances, accelerated the reliance on quantification logics and tools in targeting. The widespread use of armed drones, combined with algorithmic surveillance systems that assess threats and estimate civilian harm,Footnote 75 paved the way for controversial “signature strikes”, targeting individuals based on behavioural “pattern-of-life” analyses rather than confirmed identities.Footnote 76 These strikes reportedly relied on metadata and algorithms to infer threats, often without direct human verification, raising significant IHL concerns about the distinction between civilians and combatants and the obligation to take feasible precautions in attack.Footnote 77 As Schwarz outlines, these technologies reshape human decision-making in war by embedding operators within a technological framework that projects enhanced, data-driven “superhuman” perception and an illusion of moral (and, as argued here, legal) certainty. However, this reliance on algorithmic profiling and quantification logic reduces complex legal and moral judgements to technical processes, enabling “signature strikes” where individuals may be deemed guilty until proven innocent, often only after death.Footnote 78

The United States extended the CDEM highlighted above through a suite of analytical tools during this era.Footnote 79 One of the most widely known, the Collateral Damage Estimation Tool, colloquially known as “Bugsplat”, uses imagery and data to estimate potential civilian harm.Footnote 80 The Population Density Reference Table is a tool that projects likely civilian presence in target areas.Footnote 81 Other systems, such as the Digital Imagery Exploitation Engine and the Digital Precision Strike Suite Collateral Damage Estimation tool, employ algorithms to support strike planning by locating and characterizing targets, assisting with weapon selection and coordinate measurement, estimating collateral damage, and producing output graphics for databases.Footnote 82

These models often omit key factors that are essential for a full understanding of civilian harm, such as the effects of secondary explosions and other reverberating effects.Footnote 83 While these tools project an aura of precision and objectivity, critics contend that they risk reducing human lives to statistical abstractions, obscuring the moral gravity of lethal decisions.Footnote 84 From a legal perspective, such omissions raise IHL concerns, particularly as regards the principles of distinction and proportionality, which require decision-makers to consider all foreseeable effects on civilians. Excessive reliance on algorithmic outputs, particularly in high-tempo decision-making environments, risks sidelining the qualitative and context-specific assessments required under IHL. The speed and efficiency introduced by AI-DSS can create pressure to act quickly, often at the expense of the careful, time-consuming evaluation needed to foresee, assess and minimize civilian harm. This tension between faster decision cycles and the legal obligation to account for all reasonably foreseeable harm can lead to systematic under-estimation of civilian risk.Footnote 85 Accordingly, human judgement must remain central. Quantitative tools must aid, not replace, the deliberate weighing of civilian risk mandated by IHL. Decisions informed by AI-DSS should be complemented by other intelligence sources and contextual analysis, even when doing so slows the decision-making process. Preserving this deliberative space is essential to ensuring compliance with the principles of distinction, precaution and proportionality.

A final example of quantification logic is the use of NCVs, or policy thresholds that define the maximum number of civilian deaths that a commander is authorized to accept as “collateral damage” in a proposed strike before requiring higher-level approval. The United States introduced these metrics in 2011 to streamline proportionality assessments. NCVs sometimes set the threshold as low as zero, effectively prohibiting civilian casualties without senior authorization.Footnote 86 During Operation Inherent Resolve, NCVs varied by target type and urban density – explicit thresholds of thirty were reported to be applied in operations like Iraqi Freedom and the targeting of Osama Bin Laden.Footnote 87

As Corn explains, NCVs were not intended to determine what level of civilian harm is legally acceptable or proportional under IHL; rather, they were designed to establish cognitive and procedural thresholds that prompt commanders to escalate decisions.Footnote 88 Yet simply setting numerical limits shapes how proportionality is assessed, effectively framing when anticipated civilian harm warrants heightened scrutiny relative to expected military advantage.
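
To make the procedural (rather than legal) character of NCVs concrete, the following minimal sketch uses hypothetical numbers to show the kind of threshold check an NCV encodes: it only triggers escalation to a higher approval authority and, as Corn emphasizes, says nothing about whether the anticipated harm is lawful or proportionate.

```python
def requires_higher_approval(estimated_civilian_casualties, ncv_threshold):
    """NCV-style check: escalate the decision when the collateral damage
    estimate meets or exceeds the policy threshold.

    Note: this is a procedural trigger, not a proportionality assessment;
    remaining below the threshold does not make an attack lawful under IHL.
    """
    return estimated_civilian_casualties >= ncv_threshold

# Hypothetical values for illustration only.
cde_output = 12            # ex ante estimate produced by a CDEM tool
ncv_for_target_class = 30  # policy threshold of the kind reported above

if requires_higher_approval(cde_output, ncv_for_target_class):
    print("Escalate to higher-level approval authority")
else:
    print("Within delegated approval authority")  # the anchoring risk discussed below
```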

NCVs function as policy tools that may have the effect of overriding nuanced deliberation, reducing complex legal and ethical reasoning to mathematical or algorithmic calculations. They effectively codify an acceptable level of civilian deaths and risk anchoring that standard into any legal analysis of incidental harm.Footnote 89 By framing anticipated civilian harm narrowly, as a quantifiable, acceptable cost outweighed by military advantage, NCVs risk distorting core IHL principles, reflecting a fundamental inversion of the primacy of the duty of constant care and precautions in attack and subordinating these legal requirements to proportionality logic. This approach risks shifting civilian protection from a paramount obligation to a negotiable or even optional factor, normalizing the perception that civilian harm is routine and unavoidable.

By introducing a fixed numerical threshold for acceptable civilian casualties, the metric also imposes a veneer of objectivity on a judgement that is inherently context-sensitive and qualitative.Footnote 90 Although the intention is for this quantitative benchmark to guide decisions as they scale up through the chain of command, the anchoring effect of a numerical value (discussed in more depth below) exerts significant cognitive influence within MDMPs. The broader integration of AI-DSS risks accelerating this trend, which could further undermine efforts to prevent or minimize civilian harm. Though details are scarce, the Pentagon had officially abandoned NCVs in its doctrine by 2018, citing their ineffectiveness; however, their legacy continues to shape allied practices and military operations today.

The modern era: AI-DSS, quantification and the JTC

Many militaries worldwide are developing AI-DSS and, in some cases, deploying them in active conflicts.Footnote 91 These systems have evolved beyond basic computational tools into highly advanced technologies that collect and analyze vast amounts of battlefield data, generate predictive models, and provide information that can assist in making targeting decisions. Their capabilities include data synthesis, as they can rapidly sift through satellite imagery, drone footage, intelligence reports and signals intelligence to assist in demarcating patterns that help militaries to evaluate battlefield conditions and increase situational awareness.Footnote 92 Machine-learning algorithms embedded in AI-DSS can be programmed to identify movement patterns, target profiles or “threats” at greater speed, and therefore scale, than human analysts. Finally, in some cases, AI models can simulate potential battle scenarios, feed into CDEMs and offer suggestions for optimizing strike strategies.Footnote 93 AI-DSS can analyze vast intelligence inputs, detect patterns and generate an overview of potential risks,Footnote 94 using predictive models to estimate enemy behaviour, civilian harm and operational outcomes based on historical data.Footnote 95 Despite all these purported benefits, however, these systems also introduce a host of new complexities and risks, as discussed in the following section.

Speed, scale and shifts in cognitive abilities: Risks of AI-DSS integration in the JTC

Many of the advancements described above, reflecting the principles of NCW, aim to enhance decision-making “efficiency” through the rapid speed and scale enabled by AI-driven data outputs.Footnote 96 In the author’s view, the integration of AI-DSS accelerates the shift from context-dependent qualitative judgements to reliance on quantified decision variables, seeking to streamline or replace complex human judgement with automated processes that promise greater speed, consistency and objectivity, all the while raising the risks of falling into the quantification fallacy outlined above. AI seems to be valued in wartime for accelerating target formation and eliminating the “human bottleneck” that slows decision-making, reinforcing the shift toward treating proportionality as an optimizable variable rather than a nuanced legal judgement.Footnote 97 A reliance on AI-driven outputs risks obscuring the inherently qualitative, context-specific considerations that proportionality assessments demand, such as civilian presence, environmental factors and the unpredictable effects of weapon use. By prioritizing speed and computational efficiency, these systems can inadvertently anchor decision-making to numerical or algorithmic metrics, reinforcing the very quantification fallacy that undermines nuanced legal and ethical judgement. Gunneflo and Noll describe this as a quest to streamline information flow,Footnote 98 while Woodcock warns that AI-DSS are being used to “smooth over friction points in human judgment”,Footnote 99 encouraging uncritical reliance on model outputs. In other words, in the drive to “fight at machine speed”,Footnote 100 human decision-makers, limited by cognitive constraints that slow processes, are deliberately pushed to the margins.

Speed and scale: Risks of reinterpreting IHL through AI-DSS

Fighting at machine speed introduces risks and raises important questions about the trade-offs involved. Trying to embed ethical and legal judgements, or portions thereof, into formulas and algorithms (arguably not even possible from a technical engineering perspective),Footnote 101 risks stripping away the qualitative context needed for conducting proportionality assessments. As Pratzner observes, while lower-level functions might be partially automated, core tasks such as target vetting, validation and target nomination – all forming part of a reasonable commander’s judgement – demand time and deliberation.Footnote 102 More generally, there are concerns about the role of these systems and the structure of human–machine teaming within the JTC regarding human cognitive agency over the MDMP.Footnote 103 Specifically, and most relevant to this article, the use of AI-DSS raises critical concerns about how these systems may be reshaping the cognitive processes through which commanders make decisions.Footnote 104

Unlike the full spectrum of complex human reasoning, which includes deductive and inductive thinking, decision-making, and problem-solving,Footnote 105 AI-DSS rely on more constrained computational approaches. Machine-learning models, including transformer-based neural networks designed to process large volumes of sequential data such as text (e.g., LLMs), operate within these narrower, task-specific parameters. They can process entire sequences in parallel, enabling greater efficiency and speed, but they function through probabilistic modelling and operate within predefined parameters set by their design and intended purpose, even if shaped by human input.Footnote 106 Because these systems are inherently limited in their ability to engage with the full range of contextual, legal, moral and experiential factors necessary for “a thorough assessment in good faith of all the different components of [a given] rule as well as the circumstances at the time”,Footnote 107 algorithmically derived recommendations can contribute to a false veneer of objectivity and precision, reinforcing the illusion of quantifiable and data-driven accuracy in targeting decisions. This illusion is amplified by the speed and scale at which such systems are able to operate, forcing humans to keep pace and increasing the risk of automation bias and over-reliance on system outputs in the pursuit of technologically enabled “clean” warfare – which, as Assaad and Williams highlight, is a fundamentally unattainable aspiration.Footnote 108
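
To clarify what “probabilistic modelling” means here, the following toy sketch (invented scores, no real model or data) shows how a language model’s raw scores over candidate outputs are converted into a probability distribution: the output is a statistical ranking conditioned on training data and parameters, not a contextual legal or moral judgement.

```python
import math

def softmax(scores):
    """Convert raw model scores (logits) into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Invented scores for three candidate continuations of a prompt.
# The model's output is a statistical ranking within its predefined parameters,
# not an assessment of context, law or morality.
candidates = ["continuation_a", "continuation_b", "continuation_c"]
scores = [2.0, 0.5, -1.0]
for token, p in zip(candidates, softmax(scores)):
    print(f"{token}: probability {p:.2f}")
# continuation_a dominates simply because its score is highest, whatever that
# score happens to encode about the underlying training data.
```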

Over-reliance on AI-DSS may also encourage commanders to progressively disengage from the exercise of critical judgement, leading to cognitive offloading of their tasks and ultimately deskilling.Footnote 109 This disengagement will be reinforced as decision-makers increasingly defer to AI outputs, trusting the system’s apparent rigour over their own nuanced judgement. Over time, this can weaken their ability to critically evaluate contextual factors, interpret ambiguous information or recognize subtle cues that a computational approach cannot capture.Footnote 110 The result is a risk of gradual erosion of domain-specific expertise, leaving human operators less capable of intervening effectively when algorithmic recommendations conflict with legal, ethical or operational considerations.

Rather than drawing upon their own training, operational experience, contextual understanding and sensitivity to legal and moral nuance, military decision-makers risk functioning as mere endorsers of machine-generated outputs so complex that they are impossible to trace or understand, thereby diminishing the role of human agency and responsibility in decisions.Footnote 111 This carries profound ethical and legal consequences and ultimately risks rendering the commander unable to act based on their own decisions and in accordance with IHL, particularly in conducting a lawful proportionality assessment.

When programmed into AI-DSS that assist in calculating proportionality, such a numerical approach encouraged by quantification logics risks becoming self-reinforcing, as the technology can lend a false sense of precision and a veneer of legality to decisions that, in reality, demand qualitative human judgement. This process effectively shifts decision-making away from human commanders exercising their own cognitive faculties and towards a reliance on system-generated outputs, fostering a culture in which suggestions by the AI-DSS risk being “rubber-stamped”Footnote 112 by the human rather than reached through contextual, qualitative reasoning or critically interrogated.

While commanders have long relied on tools such as CDEMs to support proportionality assessments, these tools can only ever provide a partial view. As experts emphasize, interpreting CDEM outputs depends on practised judgement, intuition, and sound legal and ethical reasoning.Footnote 113 However, the growing use of AI-DSS in the JTC due to aspirations for an ever-increasing tempo in the MDMP risks shifting this balance away from human deliberation and towards increasingly data-driven processes. While AI-DSS can assist human decision-makers by providing rapid data analysis, probabilistic predictions and pattern recognition, they remain inherently limited in capturing the context-specific, legal and ethical nuances that are critical to proportionality assessments. As such, they can at best support, but never replace, the exercise of informed human judgement in targeting decisionsFootnote 114 – and it is essential to be aware that even as assistive tools, these systems introduce risks, including automation bias, over-reliance on algorithmic outputs, and cognitive offloading, which can progressively erode operators’ critical reasoning and domain expertise.

Systemic risks that AI-DSS may pose to IHL interpretation in targeting

The ICRC’s 2024 report on International Humanitarian Law and the Challenges of Contemporary Armed Conflicts highlights that the erosion of protective elements of IHL, coupled with the introduction of new technologies, creates systemic risks for civilians.Footnote 115 Tools that rely on preset numerical thresholds or predictive algorithms can reinforce these risks by prioritizing speed, scale and computational efficiency over context-sensitive, case-by-case legal judgement. When proportionality and distinction are reduced to calculable metrics, even in the absence of case-specific examples, there is a heightened danger that civilians may be exposed to harm which could otherwise have been mitigated through nuanced human assessment.

Operationally, such practices risk recalibrating the application of IHL principles. When high levels of incidental harm are pre-approved through elevated NCVs, especially for lower-level targets, the primacy of the duty of constant care and the precautionary obligation to avoid or minimize civilian harm become displaced. Because targeting is an iterative process, decisions made in phase 4 of the JTC influence earlier steps in subsequent cycles, including CDEM modelling in phase 3. As a result, “acceptable” levels of harm may progressively shift upward, normalizing greater civilian harm and lowering the evidentiary threshold for target verification. This would disregard or even circumvent the duty of constant care under Article 57 of AP I, sidelining precautionary obligations in favour of proportionality assessments and thereby recasting the MDMP itself.
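
The drift described here can be sketched schematically. The following purely illustrative simulation (all parameters invented) assumes that a fraction of each cycle’s approved overrun is carried into the next cycle’s baseline; under that assumption, the “acceptable” level of harm ratchets upward across successive targeting cycles.

```python
def simulate_threshold_drift(initial_baseline, approved_overrun, adjustment_rate, cycles):
    """Illustrative feedback loop: if harm approved in phase 4 of one cycle
    feeds back into the phase 3 baseline of the next, the 'acceptable'
    level drifts upward over successive targeting cycles."""
    baseline = initial_baseline
    history = [baseline]
    for _ in range(cycles):
        approved = baseline + approved_overrun              # harm accepted this cycle
        baseline += adjustment_rate * (approved - baseline)  # next cycle's anchor
        history.append(round(baseline, 2))
    return history

# Invented numbers: baseline of 5, routine overrun of 3, 50% carry-over, 6 cycles.
print(simulate_threshold_drift(5.0, 3.0, 0.5, 6))
# [5.0, 6.5, 8.0, 9.5, 11.0, 12.5, 14.0]
```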

The integration of quantification logics into AI-DSS illustrates the socio-technical reality that operational tools shape how legal and ethical norms are engaged in practice.Footnote 116 Efficiency, speed and data optimization risk outweighing humanitarian considerations, suppressing critical scrutiny of algorithmically generated outputs. Rather than providing a genuine legal justification, reliance on numerical thresholds can create the illusion that decisions are legally or ethically grounded simply because they are algorithmically derived. This reliance risks sidelining or obscuring the careful, context-sensitive legal assessments that IHL requires, leaving human operators dependent on computational outputs while potentially neglecting critical legal considerations. As numerical thresholds become routine, they institutionalize the notion that civilian harm is inevitable and manageable through algorithmic means, further entrenching this reliance.Footnote 117

As AI-DSS systems shape and constrain the informational environment in which commanders operate, the subjective dimensions of proportionality assessments risk being sidelined.Footnote 118 In terms of shaping and constraining, AI-DSS may present a commander with a limited set of recommended courses of action or nominated targets without revealing how many alternatives exist, how the system generated or ranked them, or whether they are truly optimal. In this sense, decision-making can resemble “looking at the world through a straw”, where the commander’s perspective and options are constrained by the system’s narrow and opaque outputs. Incidental civilian harm is codified into numerical tolerances, via NCVs, predictive algorithms or both, reflecting a broader “datafication” of warfare that threatens to erode IHL’s protective aims.Footnote 119 In this setting, human commanders shift from being central legal decision-makers to mere overseers of algorithmic processes. Preserving the integrity of proportionality as a legal standard requires critically examining how AI-DSS and related practices are reshaping the balance between human judgement and machine-driven analysis in armed conflict. Unlike AI-DSS parameters, human cognition allows for situational flexibility, adapting to unforeseen circumstances, interpreting ambiguity and applying legal reasoning in complex scenarios.Footnote 120 Conversely, unlike human cognition, AI systems are ill-equipped to grapple with moral and ethical deliberation, interpret ambiguity, or assess adversary intent and strategic objectives.

From a technical standpoint, algorithmically guided systems will always lack the flexibility to engage with the qualitative dimensions of military decision-making. Their reliance on deterministic data processing limits their ability to navigate uncertainty, deception or incomplete intelligence, factors that are often central to operational judgement.Footnote 121 As Greipl notes, AI’s “forecasting” is based on data analytics of past behaviours and lacks context and human logic.Footnote 122 This renders AI systems unable to properly scrutinize, as required by IHL, whether a proposed objective is a lawful target or whether anticipated incidental harm will outweigh the expected military advantage under the circumstances prevailing at the time. While the predominant worry outlined in this article relates to how AI-DSS may skew proportionality analyses, their potential to undermine distinction is equally significant. Given that misidentification is a primary driver of civilian harm, designers, developers and deployers need a clear understanding of these systems’ error tendencies and must adopt measures to avoid, or at minimum mitigate, such misrecognition, as this understanding also feeds into proportionality assessments, should they become relevant.Footnote 123

AI-DSS are inherently constrained by the scope and quality of their training data.Footnote 124 These limitations mean they are poorly equipped to navigate environments marked by uncertainty, incomplete intelligence or ethically complex dilemmas, all three of which are ubiquitous within the context of armed conflict. In contrast, military commanders are assumed to have the training, capacity and legal obligation to interpret ambiguous information and exercise legal and moral judgement in nuanced and context-dependent situations.

Cognitive impact of AI-DSS: Automation bias, anchoring, offloading and deskilling

Gunneflo and Noll argue that historically, proportionality reasoning has functioned as a mechanism through which new technologies are lawfully integrated, largely through human interpretive judgement. Past innovations left the legal decision-making process intact, with humans still applying existing law. Digital decision support in the military, however, marks a different kind of shift: it enters the cognitive process of legal reasoning itself, potentially displacing the human role to the margins and creating a more direct link between law and technology. AI-DSS, Gunneflo and Noll contend, represent a technological leap that works from within, modifying or even displacing human judgement in unprecedented ways.Footnote 125 Woodcock importantly highlights that while human decision-making is not free from bias, the introduction of AI-DSS brings new forms of bias and alters the very structure of decision-making.Footnote 126 This shift reflects a broader cognitive transformation in warfare, one that risks marginalizing human judgement and displacing it from its central role in lawful and ethical military operations.

Concretely, the integration of AI-DSS into the JTC can introduce biases that can affect a commander’s ability to conduct legally sound proportionality assessments. These systems can subtly transfer decision-making authority from the performance of human judgement to the acceptance of algorithmic output, with knock-on effects for accountability and legal responsibility. The risk of (partial) cognitive offloading – or transferring reasoning tasks to external systems and reducing the mental effort required for decision-making – arises from designs aimed at efficiency and reducing “friction points”, as Woodcock describes.Footnote 127 Because AI-DSS can rapidly process large volumes of data and generate recommendations, commanders may be tempted to assume that the system has accounted for all necessary variables, creating a feedback loop of growing dependence on AI tools. This could raise questions of legal compliance and create operational and strategic risks including erosion of human oversight, accountability gaps, operational dependency and heightened legal exposure.Footnote 128 Mitigating these risks requires sustained training in, and consideration of, IHL compliance at every stage of system design, development, deployment and decommissioning, advancing a model of responsibility or legality by design.Footnote 129

Potential biases introduced by AI-DSS

One way in which cognitive offloading manifests in proportionality assessments during the JTC is through automation bias – that is, the tendency to place undue trust in AI recommendations without critical scrutiny.Footnote 130 This bias can lead to legally non-compliant decisions, especially when AI systems produce errors or uncertainties such as miscalculating risk, omitting qualitative factors or misidentifying a target.Footnote 131 In the JTC, a risk arises when vast targeting lists are generated: the speed of operations can create time pressure that leads to skipping target validation or verification altogether.Footnote 132 Treating AI-DSS outputs as optimized solutions can erode commanders’ critical engagement and heighten automation bias, especially in the later JTC phases (4 and 5) where precautions and proportionality must be (re)assessed under evolving conditions.Footnote 133 Simply put, at machine speed, the space for moral and contextual legal reasoning shrinks, creating the risk that human oversight will become little more than a procedural rubber stamp.

Anchoring bias is another cognitive distortion that AI-DSS can introduce, with direct effects on proportionality assessments. Anchoring occurs when an initial piece of information disproportionately influences subsequent decision-making. In target selection and nomination, for example, an algorithm might place a certain individual at the top of a targeting list, anchoring perceptions of that target’s military value.Footnote 134 Similarly, casualty or damage estimates produced by CDEMs in phase 3 may lead commanders in phase 4 to subconsciously adjust their judgements around those initial figures, even when new information suggests a different conclusion, despite their legal obligation to take all feasible precautions in attack in order to avoid, or in any event minimize, civilian harm.

NCVs are a concrete example of anchoring bias in practice. Once commanders internalize a set threshold of “acceptable” civilian harm above which higher-level approval is required, that figure can shift what is perceived to be legally necessary or feasible under the duties of precaution and constant care. When proportionality becomes a numerical exercise, in which figures like twenty, thirty or even hundreds of civilian deaths are weighed against eliminating a single target, it raises fundamental questions about whether the principles of precautions in attack and proportionality are truly being upheld, and whether such an approach is compatible with IHL’s object and purpose of limiting the effects of armed conflict by protecting civilians and restricting means and methods of warfare.
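
To make the quantification logic at issue concrete, the following minimal sketch is purely illustrative and assumes a simplified NCV-style rule: the function name, target categories and threshold values are all hypothetical and are not drawn from any fielded system or doctrine. It shows how such a rule reduces part of the assessment to a single numerical comparison, leaving out precisely the elements – intelligence reliability, reverberating effects, feasible alternative precautions and the anticipated military advantage – that make proportionality a contextual legal judgement.

```python
# Purely illustrative sketch of an NCV-style threshold check.
# All category labels and figures are hypothetical; no doctrine is reproduced.

HYPOTHETICAL_NCV = {"high_value": 30, "medium_value": 10, "low_value": 0}

def requires_higher_approval(target_category: str, estimated_civilian_casualties: int) -> bool:
    """Return True if the casualty estimate exceeds the pre-set threshold.

    Everything qualitative is absent here: the reliability of the estimate,
    reverberating effects, feasible alternative precautions and the weighing
    against the concrete and direct military advantage anticipated.
    """
    return estimated_civilian_casualties > HYPOTHETICAL_NCV[target_category]

# A commander anchored on the threshold may treat 29 expected civilian deaths
# against a "high_value" target as presumptively acceptable, even though IHL
# requires a fresh, context-specific assessment for every attack.
print(requires_higher_approval("high_value", 29))  # False: no escalation triggered
```

The point of the sketch is not that any particular force applies such a check, but that once a decision aid frames civilian harm in this way, the number itself becomes the anchor around which subsequent reasoning is adjusted.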

A final cognitive concern is deskilling, or (meta)cognitive erosion: the gradual loss of a commander’s ability to conduct complex assessments when AI recommendations are repeatedly followed without the exercise of independent judgement. Empirical evidence has shown that increased reliance on automated support erodes individuals’ knowledge and undermines their confidence in making independent decisions.Footnote 135 Deskilling through over-reliance on AI has recently drawn attention in the medical field, for example, with one study showing that endoscopists’ ability to detect pre-cancerous lesions during colonoscopy deteriorated after exposure to AI-enabled decision support.Footnote 136 This risk is not limited to the medical field and will be present across all sectors where these technologies are integrated.Footnote 137 Translated to the military context, this raises serious questions about long-term operational readiness and the capacity, over time, to make critical decisions without AI assistance.Footnote 138

Returning to the earlier discussion on aligning CDEMs with battle damage assessments and after-action reports for a fuller understanding of civilian harm: in practical terms, during phase 6 of the JTC, the duties of constant care and of taking precautions in future attacks require comparing anticipated harm with actual effects and feeding that information into subsequent proportionality assessments. If commanders lose this evaluative capacity through over-reliance on algorithmic tools – whether by failing to incorporate a comprehensive understanding of civilian harm into battle damage assessments or after-action reporting, or by skipping or condensing these steps under operational time pressure – cognitive offloading and deskilling can take hold. This not only risks non-compliance with the proportionality rule but also undermines compliance with the rules on precautions in attack and the duty of constant care.Footnote 139
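
By way of illustration only, the sketch below shows, under entirely hypothetical data and field names, what comparing ex ante estimates with ex post assessments might look like in schematic form: it flags systematic under-estimation so that estimation methods can be reviewed. It is not a representation of any actual CDEM or battle damage assessment process.

```python
# Illustrative sketch: comparing ex ante civilian harm estimates with ex post
# assessments to flag systematic bias. All records and thresholds are hypothetical.

from statistics import mean

# Each record pairs a pre-strike estimate with the post-strike assessment.
hypothetical_strike_records = [
    {"estimated": 3, "assessed": 7},
    {"estimated": 0, "assessed": 2},
    {"estimated": 5, "assessed": 5},
]

def mean_estimation_gap(records):
    """Average difference between assessed and estimated civilian harm.

    A persistently positive gap suggests the estimation method under-predicts
    harm and should be reviewed; the figure alone cannot say why, which is
    where qualitative after-action review and civilian perspectives come in.
    """
    return mean(r["assessed"] - r["estimated"] for r in records)

gap = mean_estimation_gap(hypothetical_strike_records)
print(f"Mean gap between assessed and estimated harm: {gap:.1f}")
if gap > 1:  # illustrative review threshold, not a doctrinal figure
    print("Systematic under-estimation: estimation method warrants review.")
```

Such a loop supports the duties of constant care and precautions only if the ex post side captures the fuller, multidimensional picture of civilian harm discussed above; otherwise it merely recalibrates a narrow metric.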

Conclusion: Human(e) judgement, AI-DSS and proportionality assessments

This author has argued, along with others elsewhere, that framing AI-DSS as mere tools has led to an under-estimation of their impact on cognitive decision-making within the JTC.Footnote 140 Limited transparency about AI-DSS design and use, coupled with the absence of focused scrutiny, has left a critical gap in understanding. The international debate’s persistent focus on autonomous weapon systems (AWS) has further obscured the growing influence of AI-DSS, which lack a comparable regulatory forum. To address this, it is essential to reassert the primacy of human legal and ethical reasoning in targeting decisions, drawing from lessons on human–machine interaction and AWS governance. Greater awareness of how AI-DSS affect human cognition and deliberation must be promoted to advance inclusive, informed debate on the risks and structural shifts that these systems introduce.

To ensure compliance with IHL’s protective aims, the principles of distinction and precautions in attack must remain primary, with proportionality assessments (which also include precautionary requirements) only undertaken as a final safeguard once all feasible civilian harm avoidance or reduction measures are exhausted. Proportionality is not a formula; it requires contextual, interpretive judgement grounded in legal reasoning, operational realities and humanitarian concerns. AI-DSS can inform but never replace this process. Deliberative legal spaces, meaning the points in the targeting process at which legal decisions must be made, should be built into JTC workflows to ensure that these decisions rely on qualitative, precedent-informed analysis rather than machine-speed processing.

Training must emphasize analogical reasoning over numerical thresholds, approaching civilian harm as a broad multidimensional reality. Civilian perspectives, gathered through consequence reviews, red teaming,Footnote 141 community impact reports and partnerships with non-governmental organizations, should be incorporated into post-strike analysis to ensure that lived experiences shape future precautions and proportionality assessments. Aligning ex ante civilian harm estimates with ex post assessments enables lessons from past operations to refine future decision-making and adjust methods where harm is consistently over- or under-estimated.

Integrating AI-DSS into targeting operations requires considering and designing for legal and moral discretion and compliance at every stage of the AI system life cycle, through legal reviews and safety assurance frameworks for human–machine teaming, and by preserving cognitive friction and the space and time needed for decision-making rather than engineering them out. This means resisting system designs that accelerate decision-making to the point where opportunities for critical assessment are reduced or eliminated. Military decision-makers require moments of deliberation to evaluate intelligence, scrutinize algorithmic recommendations and apply IHL principles. Instead of engineering these pauses out in the name of efficiency or tempo, AI-DSS should be built to maintain, and where necessary create, space for reflection and challenge. Addressing this may mean embedding human decision checkpoints, tailoring timelines to operational risk and slowing down high-stakes proportionality assessments, somewhat countering the speed incentives of AI-DSS integration. In essence, the use of AI-DSS in targeting procedures may offer operational advantages, but militaries and other stakeholders must also be aware of the new risks that these systems introduce by shifting the choice architecture in the MDMP. Training must address not only system operation but also the risks of automation bias, cognitive offloading, over-reliance and deskilling.

In practice, AI-DSS shape, not just support, targeting decisions. Global and national fora, such as the UN General Assembly or the Global Commission on Responsible AI in the Military Domain (GC REAIM), must expand the debate beyond AWS to address AI-DSS’s systemic influence on precautions, proportionality and wider legal obligations. In short, responding to the growing influence of AI-DSS in targeting demands more than technical adjustments – it requires reaffirming the human(e) centre of lawful decision-making. Proportionality assessments are contextually based legal and moral judgements that must remain rooted in qualitative human reasoning, experience and accountability. Ultimately, preserving the integrity of IHL in the age of AI will depend not on how advanced the systems become, but on how firmly stakeholders insist that targeting decisions remain rooted in context-appropriate human judgement: legally grounded, morally aware, and irreducible to algorithmic logic.

Footnotes

*

The author expresses her sincere gratitude to the Utrecht University Public International Law Honours Programme Clinic students who provided background research assistance to this piece, which is an expanded version of a policy note that the author wrote for her work as an Expert Member of the Global Commission on Responsible AI in the Military Domain entitled Proportionality under Pressure: AI-Based Decision Support Systems, the Reasonable Commander Standard and Human(e) Judgement in Targeting. For the drafting of this article, the author acknowledges the use of ChatGPT as an editorial tool focused on making existing textual passages more succinct or clearer, much in the way that a tool like Microsoft Word’s grammar check can help with the same functions; the author reviewed and edited any suggested content provided by ChatGPT before integration. Finally, the author also wishes to thank Zena Assaad, Cedric Ryngaert, Elke Schwarz and Taylor Woodcock for their incisive and insightful feedback in earlier draft stages. The author remains fully responsible for the final content of this article.

The advice, opinions and statements contained in this article are those of the author/s and do not necessarily reflect the views of the ICRC. The ICRC does not necessarily represent or endorse the accuracy or reliability of any advice, opinion, statement or other information provided in this article.

References

1 See Marta Bo, Ingvild Bode, Jessica Dorsey and Elke Schwarz’s section on views of members of the scientific community and civil society pursuant to Resolution 79/239, in Artificial Intelligence in the Military Domain and Its Implications for International Peace and Security: Report of the Secretary-General, UN Doc. A/80/78, 5 June 2025, pp. 139–143 available at: https://docs.un.org/en/A/80/78 (all internet references were accessed in December 2025).

2 North Atlantic Treaty Organization (NATO), Allied Joint Doctrine for Joint Targeting, Allied Joint Publication 3.9, Edition B, version 1, November 2021, available at: https://assets.publishing.service.gov.uk/media/618e7da28fa8f5037ffaa03f/AJP-3.9_EDB_V1_E.pdf; US Department of Defense (DoD), Joint Chiefs of Staff, Joint Targeting, Joint Publication 3-60, 28 September 2018, pp. ix–xi, available at: www.esd.whs.mil/Portals/54/Documents/FOID/Reading%20Room/Joint_Staff/21-F-0520_JP_3-60_9-28-2018.pdf. As outlined in Jessica Dorsey and Marta Bo, “AI-Enabled Decision-Support Systems in the Joint Targeting Cycle: Legal Challenges, Risks and the Human(e) Dimension”, International Law Studies, Vol. 107, 2026, p. 137, “different States employ different doctrines for targeting. What is important … is not necessarily the specific labels for various steps followed by any given State, but rather how and when various legal principles and rules (specifically, … precautions in attack) are operationalized into the targeting process.”

3 J. Dorsey and M. Bo, above note 2.

4 Herwin Meerveld, Roy Lindelauf, Eric Postma and Marie Postma, “The Irresponsibility of Not Using AI in the Military”, Ethics and Information Technology, Vol. 25, No. 14, 2023.

5 Several authors have invoked variants of the term “quantification logics” when analyzing ethical and legal dimensions of autonomous and AI-enabled systems. These authors include Elke Schwarz, “The Hacker Way: Moral Decision Logics with Lethal Autonomous Weapons Systems”, in Henning Glaser and Pindar Wong (eds), Governing the Future: Digitalization, Artificial Intelligence, Dataism, CRC Press, Boca Raton, FL, 2025; Amichai Cohen and David Zlotogorski, Proportionality in International Humanitarian Law: Consequences, Precautions and Procedures, Oxford University Press, Oxford, 2021; Taylor K. Woodcock, “Human/Machine(-Learning) Interactions, Human Agency and the International Humanitarian Law Proportionality Standard”, Global Society, Vol. 38, No. 1, 2024; Markus Gunneflo and Gregor Noll, “Technologies of Decision Support and Proportionality in International Humanitarian Law”, Nordic Journal of International Law, Vol. 92, No. 1, 2023.

6 Taylor K. Woodcock, “Human-Machine (Learning) Interactions: War and Law in the AI Era”, PhD thesis, University of Amsterdam, 2026 (forthcoming) (on file with author). Woodcock builds this notion out from Richard H. Thaler and Cass R. Sunstein, Nudge: Improving Decisions about Health, Wealth, and Happiness, Yale University Press, New Haven, CT, 2008, p. 3.

7 In this article, adopting Woodcock’s approach, moral decisions are situated as an inherent part of IHL assessments. Analysis regarding legal interpretation underscores that while IHL is not reducible to morality or ethics, it nonetheless demands judgements about the value of human life, which is a fundamentally moral concern. T. K. Woodcock, above note 5.

8 See e.g. Group of Governmental Experts on Lethal Autonomous Weapons Systems, “Rolling Text”, 12 May 2025, available at: https://tinyurl.com/3dfdrytd.

9 Neil Renic and Elke Schwarz, “Crimes of Dispassion: Autonomous Weapons and the Moral Challenge of Systematic Killing”, Ethics and International Affairs, Vol. 37, No. 3, 2023, p. 321; J. Dorsey and M. Bo, above note 2; T. K. Woodcock, above note 6.

10 Robin Vanderborght and Anna Nadibaidze, “Military Demonstrations as Digital Spectacles: How Virtual Presentations of AI Decision-Support Systems Shape Perceptions of War and Security”, European Journal of International Security, 2025 (forthcoming), p. 3, available at: https://tinyurl.com/2swtzbbh.

11 Lucy Suchman, “Patterns of Life: AI and ‘Actionable Data’ in Warfare”, Los Angeles Review of Books, 11 January 2020, available at: https://lareviewofbooks.org/blog/provocations/patterns-life-ai-translates-human-activities-actionable-data-war/.

12 Protocol Additional (I) to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts, 1125 UNTS 3, 8 June 1977 (entered into force 7 December 1978) (AP I), Art. 51(5)(b).

13 T. K. Woodcock, above note 6; N. Renic and E. Schwarz, above note 9; Roy Lindelauf and Herwin Meerveld, “Building Trust in Military AI Starts with Opening the Black Box”, War on the Rocks, 12 August 2025, available at: https://warontherocks.com/2025/08/building-trust-in-military-ai-starts-with-opening-the-black-box/.

14 Emanuela-Chiara Gillard, Proportionality in the Conduct of Hostilities: The Incidental Harm Side of the Assessment, Chatham House Research Paper, London, December 2018, p. 20.

15 AP I, Art. 57; see also Geoffrey S. Corn and Tyler R. Smotherman, “Improving Compliance with International Humanitarian Law in an Era of Maneuver War and Mission Command”, SMU Law Review, Vol. 78, No. 1, 2025, p. 33.

16 AP I, Arts 51(5)(b) and 57(2)(a)(iii).

17 NATO, Protection of Civilians Allied Command Operations Handbook, 2021, available at: https://shape.nato.int/resources/3/website/ACO-Protection-of-Civilians-Handbook.pdf; Jessica Dorsey and Luke Moffett, “The Warification of International Humanitarian Law and the Artifice of Artificial Intelligence in Decision-Support Systems: Restoring Balance through the Legitimacy of Military Operations”, Yearbook of International Humanitarian Law, Vol. 27, 2024 (forthcoming 2026); G. S. Corn and T. R. Smotherman, above note 15; Michael McNerney and Matthew Isler, “Operational Effectiveness and Civilian Harm Mitigation by Design”, Military Review, January 2025, available at: www.armyupress.army.mil/Portals/7/military-review/Archives/English/Online-Exclusive/2025/Operational-Effectiveness/civilian-harm-mitigation-UA.pdf.

18 Larry Lewis, Sabrina Verleysen, Samuel Plapinger and Marla Keenen (with contributions from Anna Williams), Preparing for Civilian Harm Mitigation and Response in Large-Scale Combat Operations, Center for Naval Analyses, Arlington, VA, August 2024, p. 8.

19 Jeroen van den Boogaard, Proportionality in International Humanitarian Law: Refocusing the Balance in Practice, Cambridge University Press, Cambridge, 2023, pp. 342 ff.

20 Luigi Daniele, “Incidentality of the Civilian Harm in International Humanitarian Law and its Contra Legem Antonyms in Recent Discourses on the Laws of War”, Journal of Conflict and Security Law, Vol. 29, No. 1, 2024; Aurel Sari, “Indiscriminate Attacks and the Proportionality Rule: What Is Incidental Civilian Harm?”, Journal of Conflict and Security Law, Vol. 30, No. 2, 2025.

21 E.-C. Gillard, above note 14, pp. 27 ff.

22 PAX, On Civilian Harm, 2021, available at: https://protectionofcivilians.org/on-civilian-harm/; Fionnuala Ní Aolaín, “Cumulative Harm in Gaza: A Gendered View”, Just Security, 25 June 2024, available at: www.justsecurity.org/115407/cumulative-civilian-harm-gaza-gendered-view/; Essex University Initiative, “Essex to Lead Global Project Aimed at Reducing Civilian Harm in War”, 15 June 2023, available at: www.essex.ac.uk/news/2023/06/15/tackling-civilian-harm-during-war; Queens University Belfast Initiative, “Civilian Harm”, available at: https://reparations.qub.ac.uk/civilian-harm/; Megan Karlshøj-Pedersen and Jessica Dorsey, “Policy Recommendations to Meaningfully Mitigate Civilian Harm in Military Operations: A View from the Netherlands (Part I)”, Opinio Juris, 24 May 2024, available at: https://opiniojuris.org/2024/05/24/policy-recommendations-to-meaningfully-mitigate-civilian-harm-in-military-operations-a-view-from-the-netherlands-part-i/; Megan Karlshøj-Pedersen and Jessica Dorsey, “Policy Recommendations to Meaningfully Mitigate Civilian Harm in Military Operations: A View from the Netherlands (Part II)”, Opinio Juris, 24 May 2024, available at: https://opiniojuris.org/2024/05/24/policy-recommendations-to-meaningfully-mitigate-civilian-harm-in-military-operations-a-view-from-the-netherlands-part-ii/; Azmat Khan, “Hidden Pentagon Records Reveal Patterns of Failure in Deadly Airstrikes”, New York Times, 6 January 2022, available at: www.nytimes.com/interactive/2021/12/18/us/airstrikes-pentagon-records-civilian-deaths.html.

23 J. Dorsey and L. Moffett, above note 17.

24 DoD, Civilian Harm Mitigation and Response Action Plan, August 2022, available at: https://media.defense.gov/2022/Aug/25/2003064740/-1/-1/1/CIVILIAN-HARM-MITIGATION-AND-RESPONSE-ACTION-PLAN.PDF; L. Lewis et al., above note 18, p. 8; NATO, above note 17, p. 28.

25 NATO, above note 17, p. 28.

26 M. McNerney and M. Isler, above note 17; Matt Isler, “Operational Effectiveness and Civilian Harm: Mitigation Advances US Interests”, Medium, 4 March 2025, available at: https://matt-isler.medium.com/operational-effectiveness-and-civilian-harm-mitigation-advances-us-interests-b70b6bc98317; Geoffrey S. Corn, “Proportionality: Can’t Live With It, but Can’t Live Without It”, International Law Studies, Vol. 106, 2025, p. 520.

27 Dan Stigall, “The EWIPA Declaration and U.S. Efforts to Minimize Civilian Harm”, Articles of War, 22 May 2024, available at: https://lieber.westpoint.edu/ewipa-declaration-us-efforts-minimize-civilian-harm/; G. S. Corn and T. R. Smotherman, above note 15; G. S. Corn, above note 26; L. Lewis et al., above note 18, p. 1.

28 As articulated in Article 57 of AP I. G. S. Corn, above note 26, pp. 528–532; E.-C. Gillard, above note 14; Alon Sapir, “‘Lies, Damned Lies, and Statistics’: The Legality of Statistical Proportionality”, Just Security, 31 July 2025, available at: www.justsecurity.org/118219/legality-statistical-proportionality/. See also Ceasefire Centre for Civilian Rights, Civilian Harm Mitigation in Large-Scale Combat Operations: Lessons for UK Defence, London, November 2025, pp. 3–4.

29 DoD, above note 24, p. 12.

30 See Théo Boutruche, Expert Opinion on the Meaning and Scope of Feasible Precautions under International Humanitarian Law and Related Assessment of the Conduct of the Parties to the Gaza Conflict in the Context of the Operation “Protective Edge”, 2015, available at: https://apidiakoniase.cdn.triggerfish.cloud/uploads/sites/2/2021/07/expert-opinion-precautions-ihl-operation-protective-edge.pdf. Boutruche states that “[i]n particular notions of feasibility and of effectiveness for advance warnings carry a duty to learn or to take into account past experiences” (p. 50). Additionally, Boutruche argues that reliable and relevant information is essential for deciding on and implementing precautions in attack, and that gathering that information is itself a precaution. In repeated hostilities or occupations, attackers must actively collect and use all available information to assess feasibility and verify targets (p. 51). See also E.-C. Gillard, above note 14, p. 49, where Gillard notes that “[s]ome states’ armed forces conduct battle damage assessments to determine the military impact of operations. As part of this process, some armed forces also consider incidental harm, though at present information on the adverse impact of attacks on civilians is not collected and analysed systematically.”

31 J. van den Boogaard, above note 19, p. 122: “One could say that in this regard the proportionality rule is a secondary rule that only enters the scene when it is impossible to take precautionary measures that are expected to avoid collateral damage completely.” See also Geoffrey S. Corn, “War, Law, and the Oft Overlooked Value of Process as a Precautionary Measure”, Pepperdine Law Journal, Vol. 42, No. 3, 2015, pp. 435–436, in which Corn argues that commanders must first take all feasible precautions to reduce civilian risk under Article 57 of AP I – such as adjusting timing, giving warnings or choosing alternative means – before conducting a proportionality analysis. This both protects civilians and aligns with operational logic, as, he argues, maximizing effects on intended and lawful targets avoids wasted effort. See also T. Boutruche, above note 30, p. 10.

32 Noam Neuman, “Considering the Principle of Precaution”, in Fausto Pocar (ed.), The Additional Protocols 40 Years Later: New Conflicts, New Actors, New Perspectives: Proceedings of the 40th Round Table on Current Issues of International Humanitarian Law, 2017, p. 75, available at: https://iihl.org/wp-content/uploads/2022/07/The-Additional-Protocols-40-Years-Later-New-Conflicts-New-Actors-New-Perspectives_2.pdf.

33 US Center for Army Lessons Learned, The Military Decision-Making Process: Organizing and Conducting Planning, November 2023, available at: https://api.army.mil/e2/c/downloads/2023/11/17/f7177a3c/23-07-594-military-decision-making-process-nov-23-public.pdf.

34 J. Dorsey and L. Moffett, above note 17; ICRC, International Humanitarian Law and the Challenges of Contemporary Armed Conflicts: Building a Culture of Compliance for IHL to Protect Humanity in Today’s and Future Conflicts, 2024, pp. 7–8, available at: www.icrc.org/en/report/2024-icrc-report-ihl-challenges; Lawrence Hill-Cawthorne, International Law in Extremis, Bristol Working Paper Series No. 001/2025, July 2025; G. S. Corn and T. R. Smotherman, above note 15.

35 See J. Dorsey and M. Bo, above note 2; M. Gunneflo and G. Noll, above note 5.

36 ICRC, above note 34, p. 8.

37 J. van den Boogaard, above note 19, pp. 123 ff.; A. Cohen and D. Zlotogorski, above note 5, p. 227.

38 Michael N. Schmitt, “The Principle of Discrimination in 21st Century Warfare”, Yale Human Rights and Development Law Journal, Vol. 2, No. 1, 1999, p. 151.

39 E.-C. Gillard, above note 14; International Criminal Tribunal for the former Yugoslavia, Prosecutor v. Pavle Strugar, Case No. IT-01-42, Prosecutor’s Pre-Trial Brief Pursuant to Rule 65 ter (E) (i), 27 August 2003, para. 152. See also Harvard Program on Humanitarian Policy and Conflict Research, HPCR Manual on International Law Applicable to Air and Missile Warfare, Cambridge University Press, Cambridge, 2013, Rule 14, p. 98, para. 7.

40 The author acknowledges that the element of continuous monitoring of proportionality is not universally accepted. For discussion around this, see e.g. James Kilcup, “Proportionality in Customary International Law: An Argument against Aspirational Laws of War”, Chicago Journal of International Law, Vol. 17, No. 1, 2016. On the other hand, in relation to precaution, the author points out the relevance of the obligation under Article 57(2)(b) of AP I to cancel or suspend attacks if it becomes apparent that they would be disproportionate.

41 Jason Wright, “‘Excessive’ Ambiguity: Analysing and Refining the Proportionality Standard”, International Review of the Red Cross, Vol. 94, No. 886, 2012, p. 853.

42 Yves Sandoz, Christophe Swinarski and Bruno Zimmermann (eds), Commentary on the Additional Protocols, ICRC, Geneva, 1987, pp. 679, 682.

43 A. Cohen and D. Zlotogorski, above note 5, p. 227.

44 J. van den Boogaard, above note 19, pp. 238–246; Ian Henderson and Kate Reece, “Proportionality under International Humanitarian Law: The ‘Reasonable Military Commander’ Standard and Reverberating Effects”, Vanderbilt Journal of Transnational Law, Vol. 51, No. 3, 2016, pp. 841–842.

45 J. van den Boogaard, above note 19, p. 241.

46 Ibid., p. 239.

47 I. Henderson and K. Reece, above note 44, pp. 839–846.

48 Ibid., p. 846; International Criminal Tribunal for the former Yugoslavia, Prosecutor v. Stanislav Galić, Case No. IT-98-29-T, Judgment (Trial Chamber), 5 December 2003, para. 58; J. van den Boogaard, above note 19, p. 234 fn. 18.

49 J. van den Boogaard, above note 19, p. 240.

50 Ibid., p. 241; US Center for Army Lessons Learned, above note 33.

51 For example, it has been reported that the United States used algorithmically derived metadata analytics to target individuals in Pakistan and Afghanistan. See e.g. Vasja Badalič, “The Metadata-Driven Killing Apparatus: Big Data Analytics, the Target Selection Process, and the Threat to International Humanitarian Law”, Critical Military Studies, Vol. 9, No. 4, 2023, p. 625. Badalič also points out that “[t]he central hypothesis is that metadata analytics is not simply a neutral, objective tool for finding military targets, but rather a new ethico-political arrangement … that produces new ways for defining military targets that are inconsistent with the principle of distinction, the key principle of international humanitarian law” (p. 621). While this argument is primarily directed at the principle of distinction, the ways in which targets are defined can also indirectly shape proportionality assessments, insofar as the characterization of a person or object as a lawful target informs the subsequent weighing of anticipated harm against expected military advantage.

52 N. Renic and E. Schwarz, above note 9; Elke Schwarz, “Technology and Moral Vacuums in Just War Theorising”, Journal of International Political Theory, Vol. 14, No. 3, 2018.

53 T. Boutruche, above note 30, p. 20.

54 For an excellent analysis of the discussion regarding the merits of humans versus machines, see Klaudia Klonowska and Taylor K. Woodcock, “Rhetoric and Regulation: The (Limits of) Human/AI Comparison in Legal Debates on Military AI”, in Bérénice Boutin, Taylor K. Woodcock and Sadjad Soltanzadeh (eds), Decision at the Edge: Interdisciplinary Dilemmas in Military Artificial Intelligence, T. M. C. Asser Press, The Hague, 2025 (forthcoming).

55 Yilun Zhu et al., “Can Large Language Models Understand Context?”, in Association for Computational Linguistics, 18th Conference of the European Chapter of the Association for Computational Linguistics: Findings of EACL 2024, 2024. The authors’ experiments show that language models often struggle with subtle and nuanced aspects of language when context is involved. Interestingly, these results don’t always line up with how the same models perform on other tests that measure different kinds of language abilities, suggesting that current evaluations may overlook important weaknesses. The authors also compared full-sized models with smaller, compressed versions (which use fewer bits to save space and run faster), finding that compressing models to three-bit versions weakens their ability to understand context, though the impact varies depending on the type of task.

56 E.-C. Gillard, above note 14, p. 21; ICRC, The Principle of Proportionality in the Rules Governing the Conduct of Hostilities Under International Humanitarian Law, International Expert Meeting, Quebec, 2016, p. 52: “Nevertheless, determinations on excessiveness are carried out daily by militaries, and attacks are cancelled based on them. The difficulties in reaching such decisions should therefore not be seen as depriving the principle of proportionality of its value for the protection of civilians.”

57 J. van den Boogaard, above note 19, p. 263; T. K. Woodcock, above note 5.

58 Michael N. Schmitt, Jeffrey Biller, Sean Fahey, David Goddard and Chad Highfill, “Joint and Combined Targeting: Structure and Process”, in Jens David Ohlin (ed.), Weighing Lives in War, Oxford University Press, Oxford, 2017.

59 John J. Tirpak, “Find, Fix, Track, Target, Engage, Assess”, Air and Space Forces Magazine, 1 July 2000, available at: www.airandspaceforces.com/article/0700find/; Mike Benitez, “It’s About Time: The Pressing Need to Evolve the Kill Chain”, War on the Rocks, 17 May 2017, available at: https://warontherocks.com/2017/05/its-about-time-the-pressing-need-to-evolve-the-kill-chain/.

60 Brad Boyd, “There Are Some Things We Shouldn’t Do… Taking a Look at Gospel and What It Means to Let AI ‘Target’”, Killer Robot Cocktail Party, 19 February 2024, available at: https://killerrobotcocktailparty.substack.com/p/there-are-some-things-we-shouldnt. See also Yuval Abraham, “‘Lavender’: The AI Machine Directing Israel’s Bombing Spree in Gaza”, +972 Magazine, 3 April 2024, available at: www.972mag.com/lavender-ai-israeli-army-gaza/; Elizabeth Dwoskin, “Israel Built an ‘AI Factory’ for War. It Unleashed It in Gaza”, Washington Post, 29 December 2024, available at: www.washingtonpost.com/technology/2024/12/29/ai-israel-war-gaza-idf/; Patrick Kingsley et al., “Israel Loosened Its Rules to Bomb Hamas Fighters, Killing Many More Civilians”, New York Times, 26 December 2024, available at: www.nytimes.com/2024/12/26/world/middleeast/israel-hamas-gaza-bombing.html; J. Dorsey and M. Bo, above note 2. In response to reported news of the use of AI in targeting, the Israel Defense Forces issued a statement seeking to clarify their use of AI in operational processes. See Israel Defense Forces, “The IDF’s Use of Data Technologies in Intelligence Processing”, 18 June 2024, available at: www.idf.il/210062.

61 M. Gunneflo and G. Noll, above note 5, p. 117.

62 For an in-depth treatment of shifts in certain decision-making architectures, see T. K. Woodcock, above note 6.

63 Shannon French and Lisa Lindsay, “Artificial Intelligence in Military Decision-Making: Avoiding Ethical and Strategic Perils with an Option-Generator Model”, in Bernard Koch and Richard Schoonhoven (eds), Emerging Military Technologies: Ethical and Legal Perspectives, Brill Nijhoff, Boston, MA, 2022; Inte Gloerich, “Reimagining the Truth: Machine Blockchain Imaginaries between the Rational and the More-than-Rational”, PhD thesis, Utrecht University, 3 February 2025, pp. 101–102, available at: https://research-portal.uu.nl/en/publications/reimagining-the-truth-machine-blockchain-imaginaries-between-the-/.

64 Suzanne Glickman, AI and Tech Industrial Policy: From Post-Cold War Post-Industrialism to Post-Neoliberal Re-Industrialization, AI Now Institute, New York, 12 March 2024.

65 E. Schwarz, above note 52, p. 289. Schwarz outlines that since the Vietnam War, a “sci-tech” lens, rooted in modelling, systems analysis and cybernetics, has shaped US approaches to war, privileging quantifiable data over experiential or contextual knowledge and reinforcing a scientific-technological framing that also underpins much revisionist just-war thinking.

66 I. Gloerich, above note 63.

67 Stathis Kalyvas and Matthew Kocher, “The Dynamics of Violence in Vietnam: An Analysis of the Hamlet Evaluation System (HES)”, Journal of Peace Research, Vol. 46, No. 3, 2009, p. 335.

68 Andrew Simpson, “Into the Unknown: The Need to Reframe the Risk Analysis”, Journal of Cybersecurity, Vol. 10, No. 1, 2024, p. 2.

69 T. K. Woodcock, above note 6.

70 Stephen J. Rockel and Rick Halpern (eds), Inventing Collateral Damage: Civilian Casualties, War, and Empire, Between the Lines, Toronto, 2009.

71 National Museum of the US Air Force, “USAF Target Designators and Precision Guided Munitions”, available at: www.nationalmuseum.af.mil/Visit/Museum-Exhibits/Online-Exhibits/Target-Designators/; Thomas Schelling, “Dispersal, Deterrence, and Damage”, Operations Research, Vol. 9, No. 3, 1961, p. 363; Jessica Whyte, “Calculated Indifference: The Politics of Collateral Damage”, Journal of Genocide Research, Vol. 21, No. 2, 2019.

72 Arthur Cebrowski and John Garstka, “Network-Centric Warfare: Its Origin and Future”, US Naval Institute Proceedings, Vol. 124, No. 1, 1998. But see Lucy Suchman, “Imaginaries of Omniscience: Automating Intelligence in the US Department of Defense”, Social Studies of Science, Vol. 53, No. 5, 2022, pp. 768–769.

73 David Alberts, John Garstka and Frederick Stein, Network Centric Warfare: Developing and Leveraging Information Superiority, 2nd ed., DoD C4ISR Cooperative Research Program, 1999, p. 1, available at: https://apps.dtic.mil/sti/tr/pdf/ADA406255.pdf.

74 John Luddy, The Challenge and Promise of Network-Centric Warfare, Lexington Institute Research Study, February 2005, p. 3, available at: www.lexingtoninstitute.org/wp-content/uploads/challenge-promise-network-centric-warfare.pdf.

75 Dustin A. Lewis, Gabriella Blum and Naz K. Modirzadeh, War-Algorithm Accountability, Harvard Law School Program on International Law and Armed Conflict Briefing, 31 August 2016, available at: https://pilac.law.harvard.edu/aws; Ryan J. Vogel, “Droning On: Controversy Surrounding Drone Warfare Is Not Really about Drones”, Brown Journal of World Affairs, Vol. 19, No. 2, 2013, pp. 111, 118; Benjamin Johnson, “Coded Conflict: Algorithmic and Drone Warfare in US Security Strategy”, Journal of Military and Strategic Studies, Vol. 18, No. 4, 2018, pp. 36–37; Jessica Dorsey, “Doubling Down on Distance: Rethinking Civilian Protections in the Era of Military Drones and Algorithmic Warfare”, in Jai Galliot (ed.), Handbook on Remote Warfare, Oxford University Press, Oxford, 2026 (forthcoming).

76 Michael Walzer, “Just and Unjust Targeted Killing and Drone Warfare”, Dædalus, Vol. 145, No. 4, 2016, p. 12; Nils Melzer, “Targeted Killings: Contemporary Challenges, Risks and Opportunities”, Journal of Conflict and Security Law, Vol. 18, No. 2, 2013, p. 259; Ruxandra Vlad and John Hardy, “Signature Strikes and the Ethics of Targeted Killing”, International Journal of Intelligence and CounterIntelligence, Vol. 38, No. 4, 2025, p. 1.

77 David Cole, “‘We Kill People Based on Metadata’”, New York Review of Books, 10 May 2014, available at: www.nybooks.com/online/2014/05/10/we-kill-people-based-metadata/; L. Suchman, above note 72.

78 E. Schwarz, above note 52, pp. 288–289.

79 US Chairman of the Joint Chiefs of Staff, No-Strike and the Collateral Damage Estimation Methodology, Instruction 3160.01B, 12 October 2012, available at: https://info.publicintelligence.net/CJCS-CollateralDamage.pdf.

80 J. Whyte, above note 71; M. Gunneflo and G. Noll, above note 5, p. 114; John R. Emery, “Probabilities towards Death: Bugsplat, Algorithmic Assassinations, and Ethical Due Care”, Critical Military Studies, Vol. 8, No. 2, 2020.

81 Gregory S. McNeal, “Targeted Killing and Accountability”, Georgetown Law Journal, Vol. 102, 2014, p. 751.

82 DoD, Office of the Director, Operational Test and Evaluation, Joint Technical Coordinating Group for Munitions Effectiveness (JTCG/ME) Program, 2024, p. 419, available at: www.dote.osd.mil/Portals/97/pub/reports/FY2024/dotemanaged/2024jtcg-me.pdf.

83 J. Dorsey and M. Bo, above note 2, p. 140. See also US Chairman of the Joint Chiefs of Staff, above note 79, p. D-6: “The CDM does not account for secondary explosions. Collateral damage due to secondary explosions (i.e., weapons cache or fuel tanks for military equipment) cannot be consistently measured or predicted. Commanders should remain cognizant of any additional risk due to secondary explosions.”

84 N. Renic and E. Schwarz, above note 9; Brian Smith, “Civilian Casualty Mitigation and the Rationalization of Killing”, Journal of Military Ethics, Vol. 20, No. 1, 2021; L. Suchman, above note 72; Bruce Cronin, Bugsplat: The Politics of Collateral Damage in Western Armed Conflicts, Oxford University Press, New York, 2018, p. 11; J. R. Emery, above note 80, p. 10.

85 E. Schwarz, above note 52, p. 289; Neta Crawford, “Bugsplat: US Standing Rules of Engagement, International Humanitarian Law, Military Necessity, and Noncombatant Immunity”, in Anthony Lang, Cian O’Driscoll and John Williams (eds), Just War: Authority, Tradition, and Practice, Georgetown University Press, Washington, DC, 2013.

86 Frank Wolfe, “Pentagon Removed Non-Combatant Casualty Cut-Off Value from Doctrine in 2018”, Defense Daily, 11 June 2021, available at: www.defensedaily.com/pentagon-removed-non-combatant-casualty-cut-off-value-doctrine-2018/pentagon/. The United States was the most prominent user of this particular metric as evidenced by stated policy, but it was not alone. Delori outlines how NCVs became what he termed “a central value in contemporary Western war”: see Mathias Delori, “The Politics of Emotions in Contemporary Wars”, in Stephen Roach (ed.), Critical International Relations, Edward Elgar, Cheltenham, 2020. Despite the fact that precise details about NCVs are not in the public domain, their use has been documented or reported on by the United Kingdom, Israel and other Western allies of the United States: see e.g. Lydia Day, Frank Ledwidge, Stuart Casey-Maslen and Mark Goodwin-Hudson, Avoiding Civilian Harm in Partnered Military Operations: The UK’s Responsibility, Ceasefire Centre for Civilian Rights, April 2023, p. 44.

87 J. van den Boogaard, above note 19, p. 216; Gregory S. McNeal, The U.S. Practice of Collateral Damage Estimation and Mitigation, Pepperdine Working Paper No. 2, Pepperdine University School of Law, 2011.

88 G. S. Corn, above note 26.

89 J. Dorsey and L. Moffett, above note 17. On the notions of anchoring and anchoring bias, see the below section on “Potential Biases Introduced by AI-DSS”.

90 Scott Graham, “The Non-Combatant Casualty Cut-Off Value: Assessment of a Novel Targeting Technique in Operation Inherent Resolve”, International Criminal Law Review, Vol. 18, No. 4, 2018, p. 680.

91 Anna Nadibaidze, Ingvild Bode and Qiaochu Zhang, AI in Military Decision Support Systems: A Review of Developments and Debates, Center for War Studies, University of Southern Denmark, Odense, 2024.

92 Anna Rosalie Greipl, “Artificial Intelligence for Better Protection of Civilians during Urban Warfare”, Articles of War, 26 March 2024, available at: https://lieber.westpoint.edu/artificial-intelligence-better-protection-civilians-urban-warfare/.

93 L. Lewis et al., above note 18.

94 Mohammad Yazdi, Esmaeil Zarei, Sidum Adumene and Amin Beheshti, “Navigating the Power of Artificial Intelligence in Risk Management: A Comparative Analysis”, Safety, Vol. 10, No. 2, 2024.

95 Avi Goldfarb and Jon R. Lindsay, “Prediction and Judgement: Why Artificial Intelligence Increases the Importance of Humans in War”, International Security, Vol. 46, No. 3, 2022.

96 H. Meerveld et al., above note 4.

97 E. Dwoskin, above note 60. See also Yossi Sariel, Human Machine Team: How to Create Synergy between Human and Artificial Intelligence that Will Revolutionize Our World, eBookPro Publishing, 2021, p. 44.

98 M. Gunneflo and G. Noll, above note 5, p. 111.

99 T. K. Woodcock, above note 6.

100 Corey Dickstein, “Creating a ‘Kill Web’: Army Brings Other Services, Allies Together to Test New Tech for a Major Fight”, American Legion, 28 March 2024, available at: www.legion.org/information-center/news/newsletters/2024/march/creating-a-kill-web-army-brings-other-services-allies-together-to-test-new-tech-for-a-major-fight.

101 Zena Assaad and Emily Williams, “Technology and Tactics: The Intersection of Safety, AI, and the Resort to Force”, Cambridge Forum on AI: Law and Governance, 2026 (forthcoming).

102 Phillip Pratzner, “The Current Targeting Process”, in Paul Ducheine, Michael N. Schmitt and Frans Osinga (eds), Targeting: The Challenges of Modern Warfare, T. M. C. Asser Press, The Hague, 2016, p. 82; T. K. Woodcock, above note 5.

103 Ingvild Bode, Human-Machine Interaction and Human Agency in the Military Domain, Centre for International Governance Innovation Policy Brief No. 193, 2025.

104 Elke Schwarz, “Autonomous Weapons Systems, Artificial Intelligence, and the Problem of Meaningful Human Control”, Philosophical Journal of Conflict and Violence, Vol. 5, No. 1, 2021; T. K. Woodcock, above note 6.

105 Philip Johnson-Laird, “Mental Models and Human Reasoning”, Proceedings of the National Academy of Sciences, Vol. 107, No. 43, 2010, p. 18243: “reasoning is a simulation of the world fleshed out with our knowledge, not a formal rearrangement of the logical skeletons of sentences”.

106 Z. Assaad and E. Williams, above note 101; T. K. Woodcock, above note 6.

107 J. van den Boogaard, above note 19, pp. 245–246.

108 Z. Assaad and E. Williams, above note 101.

109 Elke Schwarz, “Silicon Valley Goes to War: Artificial Intelligence, Weapons Systems, and Moral Agency”, Philosophy Today, Vol. 65, No. 3, 2021, p. 550.

110 Stefan Buijsman, Sarah Carter and Juan Pablo Bermúdez, “Autonomy by Design: Preserving Human Autonomy in AI Decision-Support”, Philosophy and Technology (forthcoming), available at: https://arxiv.org/pdf/2506.23952.

111 Schwarz discusses this in the lethal autonomous weapons systems (LAWS) context, noting that “in the LAWS human-machine complex, technological features and the underlying logic of the AI system progressively close the spaces and limit the capacities required for human moral agency” – an argument that is equally applicable with respect to AI-DSS. E. Schwarz, above note 109, p. 54; see also I. Bode, above note 103.

112 Y. Abraham, above note 60.

113 Rolf Roth, “The Rational Analytical Approach to Decision-Making: An Adequate Strategy for Military Commanders?”, Connections, Vol. 2, No. 3, 2004; J. van den Boogaard, above note 19, p. 240.

114 John Tramazzo, “Applying Vague Law to Violence: How the Joint Force Can Master Proportionality before a High-Intensity War”, Army Lawyer, Vol. 3, 2024.

115 ICRC, above note 34. See also J. Dorsey and M. Bo, above note 2, p. 157, with respect to this risk regarding the obligation to take precautions in attack.

116 J. Dorsey and M. Bo, above note 2, p. 157; J. Dorsey and L. Moffett, above note 17; T. K. Woodcock, above note 5; Mitt Regan, “Military AI as Sociotechnical Systems”, Articles of War, 10 June 2025, available at: https://lieber.westpoint.edu/military-ai-sociotechnical-systems/; Olya Kudina and Ibo van de Poel, “A Sociotechnical System Perspective on AI”, Minds and Machines, Vol. 34, No. 21, 2024; “Sociotechnical Approach to Design”, Lawful By Design Live Podcast (featuring Zena Assaad and Nehal Bhuta), YouTube, 14 August 2025, available at: www.youtube.com/watch?v=mB6To1S5hd0.

117 Amnesty International, Israel and Occupied Palestinian Territories: Automated Apartheid: How Facial Recognition Fragments, Segregates and Controls Palestinians in the OPT, 2 May 2023; E. Schwarz, above note 52.

118 B. Boyd, above note 60.

119 ICRC, above note 34.

120 Yahli Shereshevsky and Yuval Shany, “Programmed to Obey: The Limits of Law and the Debate over Meaningful Human Control of Autonomous Weapons”, Columbia Human Rights Law Review, Vol. 57, 2025.

121 Z. Assaad and E. Williams, above note 101.

122 Anna Rosalie Greipl, “Artificial Intelligence Systems and Humans in Military Decision-Making: Not Better or Worse but Better Together”, Articles of War, 14 June 2024, available at: https://lieber.westpoint.edu/artificial-intelligence-systems-humans-military-decision-making-better-worse/.

123 L. Lewis et al., above note 18; J. Dorsey and M. Bo, above note 2.

124 Ingvild Bode and Ishmael Bhila, “The Problem of Algorithmic Bias in AI-Based Military Decision Support Systems”, Humanitarian Law and Policy Blog, 3 September 2024, available at: https://blogs.icrc.org/law-and-policy/2024/09/03/the-problem-of-algorithmic-bias-in-ai-based-military-decision-support-systems/; Arthur Holland Michel, Decisions, Decisions, Decisions: Computation and Artificial Intelligence in Military Decision-Making, ICRC, Geneva, 2024, available at: https://shop.icrc.org/decisions-decisions-decisions-computation-and-artificial-intelligence-in-military-decision-making-pdf-en.html.

125 See M. Gunneflo and G. Noll, above note 5, pp. 105–106; see also G. Noll, “War and Algorithm: The End of Law?”, in Max Liljefors, Gregor Noll and Daniel Steuer (eds), War and Algorithm, Rowman and Littlefield, London, 2019, pp. 94–98.

126 T. K. Woodcock, above note 6.

127 Xiao Hu, Liang Luo and Stephen M. Fleming, “A Role for Metamemory in Cognitive Offloading”, Cognition, Vol. 193, 2019.

128 T. K. Woodcock, above note 6; J. Dorsey and M. Bo, above note 2; J. Dorsey and L. Moffett, above note 17.

129 Damian Copeland, A Functional Approach to the Legal Review of Autonomous Weapon Systems, Brill Nijhoff, Leiden, 2024; Article 36, “Lawful by Design Initiative”, available at: www.article36legal.com/lawful-by-design; Klaudia Klonowska, “Article 36: Review of AI Decision-Support Systems and Other Emerging Technologies of Warfare”, Yearbook of International Humanitarian Law, Vol. 23, 2020; Asia-Pacific Institute for Law and Security, Legal Review of Weapons Information Portal, available at: https://apils.org/legal-review/; Global Commission on Responsible AI in the Military Domain (GC REAIM), Responsible by Design: Strategic Guidance Report on the Risks, Opportunities, and Governance of Artificial Intelligence in the Military Domain, September 2025, available at: https://hcss.nl/wp-content/uploads/2025/09/GC-REAIM-Strategic-Guidance-Report-Final-WEB.pdf. (Disclosure: the author is affiliated as an Ambassador to the Lawful by Design Initiative and an Expert Member of GC REAIM.)

130 Nema Milaninia, “Biases in Machine Learning Models and Big Data Analytics: The International Criminal and Humanitarian Law Implications”, International Review of the Red Cross, Vol. 102, No. 913, 2020, p. 215; see also J. Dorsey and M. Bo, above note 2, p. 167.

131 Michael Horowitz and Lauren Kahn, “Bending the Automation Bias Curve: A Study of Human and AI-Based Decision Making in National Security Contexts”, International Studies Quarterly, Vol. 68, No. 2, 2024. See also Alexander Blanchard and Laura Bruun, Bias in Military Artificial Intelligence, Stockholm International Peace Research Institute, December 2024; Anthony Downey, “The Alibi of AI: Algorithmic Models of Automated Killing”, Digital War, Vol. 6, No. 9, 2025, pp. 12, 15.

132 J. Dorsey and M. Bo, above note 2; T. K. Woodcock, above note 6.

133 A. Downey, above note 131.

134 Li Shifei, Wu Chenpeng, Ouyang Yan and Liu Zhen, “Application of the Anchoring Effect in Military Decision Making”, Science Innovation, Vol. 10, No. 3, 2022, pp. 90–95; Dan P. Ly, Paul G. Shekelle and Zirui Song, “Evidence for Anchoring Bias during Physician Decision-Making”, JAMA Internal Medicine, Vol. 183, No. 8, 2023. See also GC REAIM, above note 129, p. 33.

135 S. Buijsman, S. Carter and J. P. Bermúdez, above note 110. The authors highlight one stark warning regarding the knock-on effects of deskilling not being limited to skills that an actor already possesses: “Domain-specific deskilling takes place not only as the gradual loss of already-acquired skills: it also manifests in a reduced ability to acquire new ones” (p. 6). They further point out that sustained AI assistance can, over time, erode a human decision-maker’s domain-specific competence by weakening both cognitive abilities (their capacity to independently acquire and use decision-relevant information) and metacognitive processes (their confidence in making sound decisions without AI support). As these effects compound, humans may become increasingly prone to over-relying on AI and less willing to reject its recommendations even when they should be overridden.

136 Krzysztof Budzyń et al., “Endoscopist Deskilling Risk after Exposure to Artificial Intelligence in Colonoscopy: A Multicentre, Observational Study”, Lancet Gastroenterology and Hepatology, Vol. 10, No. 10, 2025.

137 As Ahmad writes in a comment on the study done by Budzyń et al. (above note 136), “[i]nterestingly, this study included highly experienced endoscopists, suggesting they are not immune to the behaviour-modifying effects associated with AI use. It is possible that trainees or novices exposed to AI devices at inception of training could be even more vulnerable to deskilling.” Omer F. Ahmad, “Endoscopist Deskilling: An Unintended Consequence of AI-Assisted Colonoscopy?”, Lancet Gastroenterology and Hepatology, Vol. 10, No. 10, 2025. This is relevant across expertise in other knowledge domains, including militaries.

138 Prakash Shukla et al., “De-skilling, Cognitive Offloading, and Misplaced Responsibilities: Potential Ironies of AI-Assisted Design”, in Association for Computing Machinery, Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’25), New York, 2025.

139 J. Dorsey and M. Bo, above note 2. On the need for and importance of technical training, see Michael Horowitz and Lauren Kahn, “The AI Literacy Gap Hobbling American Officialdom”, War on the Rocks, 14 January 2020, available at: https://warontherocks.com/2020/01/the-ai-literacy-gap-hobbling-american-officialdom/; G. S. Corn, above note 26; E.-C. Gillard, above note 14.

140 M. Bo, I. Bode, J. Dorsey and E. Schwarz, above note 1; J. Dorsey and M. Bo, above note 2; Alexander Blanchard and Laura Bruun, Autonomous Weapon Systems and AI-enabled Decision Support Systems in Military Targeting: A Comparison and Recommended Policy Responses, Stockholm International Peace Research Institute, June 2025.

141 “Red teaming, originally developed in military strategy, uses dedicated teams to challenge an organization’s defences. Applied to AI, it goes beyond traditional testing by simulating real-world adversarial scenarios to assess how systems perform under pressure.” Fergal Glynn, “What is AI Red Teaming?”, 8 October 2025, available at: https://mindgard.ai/blog/what-is-ai-red-teaming.


Figure 1. Targeting matrix, denoting the position of proportionality assessments at the bottom of the chart only after all feasible precautions have been taken. Source: Geoffrey S. Corn et al., The Law of Armed Conflict: An Operational Approach, 3rd ed., Wolters Kluwer, New York (forthcoming).


Figure 2. Joint Targeting Cycle. Source: US Department of Defense (DoD), Joint Targeting, Joint Publication 3-60, 28 September 2018, Fig. II-2, available at: www.esd.whs.mil/Portals/54/Documents/FOID/Reading%20Room/Joint_Staff/21-F-0520_JP_3-60_9-28-2018.pdf.