Introduction
Within militaries, the rapid adoption of artificial intelligence (AI)-enabled decision support systems (AI-DSS) – tools that use AI techniques to gather and analyze data, provide insight into the operational environment and offer actionable recommendationsFootnote 1 – has begun reshaping decision-making, particularly within targeting operations such as the Joint Targeting Cycle (JTC) and similar procedures.Footnote 2 AI-DSS are intended to assist military decision-makers in evaluating factors relevant to legal compliance, including the duty to take precautions and the requirement to ensure proportionality in attacks,Footnote 3 and proponents argue that these systems can enhance efficiency by accelerating the observe–orient–decide–act loop.Footnote 4 The present author acknowledges that such potential benefits may exist in the way certain systems are designed or developed; however, this article focuses instead on the realities revealed by these tools in use and practice. It examines the risks introduced by their underlying computational quantification logicsFootnote 5 (defined here as the translation of complex legal and ethical judgements into numerical or statistical models), particularly in relation to human cognition, accountability, and adherence to international humanitarian law (IHL).
Tensions exist between these logics and what Woodcock refers to as “choice architectures” – the ways in which choices are presented to users of AI-DSS, shaping how humans make decisions – and these tensions carry the potential to undermine the human judgement required for IHL determinations more broadly, including proportionality assessments.Footnote 6 This article offers a novel contribution to the literature by tracing the longer trajectory through which datafication, or the turn toward reliance on quantification metrics and algorithmic modalities, has come to shape military decision-making architectures, with a particular focus on proportionality assessments within the JTC. By situating AI-DSS within this broader historical and epistemic shift, the article argues that their increasing integration risks reinforcing existing quantification logics, encouraging decision-makers to translate complex elements of proportionality analysis into computational terms. This, in turn, threatens to sideline the judgement, ethical deliberation and legal reasoningFootnote 7 necessary for context-appropriate human judgement and control as well as the preservation of human responsibility and accountability.Footnote 8
Central to this argument is an analysis of how use of AI-DSS can influence a commander’s ability to reasonably or responsibly assess proportionality. If AI-DSS increasingly guide or dictate (parts of) these assessments, the human capacity for contextual judgement and reasoning may diminish through various cognitive biases and shifts, leading to decisions that may be algorithmically justified but legally non-compliant. At the speed and scale introduced by AI-DSS, this could also normalize or even increase civilian harm.Footnote 9
Technological advancements in warfare, particularly those related to datafication, automation and AI, have fuelled the belief that armed conflict can be made more precise, efficient and ethical through quantifiable metrics and algorithmic decision-making.Footnote 10 This belief reveals an embedded quantification fallacy: the mistaken assumption that the moral and legal complexities of the proverbial Clausewitzian “fog of war” can be distilled into measurable inputs and outputs suitable for rapid computation and automated solutions.Footnote 11 Falling victim to this fallacy is particularly consequential in assessing proportionality in IHL, a rule that prohibits attacks expected to cause incidental civilian harm that would be excessive in relation to their anticipated concrete and direct military advantage.Footnote 12
This article posits that the increasing reliance on quantitative tools like AI-DSS within targeting operations risks displacing the contextual qualitative human judgement that is essential to assessing proportionality. These assessments are not reducible to a technical or mathematical formula but rather reflect a normative standard that demands subjective, context-sensitive judgement from a reasonable military commander acting in good faith. By tracing the historical trajectory of this quantification impulse from the Vietnam War to modern-day conflicts, this article demonstrates how the pursuit of precision and speed in warfare through datafied and algorithmic means risks shifting cognitive and normative frameworks, distorting legal and moral reasoning, displacing human judgement, and introducing new cognitive and accountability risks that stakeholders must be aware of and work to mitigate.Footnote 13
The article is structured in four parts. The first part situates proportionality within IHL, emphasizing the need for a comprehensive understanding of civilian harm and what this article terms the primacy of the duty of constant care and precaution, and framing proportionality solely as a final legal safeguard. It also outlines targeting operations illustrated through the lens of the JTC and locates proportionality decision-making within it. The second part traces the evolution of “quantification logics” in targeting, from the Vietnam War to the modern era and the growing integration of AI-DSS within the JTC. The third part examines the cognitive impacts that these logics, systems and approaches may have on proportionality assessments. The fourth part concludes with concrete recommendations to reaffirm the indispensability of human reasoning in proportionality assessments under IHL.
Situating proportionality assessments within IHL
The legal framework of IHL is designed to balance military necessity with humanitarian considerations that reduce human suffering during armed conflicts and protect civilians from the impact and effects of hostilities.Footnote 14 It does so through articulations of rules related to distinction and proportionality, which operate within the broader framework of the overarching duty of constant care “to spare the civilian population, civilians and civilian objects” throughout the duration of hostilities and requirements to take precautions in attack in order to avoid or minimize harm to civilians.Footnote 15 Given growing evidence that AI-DSS can reshape cognitive processes in the JTC, and recognizing that commanders must make highly consequential decisions that balance expected military advantage against potential harm to civilians and civilian objects, this article examines how the IHL rule of proportionality is operationalized within the JTC and how the quantification logics of AI-DSS may influence that operational thinking.
Codified as a prohibition on indiscriminate attacks against the civilian population in Article 51(5)(b) of Additional Protocol I (AP I) and as a set of obligations related to taking precautions in attack in Article 57(2)(a)(iii) of that same instrument, the rule of proportionality requires that “those who plan or decide upon an attack” must take measures to ensure that they do not carry out attacks “which may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated”.Footnote 16
Toward a comprehensive understanding of civilian harm
As outlined above, the protection of civilians and civilian objects from the effects of hostilities lies at the centre of IHL principles and military doctrine related to distinction, precaution and proportionality.Footnote 17 Meeting these obligations throughout the JTC requires military decision-makers to understand not only what constitutes civilian harm but also its main causes.Footnote 18 In terms of legal elements of proportionality assessments, Van den Boogaard discusses the notion of “excessive” in his treatise on the subject.Footnote 19 More recently, Daniele and Sari have zoomed in on a debate about the meaning of “incidental”.Footnote 20 Gillard’s work is one of very few examples in the literature or in military manuals that examine the substance of the incidental side of the assessment.Footnote 21 Her paper places those discussions within a broader, less explored understanding of this aspect of assessment, one that encompasses “loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof”, commonly referred to as civilian harm. This approach has gained operational traction among several militaries in recent years. Drawing on documented cases, civil society organizations and scholars have also urged a broader, more nuanced interpretation of this clause that extends beyond immediate casualties and understands “injury to civilians” to encompass long-term, indirect and systemic effects.Footnote 22 Militaries, too, have begun to recognize the strategic, operational and tactical value of this wider approach, seeing it as integral to the legitimacy of their operations.Footnote 23 These shifts are shaping doctrine on civilian harm mitigation and response (CHMR), a key element of civilian protection in armed conflict that applies in both asymmetric warfare, where civilians often live in or near conflict zones, and large-scale combat operations conducted in proximity to more dispersed civilian populations.Footnote 24 CHMR involves actions to “prevent, deter, pre-empt, and respond to situations where civilians are targets of violence or under threat of violence”.Footnote 25 Beyond being a legal, strategic and ethical imperative, CHMR strengthens operational legitimacy and can improve operational outcomes.Footnote 26 Many military professionals now argue that embedding CHMR into operations enhances rather than hinders mission effectiveness, with some describing it as a “fourth strategic offset” in modern warfare.Footnote 27
This article links CHMR approaches with the legal obligations of proportionality assessments in the JTC, offering an integrated perspective on both. Central to the CHMR shift, and adopted here, is a broader, more nuanced view of civilian harm and what this means in terms of legal understandings. Fully understanding civilian harm, both its forms and causes, is essential to complying with the duty of constant careFootnote 28 and applying the rules of precautions in attack and proportionality in the JTC.
Targeting processes have historically focused on analyzing effects on adversaries, with far fewer resources dedicated to assessing effects on civilians, civilian objects and the civilian environment.Footnote 29 While militaries are beginning to acknowledge the policy value of better understanding the civilian environment, this article argues that targeting law demands this as well – for example, aligning ex ante civilian harm estimates (created from collateral damage estimation methods or a combination of intelligence sources) with ex post assessments, including battle damage assessments and after-action reports. This iterative process allows lessons from previous strikes to inform future decision-making. Harmonizing the metrics used in pre- and post-strike analysis so that the same elements of civilian harm are measured before and after an attack would significantly strengthen compliance with IHL. Indeed, the precautionary principle arguably requires such an approach.Footnote 30 This perspective is set against the backdrop of the increasing use of AI-DSS, where the extension of quantification logics increasingly shapes how civilian harm is understood, how it may be programmed into certain systems and, therefore, how proportionality is evaluated and operationalized in targeting decisions.
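To make this harmonization concrete, the following minimal sketch (in Python, using invented, simplified harm categories and field names; actual collateral damage estimation and battle damage assessment schemas are classified and far richer) illustrates the underlying idea: measuring the same elements of civilian harm ex ante and ex post so that systematic gaps can feed back into future precautions and estimation methods.

```python
from dataclasses import dataclass

# Hypothetical, simplified harm categories used purely for illustration;
# real pre- and post-strike assessment schemas are classified and far richer.
HARM_CATEGORIES = [
    "civilian_deaths",
    "civilian_injuries",
    "homes_damaged",
    "critical_infrastructure_damaged",  # e.g. water, power, medical facilities
]

@dataclass
class HarmRecord:
    strike_id: str
    values: dict[str, int]  # harm category -> estimated or observed count

def compare_estimates(ex_ante: HarmRecord, ex_post: HarmRecord) -> dict[str, int]:
    """Per-category gap between anticipated and observed harm for one strike.
    Positive values mean harm was under-estimated; consistent gaps across many
    strikes would signal that estimation methods or precautions need adjusting."""
    return {
        cat: ex_post.values.get(cat, 0) - ex_ante.values.get(cat, 0)
        for cat in HARM_CATEGORIES
    }

# Illustrative usage with invented figures.
pre = HarmRecord("strike-001", {"civilian_deaths": 0, "civilian_injuries": 2})
post = HarmRecord("strike-001", {"civilian_deaths": 1, "civilian_injuries": 4, "homes_damaged": 3})
print(compare_estimates(pre, post))
```

The arithmetic is trivial by design; the point is the shared schema, since only when the same elements of harm are recorded before and after an attack can phase 6 assessments meaningfully inform subsequent precautions and proportionality judgements.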
The primacy of the duty of constant care and taking precautions in attack
Building on this discussion of civilian harm, the next question is how to avoid or in any case minimize it in armed conflict. This article adopts and extends a view, shared by certain scholars and practitioners, of assessing proportionality as a final legal safeguard, one that becomes relevant for consideration only after the duty of constant care has been complied with across the conduct of hostilities, all feasible precautions in attack have been taken, and some incidental harm remains unavoidable.Footnote 31 A complementary operational perspective confirms that only after precautions have been taken can it become clear what kind of civilian harm may occur, which can then be fed into any subsequent proportionality assessment.Footnote 32 To be clear, precautions must be continually reassessed as circumstances evolve (also within proportionality assessments), so proportionality as one of the core co-equal principles alongside distinction and precaution is never a singular, isolated step. Rather, in terms of the functionality of the proportionality rule outlined in Article 51(5)(b) of AP I, this article advances the argument that a proportionality assessment becomes relevant not as a first consideration in targeting but rather only once it is evident that, despite all feasible precautions having been taken, some incidental civilian harm is still expected to occur. Corn offers a useful visual illustrating how these concepts interact in targeting decisions, as shown in Figure 1.

Figure 1. Targeting matrix, denoting the position of proportionality assessments at the bottom of the chart only after all feasible precautions have been taken. Source: Geoffrey S. Corn et al., The Law of Armed Conflict: An Operational Approach, 3rd ed., Wolters Kluwer, New York, 2006 (forthcoming).
As Figure 1 indicates, targeting is iterative and non-linear. Once targeting procedures begin, the law requires precautions to be taken first and often. However, certain interpretations of the law, shaped by the quantification logics embedded into military decision-making processes (MDMPs)Footnote 33 over the past twenty-five years, have had the (perhaps unintentional) effect of elevating proportionality into the central evaluative standard, often at the expense of the separate and prior obligation to take precautions in attack as well as the overarching duty of constant care to spare civilians from the effects of conflict.Footnote 34 As this author and Marta Bo have argued elsewhere, this risks contributing to a normative shift in interpreting legal obligations and the erosion of core protective mechanisms embedded in IHL and international law more broadly, diluting the emphasis on positive obligations to take measures to mitigate civilian harm in favour of manufacturing space for proactive justifications for its occurrence.Footnote 35 By leveraging quantitative modalities and the increased speed and scale they provide, integrating AI-DSS into the JTC may further exacerbate this erosion.Footnote 36
Although proportionality assessments have been described as equationsFootnote 37 or calculations,Footnote 38 they are not mathematical problems to be solved.Footnote 39 Within various phases of the JTC, outlined below, military decision-makers are required to undertake complicated analyses based on vast amounts of intelligence and information in order to assess and avoid or in any case minimize the potential for civilian harm before approving or effectuating strikes. In so doing, they must continuously qualitatively balance the military necessity of strikes against humanitarian concerns, take all feasible precautions to avoid or at least minimize civilian harm, and make ethical and legal judgements about what constitutes “excessive” harm in real time, often under intense pressure and time constraints.Footnote 40 This article’s premise aligns with Wright’s conclusion that there is “no bright-line rule” for determining what constitutes excessive civilian harm in relation to anticipated military advantage.Footnote 41
Moreover, pure “objectivity” in these assessments is not required, possible, or necessarily desirable. The International Committee of the Red Cross (ICRC) Commentary on the Additional Protocols explains that the proportionality test is inherently subjective, granting commanders a “fairly broad margin of judgement”, and emphasizes that it should be guided primarily by “common sense and good faith”.Footnote 42 As Cohen and Zlotogorski observe, proportionality is so context-dependent that no uniform standard can be applied; each situation must be assessed individually.Footnote 43 This means that, in practice, a military decision-maker must carry out a case-by-case, context-sensitive evaluation. Such assessments rely on legal reasoning, ethical judgement and situational awareness.Footnote 44 Years of experience and levels of IHL training play a significant role in leading to reasonableness of decision-making, as does access to well-trained legal advisers.Footnote 45
The standard of the “reasonable military commander” reflects the way in which the broader legal concept of reasonableness is enshrined in IHL.Footnote 46 Henderson and Reece offer a deeper examination of the historical and judicial development of the term, according to which commanders, drawing on their experience and training, make proportionality judgements based on the information reasonably available to them at the time.Footnote 47 However, this subjectivity is not without limits; it must align with what a hypothetical reasonable commander would have concluded under similar circumstances, grounding the assessment in both legal norms and practical military realities. This leads Henderson and Reece to conclude that the reasonable military commander standard reflects an “objective but qualified” approach.Footnote 48 The standard acknowledges the complexities and uncertainties inherent in real-time decision-making during armed conflict, embedding a degree of subjectivity that reflects the commander’s training, situational awareness, experience, rationality, honesty and good-faith intent in interpreting ways to avoid or in any case minimize harm to civilians while allowing for effective achievement of concrete and direct military advantage as a result of targeting decisions.Footnote 49 Other factors, including checklists or auxiliary means of intelligence setting out requirements of information necessary to make decisions within the MDMP, can help to ensure a higher degree of objectivity in reaching the standard of reasonableness for military commanders, but they cannot be a substitute for context-appropriate human judgement and control.Footnote 50
In other words, once a proportionality assessment becomes necessary, it must be conducted through a comprehensive harm lens – one that accounts not only for civilian fatalities but also for “injury to civilians”, including mental harm, displacement, loss of infrastructure and livelihood, and damage to cultural or environmental assets. These considerations are not reducible to mere data points,Footnote 51 and proportionality cannot be determined through fixed arithmetic or algorithmic values.Footnote 52 To operationalize this understanding, military training on the application of proportionality must emphasize analogical reasoning (encapsulated in the notion of the “reasonable military commander”) over rigid computational thresholds. Rather than quantifying civilian harm through abstract metrics, training modules should be built on comparison-based exercises drawn from past cases that emphasize the integrated approach outlined above: setting ex ante civilian harm expectations against actual ex post effects, informed by experiential knowledge and moral reasoning.Footnote 53 All of these are areas in which human judgement excels. Some might argue that large language models (LLMs), with their strong pattern recognition abilities, could perform these tasks better than humans,Footnote 54 but studies show that these models often miss subtle contextual meanings and that their performance declines further when they are compressed or made more efficient.Footnote 55 This suggests that context-sensitive human judgement and control cannot technically be replaced by LLMs or AI-enabled systems. Moreover, even using them merely to assist humans carries risks, some of which are outlined below. Militaries working to integrate these systems within targeting operations should be aware of these risks and work to eliminate or minimize them; specifically in this instance, introducing LLMs to targeting cycles could create situations of over-reliance, leading to cognitive shifting, where human judgement deteriorates due to excessive trust in AI outputs. This too is discussed in more detail below.
While proportionality assessments are challenging, particularly for commanders under pressure, this difficulty reflects the high-stakes nature of the decisions involved. As Gillard observes, setting precise qualitative parameters may be difficult, but the determination is not impossible in practice.Footnote 56 Proportionality must be understood as a case-by-case, context-specific balancing exercise, not a fixed metric reducible to algorithmic modelling.Footnote 57 In the context of targeting operations, examined in the next section, this lens helps to reveal how such balancing is to be operationalized when commanders must weigh proportionality.
The JTC and proportionality assessments
Targeting procedures provide a structured methodology through which military forces identify, analyze and engage targets while complying with operational, legal and ethical obligations.Footnote 58 While the precise formulations vary, many States and alliances use comparable targeting models. For example, both the United States and the North Atlantic Treaty Organization (NATO) employ a JTC consisting of six non-linear phases, and while the terminology and sequencing occasionally differ (NATO’s phase 1, for instance, explicitly incorporates the commander’s intent, objectives and targeting guidance), the underlying logic and functions of each step largely overlap. Likewise, other conceptual frameworks (such as “find, fix, track, target, engage, and assess”Footnote 59) capture much of the same process. Because the US JTC is publicly articulated, doctrinally detailed and broadly mirrored in NATO doctrine and practice, this section uses it as a representative model to illuminate where and how key decisions, especially those related to proportionality assessments, are made within targeting procedures. That cycle generally includes the following six phases (see Figure 2):
1. End-state and commander’s objectives: Defining strategic military goals and desired outcomes.
2. Target development and prioritization: Identifying, verifying/validating and prioritizing targets based on intelligence and mission goals.
3. Capabilities analysis: Assessing the available strike options and their effectiveness.
4. Force assignment: Allocating specific military assets (e.g., air strikes, artillery, cyber operations) to engage the target.
5. Mission execution: Carrying out the targeting operation while ensuring compliance with relevant laws and the rules of engagement.
6. Assessment: Evaluating the effectiveness of the operation and adjusting for future operations if necessary.

Figure 2. Joint Targeting Cycle. Source: US Department of Defense (DoD), Joint Targeting, Joint Publication 3-60, 28 September 2018, Fig. II-2, available at: www.esd.whs.mil/Portals/54/Documents/FOID/Reading%20Room/Joint_Staff/21-F-0520_JP_3-60_9-28-2018.pdf.
AI-DSS are reportedly increasingly being integrated and used at multiple phases of this JTC, particularly in target development and prioritization (phase 2), capabilities analysis (phase 3) and mission execution (phase 5).Footnote 60 However, the use of these systems raises concerns about whether human decision-makers can retain cognitive autonomy over targeting decisions – particularly those made in force assignment (phase 4) – once AI-DSS have been integrated, or whether they will become overly reliant on algorithmic outputs because of the quantification logics embedded within and extended by these systems.
The evolution of quantification logics within targeting
The use of AI-DSS marks the latest step in a broader trend in military decision-making toward quantification and data-driven reasoning, especially in relation to how proportionality is assessed. As Gunneflo and Noll argue, this shift is underpinned by a cost-benefit rationality that frames targeting decisions as optimization problems, where the objective is to minimize costs (civilian harm) while maximizing benefits (military advantage).Footnote 61 Rather than viewing proportionality as a subjective, context-sensitive legal judgement, this approach seeks to render it computationally manageable, thereby reinforcing the flawed belief that ethical and legal dilemmas can be resolved through quantifiable metrics.Footnote 62 Though military AI-DSS have only recently gained public attention with their use in Ukraine and Gaza, the elevation of computational models in MDMPs has clear historical precedents. The use of such quantification logics on the battlefield can be traced over several decades and is rooted in the belief that data-driven insights can improve or even perfect decision-making by reducing or eliminating human error and increasing speed and efficiency.Footnote 63 That is to say, AI-DSS technologies did not introduce this logic; rather, they reinforce and expand it, operating at speed and scale within a framework shaped by pre-existing policies, tools and practices. The next section briefly traces the origins and continuations of this framework. This is crucial for understanding how today’s AI-DSS operate within, rather than outside of, entrenched military logics of quantification and optimization. By situating these systems within their broader lineage, we can better identify the assumptions they perpetuate and the constraints they impose on legal and ethical judgement.
Vietnam and Cold War era: The “sci-tech” lens and the McNamara fallacy
The roots of quantification in warfare can be traced back to the Vietnam and Cold War era, a pivotal period in the rise of computational logics.Footnote 64 Emerging data-processing technologies employed by the US military began to shape how complex social and military realities were perceived and acted upon. Advances in systems analysis, statistical modelling and computer-assisted decision-making converged with military strategy, embedding numerical metrics at the heart of operational planning. Schwarz describes this as viewing operations through the “scientific-technological lens”, a method that prioritizes and privileges the quantifiable aspects of war.Footnote 65 This shift reflects a broader faith in data, where the perceived objectivity of quantification overshadows the complex qualitative dimensions of human conflict.Footnote 66
Specifically, the Vietnam War marked a key shift for the United States toward quantification in military decision-making. The introduction of the Hamlet Evaluation System (HES), a data-driven tool measuring pacification progress through metrics like enemy presence and government control, is one example of this.Footnote 67 Though statistically innovative, the HES was undermined by subjective inputs and data manipulation, failing to capture on-the-ground realities. This represented an example of what would become known as the McNamara or quantification fallacy, a critique of the over-reliance on quantifiable data in decision-making processes.Footnote 68 The fallacy arises when decision-makers prioritize measurable variables while disregarding critical qualitative or human factors simply because they are not easily captured through measurable elements. At its core, the fallacy assumes that what cannot be quantifiably measured is irrelevant, which results in flawed reasoning. Particularly in the context of armed conflict, where the environment is highly complex and dynamic, this can produce grave operational, legal and ethical consequences, including increased risks of civilian harm.Footnote 69
In this era, the United States also introduced the term “collateral damage” as a euphemism for civilian harm.Footnote 70 As nuclear tensions rose, so did the use of collateral damage estimation methodologies (CDEMs), or analyses conducted by the military to estimate potential harm to civilians and damage to civilian property during an operation. Initially focused on infrastructure, CDEMs soon shifted toward estimating civilian deaths.Footnote 71 These estimates later formed the basis for non-combatant casualty cut-off values (NCVs), discussed below, which set quantitative thresholds that cognitively influence the interpretation of “acceptable” levels of civilian harm.
Network-centric warfare: Connecting everything to everything else
Building on the United States’ legacy of Vietnam-era quantification, and despite the limitations of those approaches, this evolution in military decision-making continued into the late twentieth century, culminating in the rise of network-centric warfare (NCW), which further embedded data-driven approaches at the heart of operational doctrine.Footnote 72 NCW aims to turn information superiority into combat power by linking sensors, decision-makers and shooters to enable shared awareness, faster decisions, higher operational tempo, and greater lethality, survivability and self-synchronization.Footnote 73 It also seeks to redefine effectiveness through speed, precision and data integration, enabling commanders to act pre-emptively with optimized force configurations.Footnote 74 Within this doctrine, the risk is that legal assessments undertaken by commanders will shift from qualitative, context-sensitive judgements to decision variables optimized by predictive modelling and surveillance data, where speed becomes a critical factor, as explored further in the discussion on AI-DSS below.
The “Global War on Terror”: Predictive analytics, CDEMs and NCVs
With NCW as the foundation, the post-9/11 shift in US military doctrine (mirrored by allied forces during coalition operations) toward counterterrorism and counter-insurgency, combined with rapid technological advances, accelerated the reliance on quantification logics and tools in targeting. The widespread use of armed drones, combined with algorithmic surveillance systems that assess threats and estimate civilian harm,Footnote 75 paved the way for controversial “signature strikes”, targeting individuals based on behavioural “pattern-of-life” analyses rather than confirmed identities.Footnote 76 These strikes reportedly relied on metadata and algorithms to infer threats, often without direct human verification, raising significant IHL concerns about the distinction between civilians and combatants and the obligation to take feasible precautions in attack.Footnote 77 As Schwarz outlines, these technologies reshape human decision-making in war by embedding operators within a technological framework that projects enhanced, data-driven “superhuman” perception and an illusion of moral (and, as argued here, legal) certainty. However, this reliance on algorithmic profiling and quantification logic reduces complex legal and moral judgements to technical processes, enabling “signature strikes” where individuals may be deemed guilty until proven innocent, often only after death.Footnote 78
The United States extended the CDEM highlighted above through a suite of analytical tools during this era.Footnote 79 One of the most widely known, the Collateral Damage Estimation Tool, colloquially known as “Bugsplat”, uses imagery and data to estimate potential civilian harm.Footnote 80 The Population Density Reference Table is a tool that projects likely civilian presence in target areas.Footnote 81 Other systems, such as the Digital Imagery Exploitation Engine and the Digital Precision Strike Suite Collateral Damage Estimation tool, employ algorithms to support strike planning by locating and characterizing targets, assisting in weapon selection and coordinate measurement, estimating collateral damage, and producing output graphics for databases.Footnote 82
These models often omit key factors that are essential for a full understanding of civilian harm, such as the effects of secondary explosions and other reverberating effects.Footnote 83 While these tools project an aura of precision and objectivity, critics contend that they risk reducing human lives to statistical abstractions, obscuring the moral gravity of lethal decisions.Footnote 84 From a legal perspective, such omissions raise IHL concerns, particularly as regards the principles of distinction and proportionality, which require decision-makers to consider all foreseeable effects on civilians. Excessive reliance on algorithmic outputs, particularly in high-tempo decision-making environments, risks sidelining the qualitative and context-specific assessments required under IHL. The speed and efficiency introduced by AI-DSS can create pressure to act quickly, often at the expense of the careful, time-consuming evaluation needed to foresee, assess and minimize civilian harm. This tension between faster decision cycles and the legal obligation to account for all reasonably foreseeable harm can lead to systematic under-estimation of civilian risk.Footnote 85 Accordingly, human judgement must remain central. Quantitative tools must aid, not replace, the deliberate weighing of civilian risk mandated by IHL. Decisions informed by AI-DSS should be complemented by other intelligence sources and contextual analysis, even when doing so slows the decision-making process. Preserving this deliberative space is essential to ensuring compliance with the principles of distinction, precaution and proportionality.
A final example of quantification logic is the use of NCVs, or policy thresholds that define the maximum number of civilian deaths that a commander is authorized to accept as “collateral damage” in a proposed strike before requiring higher-level approval. The United States introduced these metrics in 2011 to streamline proportionality assessments. NCVs sometimes set the threshold as low as zero, effectively prohibiting civilian casualties without senior authorization.Footnote 86 During Operation Inherent Resolve, NCVs varied by target type and urban density – explicit thresholds of thirty were reported to be applied in operations like Iraqi Freedom and the targeting of Osama Bin Laden.Footnote 87
As Corn explains, NCVs were not intended to determine what level of civilian harm is legally acceptable or proportional under IHL; rather, they were designed to establish cognitive and procedural thresholds that prompt commanders to escalate decisions.Footnote 88 Yet simply setting numerical limits shapes how proportionality is assessed, effectively framing when anticipated civilian harm warrants heightened scrutiny relative to expected military advantage.
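To make the procedural character of such thresholds explicit, the following deliberately simplistic sketch (in Python, with invented figures that reflect no actual doctrine) shows the kind of escalation check that an NCV implies – and, through what it omits, why such a check cannot stand in for a proportionality assessment.

```python
def requires_higher_approval(estimated_civilian_casualties: int,
                             ncv_threshold: int) -> bool:
    """Escalation check of the kind an NCV implies: if the collateral damage
    estimate exceeds the delegated threshold, approval moves up the chain of
    command. Note what the check does not do: it says nothing about whether the
    expected incidental harm would be excessive in relation to the anticipated
    military advantage, which remains a contextual legal judgement."""
    return estimated_civilian_casualties > ncv_threshold

# Illustrative only: both figures are invented, not drawn from any doctrine.
if requires_higher_approval(estimated_civilian_casualties=12, ncv_threshold=10):
    print("Escalate to a higher approval authority")
else:
    print("Within delegated authority (precautions and proportionality still apply)")
```

Viewed this way, the anchoring concern discussed below becomes concrete: once a single number structures the approval workflow, it is that number, rather than the legal standard of excessiveness, that tends to organize attention.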
NCVs function as policy tools that may have the effect of overriding nuanced deliberation, reducing complex legal and ethical reasoning to mathematical or algorithmic calculations. They effectively codify an acceptable level of civilian deaths and risk anchoring that standard into any legal analysis of incidental harm.Footnote 89 By framing anticipated civilian harm narrowly, as a quantifiable, acceptable cost outweighed by military advantage, NCVs risk distorting core IHL principles, reflecting a fundamental inversion of the primacy of the duty of constant care and precautions in attack and subordinating these legal requirements to proportionality logic. This approach risks shifting civilian protection from a paramount obligation to a negotiable or even optional factor, normalizing the perception that civilian harm is routine and unavoidable.
By introducing a fixed numerical threshold for acceptable civilian casualties, the metric also imposes a veneer of objectivity on a judgement that is inherently context-sensitive and qualitative.Footnote 90 Although the intention is for this quantitative benchmark to guide decisions as they scale up through the chain of command, the anchoring effect of a numerical value (discussed in more depth below) exerts significant cognitive influence within MDMPs. The broader integration of AI-DSS risks accelerating this trend, which could further undermine efforts to prevent or minimize civilian harm. Though details are scarce, the Pentagon had officially abandoned NCVs in its doctrine by 2018, citing their ineffectiveness; however, their legacy continues to shape allied practices and military operations today.
The modern era: AI-DSS, quantification and the JTC
Many militaries worldwide are developing AI-DSS and, in some cases, deploying them in active conflicts.Footnote 91 These systems have evolved beyond basic computational tools into highly advanced technologies that collect and analyze vast amounts of battlefield data, generate predictive models, and provide information that can assist in making targeting decisions. Their capabilities include data synthesis, as they can rapidly sift through satellite imagery, drone footage, intelligence reports and signals intelligence to assist in demarcating patterns that help militaries to evaluate battlefield conditions and increase situational awareness.Footnote 92 Machine-learning algorithms embedded within AI-DSS can be trained to identify movement patterns, target profiles or “threats” at greater speed, and therefore scale, than human analysts. Finally, in some settings, AI models can simulate potential battle scenarios, feed into CDEMs and offer suggestions for optimizing strike strategies.Footnote 93 AI-DSS can analyze vast intelligence inputs, detect patterns and generate an overview of potential risks,Footnote 94 using predictive models to estimate enemy behaviour, civilian harm and operational outcomes based on historical data.Footnote 95 Despite all these purported benefits, however, these systems also introduce a host of new complexities and risks, as discussed in the following section.
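Before turning to those risks, a schematic sketch helps to make the quantification logic concrete. The following illustration (in Python, with invented feature names and weights that correspond to no known system) shows the kind of scoring-and-ranking step that data-driven target nomination implies, and how it collapses heterogeneous considerations into a single sortable number.

```python
from dataclasses import dataclass

@dataclass
class CandidateTarget:
    target_id: str
    features: dict[str, float]  # quantified indicators, each scaled 0.0-1.0

# Invented weights, for illustration only; in a deployed system such parameters
# would emerge from model training and design choices, not from legal analysis.
WEIGHTS = {"signal_matches": 0.5, "pattern_of_life": 0.3, "location_risk": 0.2}

def score(target: CandidateTarget) -> float:
    """Weighted sum over quantified features. Anything not expressible as a
    numerical feature (doubt, context, indicators of protected status missed by
    the sensors) simply drops out of the ranking."""
    return sum(w * target.features.get(name, 0.0) for name, w in WEIGHTS.items())

def nominate(candidates: list[CandidateTarget], top_n: int = 10) -> list[CandidateTarget]:
    """Return the top-ranked candidates, the form in which such output
    typically reaches a human reviewer."""
    return sorted(candidates, key=score, reverse=True)[:top_n]
```

This is not a claim about how any particular fielded system works; it simply makes visible why outputs from such a pipeline can arrive with an air of precision that the underlying simplifications do not warrant – a concern taken up in the next section.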
Speed, scale and shifts in cognitive abilities: Risks of AI-DSS integration in the JTC
Many of the advancements described above, reflecting the principles of NCW, aim to enhance decision-making “efficiency” through the rapid speed and scale enabled by AI-driven data outputs.Footnote 96 In the author’s view, the integration of AI-DSS accelerates the shift from context-dependent qualitative judgements to reliance on quantified decision variables, seeking to streamline or replace complex human judgement with automated processes that promise greater speed, consistency and objectivity, all the while raising the risks of falling into the quantification fallacy outlined above. AI seems to be valued in wartime for accelerating target formation and eliminating the “human bottleneck” that slows decision-making, reinforcing the shift toward treating proportionality as an optimizable variable rather than a nuanced legal judgement.Footnote 97 A reliance on AI-driven outputs risks obscuring the inherently qualitative, context-specific considerations that proportionality assessments demand, such as civilian presence, environmental factors and the unpredictable effects of weapon use. By prioritizing speed and computational efficiency, these systems can inadvertently anchor decision-making to numerical or algorithmic metrics, reinforcing the very quantification fallacy that undermines nuanced legal and ethical judgement. Gunneflo and Noll describe this as a quest to streamline information flow,Footnote 98 while Woodcock warns that AI-DSS are being used to “smooth over friction points in human judgment”,Footnote 99 encouraging uncritical reliance on model outputs. In other words, in the drive to “fight at machine speed”,Footnote 100 human decision-makers, limited by cognitive constraints that slow processes, are deliberately pushed to the margins.
Speed and scale: Risks of reinterpreting IHL through AI-DSS
Fighting at machine speed introduces risks and raises important questions about the trade-offs involved. Trying to embed ethical and legal judgements, or portions thereof, into formulas and algorithms (arguably not even possible from a technical engineering perspective)Footnote 101 risks stripping away the qualitative context needed for conducting proportionality assessments. As Pratzner observes, while lower-level functions might be partially automated, core tasks such as target vetting, validation and nomination – all forming part of a reasonable commander’s judgement – demand time and deliberation.Footnote 102 More generally, there are concerns about the role of these systems and the structure of human–machine teaming within the JTC regarding human cognitive agency over the MDMP.Footnote 103 Specifically, and most relevant to this article, the use of AI-DSS raises critical concerns about how these systems may be reshaping the cognitive processes through which commanders make decisions.Footnote 104
Unlike the full spectrum of complex human reasoning, which includes deductive and inductive thinking, decision-making, and problem-solving,Footnote 105 AI-DSS rely on more constrained computational approaches. Machine-learning models, including transformer-based neural networks designed to process large volumes of sequential data such as text (e.g., LLMs), operate within these narrower, task-specific parameters. They can process entire sequences in parallel, enabling greater efficiency and speed, but they function through probabilistic modelling and operate within predefined parameters set by their design and intended purpose, even if shaped by human input.Footnote 106 Because these systems are inherently limited in their ability to engage with the full range of contextual, legal, moral and experiential factors necessary for “a thorough assessment in good faith of all the different components of [a given] rule as well as the circumstances at the time”,Footnote 107 algorithmically derived recommendations can contribute to a false veneer of objectivity and precision, reinforcing the illusion of quantifiable and data-driven accuracy in targeting decisions. This illusion is amplified by the speed and scale at which such systems are able to operate, forcing humans to keep pace and increasing the risk of automation bias and over-reliance on system outputs in the pursuit of technologically enabled “clean” warfare – which, as Assaad and Williams highlight, is a fundamentally unattainable aspiration.Footnote 108
Over-reliance on AI-DSS may also encourage commanders to progressively disengage from the exercise of critical judgement, leading to cognitive offloading of their tasks and ultimately deskilling.Footnote 109 This disengagement will be reinforced as decision-makers increasingly defer to AI outputs, trusting the system’s apparent rigour over their own nuanced judgement. Over time, this can weaken their ability to critically evaluate contextual factors, interpret ambiguous information or recognize subtle cues that a computational approach cannot capture.Footnote 110 The result is a risk of gradual erosion of domain-specific expertise, leaving human operators less capable of intervening effectively when algorithmic recommendations conflict with legal, ethical or operational considerations.
Rather than drawing upon their own training, operational experience, contextual understanding and sensitivity to legal and moral nuance, military decision-makers risk functioning as mere endorsers of machine-generated outputs so complex that they are impossible to trace or understand, thereby diminishing the role of human agency and responsibility in decisions.Footnote 111 This carries profound ethical and legal consequences and ultimately risks rendering the commander unable to act based on their own decisions and in accordance with IHL, particularly in conducting a lawful proportionality assessment.
When programmed into AI-DSS that assist in calculating proportionality, such a numerical approach encouraged by quantification logics risks becoming self-reinforcing, as the technology can lend a false sense of precision and a veneer of legality to decisions that, in reality, demand qualitative human judgement. This process effectively shifts decision-making away from human commanders exercising their own cognitive faculties and towards a reliance on system-generated outputs, fostering a culture in which suggestions by the AI-DSS risk being “rubber-stamped”Footnote 112 by the human rather than reached through contextual, qualitative reasoning and critical interrogation.
While commanders have long relied on tools such as CDEMs to support proportionality assessments, these tools can only ever provide a partial view. As experts emphasize, interpreting CDEM outputs depends on practised judgement, intuition, and sound legal and ethical reasoning.Footnote 113 However, the growing use of AI-DSS in the JTC due to aspirations for an ever-increasing tempo in the MDMP risks shifting this balance away from human deliberation and towards increasingly data-driven processes. While AI-DSS can assist human decision-makers by providing rapid data analysis, probabilistic predictions and pattern recognition, they remain inherently limited in capturing the context-specific, legal and ethical nuances that are critical to proportionality assessments. As such, they can at best support, but never replace, the exercise of informed human judgement in targeting decisionsFootnote 114 – and it is essential to be aware that even as assistive tools, these systems introduce risks, including automation bias, over-reliance on algorithmic outputs, and cognitive offloading, which can progressively erode operators’ critical reasoning and domain expertise.
Systemic risks that AI-DSS may pose to IHL interpretation in targeting
The ICRC’s 2024 report on International Humanitarian Law and the Challenges of Contemporary Armed Conflicts highlights that the erosion of protective elements of IHL, coupled with the introduction of new technologies, creates systemic risks for civilians.Footnote 115 Tools that rely on preset numerical thresholds or predictive algorithms can reinforce these risks by prioritizing speed, scale and computational efficiency over context-sensitive, case-by-case legal judgement. When proportionality and distinction are reduced to calculable metrics, even in the absence of case-specific examples, there is a heightened danger that civilians may be exposed to harm which could otherwise have been mitigated through nuanced human assessment.
Operationally, such practices risk recalibrating the application of IHL principles. When high levels of incidental harm are pre-approved through elevated NCVs, especially for lower-level targets, the primacy of the duty of constant care and the precautionary obligation to avoid or minimize civilian harm become displaced. Because targeting is an iterative process, decisions made in phase 4 of the JTC influence earlier steps in subsequent cycles, including CDEM modelling in phase 3. As a result, “acceptable” levels of harm may progressively shift upward, normalizing greater civilian harm and lowering the evidentiary threshold for target verification. This would disregard or even circumvent the duty of constant care under Article 57 of AP I, sidelining precautionary obligations in favour of proportionality assessments and thereby recasting the MDMP itself.
The integration of quantification logics into AI-DSS illustrates the socio-technical reality that operational tools shape how legal and ethical norms are engaged in practice.Footnote 116 Efficiency, speed and data optimization risk outweighing humanitarian considerations, suppressing critical scrutiny of algorithmically generated outputs. Rather than providing a genuine legal justification, reliance on numerical thresholds can create the illusion that decisions are legally or ethically grounded simply because they are algorithmically derived. This reliance risks sidelining or obscuring the careful, context-sensitive legal assessments that IHL requires, leaving human operators dependent on computational outputs while potentially neglecting critical legal considerations. As numerical thresholds become routine, they institutionalize the notion that civilian harm is inevitable and manageable through algorithmic means, further entrenching this reliance.Footnote 117
As AI-DSS systems shape and constrain the informational environment in which commanders operate, the subjective dimensions of proportionality assessments risk being sidelined.Footnote 118 In terms of shaping and constraining, AI-DSS may present a commander with a limited set of recommended courses of action or nominated targets without revealing how many alternatives exist, how the system generated or ranked them, or whether they are truly optimal. In this sense, decision-making can resemble “looking at the world through a straw”, where the commander’s perspective and options are constrained by the system’s narrow and opaque outputs. Incidental civilian harm is codified into numerical tolerances, via NCVs, predictive algorithms or both, reflecting a broader “datafication” of warfare that threatens to erode IHL’s protective aims.Footnote 119 In this setting, human commanders shift from being central legal decision-makers to mere overseers of algorithmic processes. Preserving the integrity of proportionality as a legal standard requires critically examining how AI-DSS and related practices are reshaping the balance between human judgement and machine-driven analysis in armed conflict. Unlike AI-DSS parameters, human cognition allows for situational flexibility, adapting to unforeseen circumstances, interpreting ambiguity and applying legal reasoning in complex scenarios.Footnote 120 Conversely, unlike human cognition, AI systems are ill-equipped to grapple with moral and ethical deliberation, interpret ambiguity, or assess adversary intent and strategic objectives.
From a technical standpoint, algorithmically guided systems will always lack the flexibility to engage with the qualitative dimensions of military decision-making. Their reliance on deterministic data processing limits their ability to navigate uncertainty, deception or incomplete intelligence, factors that are often central to operational judgement.Footnote 121 As Greipl notes, AI’s “forecasting” is based on data analytics of past behaviours and lacks context and human logic.Footnote 122 This renders AI systems unable to properly scrutinize, as required by IHL, whether a proposed objective is a lawful target or whether anticipated incidental harm will outweigh the expected military advantage under the circumstances prevailing at the time. While the predominant worry outlined in this article relates to how AI-DSS may skew proportionality analyses, their potential to undermine distinction is equally significant. Given that misidentification is a primary driver of civilian harm, designers, developers and deployers need a clear understanding of these systems’ error tendencies and must adopt measures to avoid, or at minimum mitigate, such misrecognition, as this understanding also feeds into proportionality assessments, should they become relevant.Footnote 123
AI-DSS are inherently constrained by the scope and quality of their training data.Footnote 124 These limitations mean they are poorly equipped to navigate environments marked by uncertainty, incomplete intelligence or ethically complex dilemmas, all three of which are ubiquitous within the context of armed conflict. In contrast, military commanders are assumed to have the training, capacity and legal obligation to interpret ambiguous information and exercise legal and moral judgement in nuanced and context-dependent situations.
Cognitive impact of AI-DSS: Automation bias, anchoring, offloading and deskilling
Gunneflo and Noll argue that historically, proportionality reasoning has functioned as a mechanism through which new technologies are lawfully integrated, largely through human interpretive judgement. Past innovations left the legal decision-making process intact, with humans still applying existing law. Digital decision support in the military, however, marks a different kind of shift: it enters the cognitive process of legal reasoning itself, potentially displacing the human role to the margins and creating a more direct link between law and technology. AI-DSS, Gunneflo and Noll contend, represent a technological leap that works from within, modifying or even displacing human judgement in unprecedented ways.Footnote 125 Woodcock importantly highlights that while human decision-making is not free from bias, the introduction of AI-DSS brings new forms of bias and alters the very structure of decision-making.Footnote 126 This shift reflects a broader cognitive transformation in warfare, one that risks marginalizing human judgement and displacing it from its central role in lawful and ethical military operations.
Concretely, the integration of AI-DSS into the JTC can introduce biases that can affect a commander’s ability to conduct legally sound proportionality assessments. These systems can subtly transfer decision-making authority from the performance of human judgement to the acceptance of algorithmic output, with knock-on effects for accountability and legal responsibility. The risk of (partial) cognitive offloading – or transferring reasoning tasks to external systems and reducing the mental effort required for decision-making – arises from designs aimed at efficiency and reducing “friction points”, as Woodcock describes.Footnote 127 Because AI-DSS can rapidly process large volumes of data and generate recommendations, commanders may be tempted to assume that the system has accounted for all necessary variables, creating a feedback loop of growing dependence on AI tools. This could raise questions of legal compliance and create operational and strategic risks including erosion of human oversight, accountability gaps, operational dependency and heightened legal exposure.Footnote 128 Mitigating these risks requires sustained training in, and consideration of, IHL compliance at every stage of system design, development, deployment and decommissioning, advancing a model of responsibility or legality by design.Footnote 129
Potential biases introduced by AI-DSS
One way in which cognitive offloading manifests in proportionality assessments during the JTC is through automation bias – that is, the tendency to place undue trust in AI recommendations without critical scrutiny.Footnote 130 This bias can lead to legally non-compliant decisions, especially when AI systems produce errors or uncertainties such as miscalculating risk, omitting qualitative factors or misidentifying a target.Footnote 131 In the JTC, a risk arises when vast targeting lists are generated: the speed of operations can create time pressure that leads to skipping target validation or verification altogether.Footnote 132 Treating AI-DSS outputs as optimized solutions can erode commanders’ critical engagement and heighten automation bias, especially in the later JTC phases (4 and 5) where precautions and proportionality must be (re)assessed under evolving conditions.Footnote 133 Simply put, at machine speed, the space for moral and contextual legal reasoning shrinks, creating the risk that human oversight will become little more than a procedural rubber stamp.
Anchoring bias is another cognitive shift that AI-DSS can introduce, directly affecting proportionality assessments. Anchoring occurs when an initial piece of information disproportionately influences subsequent decision-making. In target selection and nomination, for example, an algorithm might place a certain individual at the top of a targeting list, anchoring perceptions of that target’s military value.Footnote 134 Similarly, casualty or damage estimates from CDEMs in phase 3 may lead commanders in phase 4 to subconsciously adjust their judgements around this initial figure (even when new information suggests a different conclusion) despite their legal obligations to take all feasible precautions in attack in order to avoid or in any event minimize civilian harm.
NCVs are a concrete example of anchoring bias in practice. Once commanders internalize a set threshold of “acceptable” civilian harm for higher approval, it can shift what is perceived to be legally necessary or feasible under the duties of precaution and constant care. When proportionality becomes a numerical exercise, where figures like twenty, thirty or even hundreds of civilian deaths are weighed against eliminating a single target, it raises fundamental questions about whether the principles of precautions in attack and proportionality are truly being upheld, and whether such an approach is compatible with IHL’s object and purpose of limiting the effects of armed conflict by protecting civilians and restricting means and methods of warfare.
A final cognitive concern is deskilling or (meta)cognitive erosion, which is the gradual loss of a commander’s ability to conduct complex assessments when repeatedly following AI recommendations without exercising independent judgement. Empirical evidence has shown that increased reliance on automated support erodes individuals’ knowledge and undermines their confidence in making independent decisions.Footnote 135 For example, deskilling through over-reliance on AI has recently come to the attention of the medical field, with one study showing that doctors’ ability to detect cancer deteriorated with the use of AI-enabled decision support.Footnote 136 This risk is not limited to the medical field and will be present across all sectors where these technologies are integrated.Footnote 137 Translated to the military context, this raises serious questions about long-term operational readiness and the capacity to make critical decisions without AI assistance.Footnote 138
Returning to the earlier discussion on aligning CDEMs with battle damage assessments and after-action reports for a fuller understanding of civilian harm: in practical terms, during phase 6 of the JTC, the duties of constant care and of taking precautions in future attacks require comparing anticipated harm with actual effects and feeding that information into subsequent proportionality assessments. If commanders lose this evaluative capacity through over-reliance on algorithmic tools, whether by failing to incorporate a comprehensive understanding of civilian harm into battle damage assessments or after-action reporting or by skipping or condensing these steps under operational time pressure, cognitive offloading and deskilling can occur. This not only risks non-compliance with the proportionality rule but also undermines compliance with rules related to the precautionary principle and the duty of constant care.Footnote 139
Conclusion: Human(e) judgement, AI-DSS and proportionality assessments
This author has argued, along with others elsewhere, that framing AI-DSS as mere tools has led to an under-estimation of their impact on cognitive decision-making within the JTC.Footnote 140 Limited transparency about AI-DSS design and use, coupled with the absence of focused scrutiny, has left a critical gap in understanding. The international debate’s persistent focus on autonomous weapon systems (AWS) has further obscured the growing influence of AI-DSS, which lack a comparable regulatory forum. To address this, it is essential to reassert the primacy of human legal and ethical reasoning in targeting decisions, drawing from lessons on human–machine interaction and AWS governance. Greater awareness of how AI-DSS affect human cognition and deliberation must be promoted to advance inclusive, informed debate on the risks and structural shifts that these systems introduce.
To ensure compliance with IHL’s protective aims, the principles of distinction and precautions in attack must remain primary, with proportionality assessments (which also include precautionary requirements) only undertaken as a final safeguard once all feasible civilian harm avoidance or reduction measures are exhausted. Proportionality is not a formula; it requires contextual, interpretive judgement grounded in legal reasoning, operational realities and humanitarian concerns. AI-DSS can inform but never replace this process. Deliberative legal spaces, meaning the points in the targeting process at which legal decisions must be made, should be built into JTC workflows to ensure that these decisions rely on qualitative, precedent-informed analysis rather than machine-speed processing.
Training must emphasize analogical reasoning over numerical thresholds, approaching civilian harm as a broad multidimensional reality. Civilian perspectives, gathered through consequence reviews, red teaming,Footnote 141 community impact reports and partnerships with non-governmental organizations, should be incorporated into post-strike analysis to ensure that lived experiences shape future precautions and proportionality assessments. Aligning ex ante civilian harm estimates with ex post assessments enables lessons from past operations to refine future decision-making and adjust methods where harm is consistently over- or under-estimated.
Integrating AI-DSS into targeting operations requires considering and designing for legal and moral discretion and compliance at every stage of the AI system life cycle through legal reviews and safety assurance frameworks around human–machine teaming, preserving cognitive friction and space and time for decision-making rather than engineering them out. This means resisting system designs that accelerate decision-making to the point where opportunities for critical assessment are reduced or eliminated. Military decision-makers require moments of deliberation to evaluate intelligence, scrutinize algorithmic recommendations and apply IHL principles. Instead of engineering these pauses out in the name of efficiency or tempo, AI-DSS should be built to maintain and, where necessary, create space for reflection and challenge. Addressing this may mean embedding human decision checkpoints, tailoring timelines to operational risk and slowing down high-stakes proportionality assessments, somewhat countering the speed incentives of AI-DSS integration. In essence, the use of AI-DSS in targeting procedures may offer operational advantages, but militaries and other stakeholders must also be aware of the new risks they introduce through shifting the choice architecture in the MDMP. Training must address not only system operation but also the risks of automation bias, cognitive offloading, over-reliance and deskilling.
In practice, AI-DSS shape, not just support, targeting decisions. Global and national fora, such as the UN General Assembly or the Global Commission on Responsible AI in the Military Domain (GC REAIM), must expand the debate beyond AWS to address AI-DSS’s systemic influence on precautions, proportionality and wider legal obligations. In short, responding to the growing influence of AI-DSS in targeting demands more than technical adjustments – it requires reaffirming the human(e) centre of lawful decision-making. Proportionality assessments are contextually based legal and moral judgements that must remain rooted in qualitative human reasoning, experience and accountability. Ultimately, preserving the integrity of IHL in the age of AI will depend not on how advanced the systems become, but on how firmly stakeholders insist that targeting decisions remain rooted in context-appropriate human judgement: legally grounded, morally aware, and irreducible to algorithmic logic.