
“Borgs in the org” and the decision to wage war: The impact of AI on institutional learning and the exercise of restraint

Published online by Cambridge University Press: 27 January 2026

Toni Erskine*
Affiliation:
Coral Bell School of Asia Pacific Affairs, Australian National University (ANU), Canberra, ACT, Australia Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK
Jenny L. Davis
Affiliation:
Department of Sociology, Vanderbilt University, Nashville, TN, USA School of Sociology, ANU, Canberra, ACT, Australia
Corresponding author: Toni Erskine; Email: Toni.Erskine@anu.edu.au

Abstract

In this article, we maintain that the anticipated integration of artificial intelligence (AI)-enabled systems into state-level decision making over whether and when to wage war will be accompanied by a hitherto neglected risk. Namely, the incorporation of such systems will engender subtle but significant changes to the state’s deliberative and organisational structures, its culture, and its capacities – and in ways that could undermine its adherence to international norms of restraint. In offering this provocation, we argue that the gradual proliferation and embeddedness of AI-enabled decision-support systems within the state – what we call the ‘phenomenon of “Borgs in the org”’ – will lead to four significant changes that, together, threaten to diminish the state’s crucial capacity for ‘institutional learning’. Specifically, the state’s reliance on AI-enabled decision-support systems in deliberations over war initiation will invite: (i) disrupted deliberative structures and chains of command; (ii) the occlusion of crucial steps in decision-making processes; (iii) institutionalised deference to computer-generated outputs; and (iv) future plans and trajectories that are overdetermined by past policies and actions. The resulting ‘institutional atrophy’ could, in turn, weaken the state’s responsiveness to external social cues and censure, thereby making the state less likely to engage with, internalise, and adhere to evolving international norms of restraint. As a collateral effect, this weakening could contribute to the decay of these norms themselves if such institutional atrophy were to become widespread within the society of states.

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press.

1. Introduction

Artificial intelligence (AI) will increasingly infiltrate what is arguably the most consequential decision that we can collectively make: the decision to wage war (Deeks et al. Reference Deeks, Lubell and Murray2019; Erskine & Miller, Reference Erskine and Miller2024a). Ample attention has been paid to the emergence and evolution of AI-enabled systems used in the conduct of war, including lethal autonomous weapons systems under the confronting banner of ‘killer robots’. However, the prospect of AI driving the necessarily prior stage of war-making – the determination of whether and when to engage in organised violence – has received less attention.Footnote 1 Following recent studies that have begun to redress this relative neglect by examining particular risks and opportunities that would accompany the infiltration of AI into state-level resort-to-force decision making (see contributions to Erskine & Miller, Reference Erskine and Miller2024b), this article will propose a hitherto overlooked risk of this anticipated development. Namely, we suggest that such AI-enabled systems have the potential to transform the very structures, cultures, and capacities of those states that rely on them – and in a way that could diminish their propensity to adhere to international norms of restraint.

The imperative to exercise restraint is fundamental to the ethics and laws of war. It underlies the principles that guide states’ decision making when engaging in armed conflict. The possibility that relying on AI-enabled systems in such deliberations could affect a state’s adherence to these international norms of restraint demands serious attention. A case has already been made, in a separate paper, that the gradual introduction of AI-driven systems into resort-to-force decision making risks undermining our collective adherence to international norms of restraint in two ways: (i) by fostering the misperception that those responsible for exercising forbearance – states, political leaders, and military leaders – are off the moral hook when relying on such systems (‘the risk of misplaced responsibility’), thereby creating the illusion that the task of complying with these norms lies elsewhere; and (ii) by inserting unwarranted certainty and singularity into complex prospective judgements (‘the risk of predicted permissibility’), thereby prompting a recalibration of what counts as adherence to them (Erskine, Reference Erskine2024b). The argument in both cases is that while our commitment to international norms of restraint may endure – evident, for example, in passionate and often genuine appeals to their significance and standing – compliance with these norms may be inadvertently sacrificed.Footnote 2

Our purpose here is to propose and interrogate a third, distinct way that the infiltration of AI-enabled systems into decisions of whether and when to wage war could undermine adherence to international norms of restraint. In identifying this third potential risk, our focus will be on how AI-enabled systems could weaken what we maintain is the state’s crucial capacity for ‘institutional learning’ and, as a result, its responsiveness to international social pressure and censure. We label this additional challenge ‘the risk of institutional atrophy’ and link it to what we call the ‘phenomenon of “Borgs in the org”’. We thereby provide an initial provocation, conceptual framework, and set of challenges to be tested and explored in future research.

In pursuing this preliminary analysis, we take four steps. We begin by briefly sketching how AI can support – and sometimes displace – human decision making, highlighting the corresponding ways that AI might contribute to the decision to wage war, and clarifying our particular focus in this article. We then turn to the international normative landscape in relation to which states make the decision to engage in (or refrain from engaging in) armed conflict, and the value of employing AI-enabled systems to assist states to navigate this terrain. Third, we make the dual claim that the state-level agency displayed in the very acts of contemplating and waging war is accompanied by a capacity for ‘institutional learning’ and, moreover, that this capacity is fundamental to the state’s responsiveness to, and internalisation of, the values and constraints that define this normative landscape. Fourth, we anticipate how the gradual integration of AI-enabled systems in state-level decision making on the resort to war will engender subtle but significant changes to the state’s deliberative and organisational structures, its culture, and its capacities. Specifically, we ask what these changes mean for the state’s ability to learn at the level of the institution as a whole and thereby meaningfully respond to both evolving international norms of restraint and censure when they are abrogated.

2. AI and decision making

We define AI as computing technologies that mimic aspects of intelligent human behaviour. They can operate through simple rule-based algorithms (indicating if this happens, then that follows). Alternatively, they can operate through machine learning (ML) techniques. ML is integral across AI applications in natural language processing, machine vision, and anomaly detection, among others. It uses inferential models trained to classify data points and identify patterns within an existing dataset for purposes of generalisation to novel data sources. ML is therefore anchored by a history that has been quantified and encoded. This quantified history shapes how the present is framed and understood, while informing, projecting, and thus producing future action. In addressing the potential use of AI-enabled systems in resort-to-force decision making, we focus on current technologies that employ ML techniques.

Across multiple domains and through a range of applications, ML decision systems perform two basic tasks: prediction and diagnostics. By prediction, we mean forecasting probable outcomes in a particular case, based upon the correspondence between characteristics of that case and patterns detected in completely distinct cases analysed as part of the ML model’s training data. For example, courts and banks use ML models for predictive risk analysis, discerning the likelihood of recidivism or loan default for a defendant or applicant, respectively (Angwin et al. Reference Angwin, Mattu and Kirchner2016; Kumar et al. Reference Kumar, Saheb, Preeti, Ghayas, Kumari, Chandel and Kumar2023); hiring platforms and admissions offices use ML to predict employee and student success (Ajunwa, Reference Ajunwa2023; Marcinowski, Reference Marcinowski2022); and ‘smart’ appliances predict the risk of a household running low on milk, bread, or other consumables (Strengers & Kennedy, Reference Strengers and Kennedy2020).

By diagnostics, we mean the use of data points to convert an ambiguous set of circumstances into a legible story. This is a sense-making application that sets the background conditions from which action stems. Medicine is the most obvious example, utilising ML to diagnose illness or disease through a compilation of symptoms and bio-indicators (Sidey-Gibbons & Sidey-Gibbons, Reference Sidey-Gibbons and Sidey-Gibbons2019). Diagnostics also underpin emotion detection, inferring inward feelings from outward bodily expressions (Kim et al. Reference Kim, Bryant, Srikanth and Howard2021), price-setting tools that diagnose the underlying conditions of a market (Calvano et al. Reference Calvano, Calzolari, Denicolò and Pastorello2020), and test proctoring programs that translate situational factors into an assessment of probable (dis)honesty (Nigam et al. Reference Nigam, Pasricha, Singh and Churi2021).

Though distinct, predictions and diagnostics are interrelated. Diagnosing a situation may suggest likely (i.e., predictive) outcomes, while predicted trajectories can become part of the diagnostic story. As demonstrated in the above examples, predictive and diagnostic ML can inform human decision making. These ML decision tools can also act autonomously. For example, financial assessment scores may activate an automatic loan approval or denial, AI-enabled appliances may auto-order groceries, and market analyses may pre-set prices for apartment rentals in a particular region.
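As a minimal, hypothetical sketch of this interrelation, the following Python fragment chains a diagnostic label and a predictive risk score into an automated action; the indicator data, labels, threshold, and 'auto_decide' function are illustrative assumptions rather than a description of any real system.

```python
# Hypothetical sketch: a diagnostic label and a predictive score chained into
# an autonomous action, with no human review between output and action.
from sklearn.tree import DecisionTreeClassifier

# Diagnostics: convert a bundle of indicators into a legible situational label.
indicators_history = [[1, 0, 1], [0, 1, 0], [1, 1, 1], [0, 0, 0]]
situation_labels = ["strained", "stable", "strained", "stable"]
diagnoser = DecisionTreeClassifier().fit(indicators_history, situation_labels)

def auto_decide(indicators, risk_score, threshold=0.8):
    """Illustrative automated decision: diagnosis plus prediction triggers action."""
    situation = diagnoser.predict([indicators])[0]  # the diagnostic 'story'
    if situation == "strained" and risk_score > threshold:
        return "deny"     # action taken automatically
    return "approve"

print(auto_decide([1, 0, 1], risk_score=0.9))  # -> deny
```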

We have every reason to believe that these ML-driven tools will increasingly infiltrate – and, to some degree, are already beginning to influence – state-level decisions about the resort to force.Footnote 3 Indeed, when we turn to this high-stakes context, the same predictive and diagnostic functions just outlined are relevant and valuable. Moreover, within this domain, we can also usefully distinguish between AI-driven decision tools that would supplement and support human deliberations and those that would act autonomously. In terms of the latter, AI-enabled systems could conceivably be employed as metaphorical ‘AI generals’ to which authority would be temporarily delegated to independently calculate and carry out specific courses of action, including those that would constitute a resort to war.Footnote 4 This could occur, with the technology that we have now, through various manifestations of automated self-defence, such as the potentially volatile interactions (and unintended escalations) between autonomous aerial or underwater vehicles; the reactions of automated air and missile defence systems; and automated responses either to cyber-attacks, or more alarmingly (and hopefully hypothetically) to indications of a nuclear strike (see, e.g., Andersen, Reference Andersen2023; Deeks, Reference Deeks2024a, Reference Deeks2025b; Deeks et al. Reference Deeks, Lubell and Murray2019, pp. 7–10, 18–19; Wong et al. Reference Wong, Yurchak, Button, Frank, Laird, Osoba and Bae2020; Zala, Reference Zala2024).

Alternatively, and seemingly more innocuously, AI-enabled systems could be used to inform human decision making on whether and when to wage war, with human agents – acting either individually or in collective decision-making bodies – as the final arbiters (see, e.g., Davis, Reference Davis2024; Deeks et al. Reference Deeks, Lubell and Murray2019, pp. 5–7, 10–11; Erskine, Reference Erskine2024b). Although such non-autonomous systems may conceivably substitute for individual human actors along the decision-making chain, they would support rather than displace those human actors ultimately burdened with the decision of whether to resort to force. We remain interested in this latter, prospective scenario: AI-enabled ‘decision-support systems’ (as they are commonly labelled) used to aid complex human deliberations over whether and when to wage war, including navigating the international normative landscape that defines the very limited cases in which such action is ethically and legally justified. It is to this normative landscape that we now turn.

3. Jus ad bellum norms and the prospect of AI-assisted restraint

Established ethical and legal guidelines demand that states exercise restraint when deciding whether and when to wage war. With the aim of suggesting some of the resort-to-force assessments that the predictive and diagnostic functions of AI-enabled decision-support systems might underpin, we briefly address these ethical and legal guidelines as they are articulated within the just war tradition and international law, respectively.

Ethical constraints are set out most prominently in the ‘just war tradition’, a centuries-old, evolving consensus on principles that set parameters for just and unjust behaviour in the context of war (see Johnson, Reference Johnson1981). These just war principles entail responsibilities to exercise restraint – even amidst the chaos of contemplating and conducting war. They are generally divided into two categories: jus ad bellum principles, which address the resort to organised violence; and jus in bello principles, pertaining to the conduct of war. In contrast to much recent work on military applications of AI, our focus on the decision to wage war takes us into jus ad bellum territory.

Ethical principles within this jus ad bellum branch of the just war tradition establish strict conditions that must be met for the resort to force, or its continuation, to be permissible (for an introduction, see Lazar, Reference Lazar and Zalta2020). These include the following: that waging war be a last resort (in other words, that it satisfy the requirement of necessity, meaning that its legitimate aims cannot be achieved by less harmful means); that the war as a whole meet the standard of proportionality (in that the harm that it would cause not outweigh the good it seeks to achieve); that there be a reasonable chance of success; and, most significantly, that there be a just cause to engage in armed conflict.Footnote 5 Post-1945, just cause tended to be conceived narrowly within just war thinking as self-defence and defence of others against aggression. Nevertheless, especially from the 1970s, just-war-inspired ethical arguments increasingly extended just cause to include the protection of peoples across state borders – arguments that have been couched in the language of ‘humanitarian intervention’ and, since the early 2000s, a ‘responsibility to protect’ (e.g., International Commission on Intervention and State Sovereignty, 2001; O’Driscoll, Reference O’Driscoll2008, pp. 14–18, 73–4; Walzer, 1977/Reference Walzer2015, p. 107; Wheeler, Reference Wheeler2000).

As well as being rehearsed and revised over centuries within the just war tradition, jus ad bellum conditions are institutionalised in international law. Customary international law recognises principles of proportionality and necessity. Footnote 6 Moreover, just cause is enshrined in the United Nations (UN) Charter (United Nations, 1945), which dictates that states must ‘refrain… from the threat or use of force against the territorial integrity or political independence of any state’ (Chapter I, art. 2, para. 4), with the explicit exception of ‘the inherent right of individual or collective self-defence if an armed attack occurs’ (Chapter VII, art. 51).Footnote 7 Although not addressed in the UN Charter, and subject to debate, this exception has been argued by some states, just war theorists, and international lawyers to allow for a degree of anticipatory self-defence, or the defensive resort to force before an expected armed attack.Footnote 8 As for an institutionalisation of the expanded notion of just cause alluded to above, all member states of the UN endorsed the so-called ‘responsibility to protect’ vulnerable populations from mass atrocity crimes in 2005 at the World Summit (United Nations, 2005; United Nations Secretary General, 2009). Although the final agreement remains legally non-binding (Byers, Reference Byers, Thakur and Maley2015, p. 101; Glanville, Reference Glanville2021, p. 111), the ‘responsibility to protect’ has been invoked in UN General Assembly and Security Council resolutions and there appears to be agreement that the Security Council is legally empowered to authorise military interventions in response to genocide, ethnic cleansing, war crimes, and crimes against humanity.Footnote 9 This provides another strictly delimited exception to the prohibition on the resort to war that demands careful justification and a detailed assessment of relevant circumstances before it can be invoked.

In light of these ethical and legal jus ad bellum principles, which define clear responsibilities of restraint by applying strict legitimizing conditions to the resort to force, we make three observations that are germane to the discussion that follows. First, while these principles place demanding responsibilities to exercise restraint on the shoulders of our most senior political and military leaders, the key loci of responsibility are those powerful, complex, structured, decision-making institutions known as states. Simply, when it comes to jus ad bellum considerations, the most prominent and important ‘moral agents of restraint’ (Erskine, Reference Erskine2024a, pp. 550–1) – or bodies that we can reasonably expect to discharge duties of forbearance given their capacities and roles – are collective actors.Footnote 10 In the following sections, we suggest that this requires attention not only to how the state, as an independent agent in itself, is likely to interact with such AI-driven systems, but also to how the state, as a specifically institutional agent, becomes constituted and thereby transformed by such systems.

Second, these jus ad bellum principles represent powerful international norms. By international norms, we mean widely accepted, internalised principles that entail established codes of what actors should do, or refrain from doing, in relation to particular practices (Erskine & Carr, Reference Erskine, Carr, Osula and Röigas2016, pp. 89–95). Importantly, these shared expectations represent a dynamic consensus. They evolve alongside changing global circumstances, state practice, and espoused values – as demonstrated, for example, by the still-evolving jus ad bellum principle of just cause when it comes to considerations of both anticipatory self-defence and human protection. Such international norms both compel and constrain the behaviour of actors in world politics. Even in their abrogation, international norms are tacitly acknowledged in the attempts of states that deviate from them to hide, excuse, or justify their behaviour (Frost, Reference Frost1996, p. 105–6). They are also revealed in the condemnations of actors that witness such violations. Jus ad bellum principles carry prescriptive and evaluative force. These international norms thereby have a profound effect on how states act, defend their actions, and perceive themselves and others. As such, they are a vital part of states’ strategic deliberations.

Third, in terms of states’ compliance with these international norms of restraint, or at least efforts to demonstrate compliance, the predictive and diagnostic capacities of AI-enabled decision-support systems carry clear strategic potential. Simply, they could help states determine and defend legitimate exceptions to the prohibition on the resort to force. The jus ad bellum principles cited above require states to make complex prospective assessments under conditions of uncertainty (Erskine, Reference Erskine2024b, p. 179). These include appraisals of the likely future actions of their adversaries and potential adversaries (particularly critical in cases of proposed anticipatory self-defence); and the projected consequences of their own courses of action under consideration, in addition to the effects of inaction (crucial for judgements of proportionality, last resort/necessity, and reasonable chance of success). Adherence to these principles also demands that states decipher multifaceted signs of aggression as it unfolds (fundamental to claims to self-defence), as well as emerging threats to vulnerable populations outside their borders (important if we accept that just cause includes particular cases of human protection in addition to self-defence).

It is not at all far-fetched to imagine that AI-enabled decision-support systems could provide valuable analyses that would support such assessments. They might, for example, forecast future acts of aggression by establishing correlations between current circumstances and patterns identified in historical cases. Or, these decision-support systems could uncover indications of impending mass atrocity crimes by drawing on such patterns to make sense of a vast array of data that would be beyond the power of human cognition to decipher. The outputs of these interrelated predictive and diagnostic functions could then provide invokable evidence towards attempts to justify war initiation for purposes of anticipatory self-defence or human protection.

Importantly, when it comes to forecasting acts of aggression, such methods have already been trialled. A United States (US) special forces unit of intelligence officers, working alongside Silicon Valley contractors in relation to insurgent attacks in Afghanistan between 2019 and 2021, demonstrated that neural networks – a type of ML composed of interconnected nodes reminiscent of the human brain – can be trained ‘to identify the correlation between historical data on violence and a variety of open (i.e., non-secret) sources, including weather data, social-media posts, news reports and commercial satellite images’ (‘How America built an AI tool to predict Taliban attacks’, 2024). This experiment yielded a model, ‘Raven Sentry’, which exemplifies the predictive potential of ML-driven decision-support systems that could be used in the context of jus ad bellum assessments. Raven Sentry was reportedly able to predict with 70 per cent accuracy not only when and where particular attacks would occur, but also the number of fatalities that would be caused by those attacks (‘How America built an AI tool to predict Taliban attacks’, 2024).
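Purely for illustration, a toy version of this kind of open-source prediction pipeline might look like the following Python sketch; the features, data, and network architecture are our own hypothetical choices and bear no relation to the actual Raven Sentry model.

```python
# Hypothetical illustration only: a toy neural network trained on open-source
# features of the sort reported for Raven Sentry. Features, data, and
# architecture are our own assumptions, not the actual model.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Each row: [temperature, social-media activity index, news-report volume]
X_history = np.array([
    [30, 0.9, 120], [12, 0.2, 15], [28, 0.8, 95],
    [10, 0.1, 10], [25, 0.7, 80], [15, 0.3, 20],
])
y_history = np.array([1, 0, 1, 0, 1, 0])  # 1 = an attack occurred in that window

net = make_pipeline(
    StandardScaler(),  # rescale features before the network
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
net.fit(X_history, y_history)

# Correlate current open-source conditions with patterns in the historical data.
today = np.array([[27, 0.85, 100]])
print(f"Estimated attack likelihood: {net.predict_proba(today)[0, 1]:.2f}")
```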

Separately, the AI-driven national security consultancy Rhombus Power claims to have anticipated the Russian invasion of Ukraine four months before it occurred and estimated the start of the war ‘almost to the day’ approximately four weeks before it broke out by using ML models to reveal otherwise imperceptible indicators of preparation for aggression (McChrystal & Roy, Reference McChrystal and Roy2023).Footnote 11 Rhombus Power representatives laud the ‘speed and power’ of what they describe as their ‘data aggregation and sense-making models’, which ‘learn by sifting through past data – in our case, about 10 years’ worth, going back to just before Russia’s 2014 invasion of Crimea’. They explain that these models ‘look for patterns’ in the sense of identifying correlations: ‘[w]henever X has happened in the past, Y has often been the outcome’ (McChrystal & Roy, Reference McChrystal and Roy2023). Illustrating the diagnostic function of ML models highlighted above, they explain that ‘with the help of AI’, it is possible to ‘make sense of all types of data, including early indicators, suspicious financial fingerprints, logistics activities, weapons flows, and subtle changes in infrastructure construction, as well as the tone and content of media reports’. They conclude that ‘[t]he result is a digital nervous system that warns decision-makers about gathering threats’ (McChrystal & Roy, Reference McChrystal and Roy2023), thereby mobilising AI diagnostics to identify impending aggression – and uncover evidence that could inform resort-to-force deliberations in response.
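The correlational logic quoted here can be illustrated very simply: given an encoded history, 'whenever X has happened in the past, Y has often been the outcome' amounts to a conditional frequency. The following toy Python fragment, with hypothetical indicator and outcome columns, is a sketch of that logic only, not of Rhombus Power's models.

```python
# Toy illustration (not Rhombus Power's models): 'whenever X has happened in
# the past, Y has often been the outcome', estimated as a conditional frequency
# over a small, hypothetical encoded history.
import pandas as pd

history = pd.DataFrame({
    "troop_buildup": [1, 1, 0, 1, 0, 1, 0, 1],  # X: hypothetical indicator
    "invasion":      [1, 1, 0, 0, 0, 1, 0, 1],  # Y: hypothetical outcome
})

# P(Y = 1 | X = 1): how often the outcome followed the indicator in the past.
p_y_given_x = history.loc[history["troop_buildup"] == 1, "invasion"].mean()
print(f"Historically, Y followed X in {p_y_given_x:.0%} of encoded cases")
```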

Such predictive and diagnostic capabilities of ML models – combined with both a growing reliance on AI in every realm of human decision making and states’ well-founded fear of being left at a disadvantage vis-à-vis their adversaries if they fail to pursue and develop AI-enabled military capacities (Erskine & Miller, Reference Erskine and Miller2024a, p. 138) – lead us to believe that AI-driven decision-support systems will increasingly be used by states to determine whether and when to wage war, and, particularly, whether it is justifiable to do so. But, to what collateral effect?

As already noted, there are a number of potential risks to the exercise of restraint that would accompany the use of such AI-driven decision-support systems to determine whether the resort to force is justified (Erskine, Reference Erskine2024a). We are interested here specifically in whether the gradual integration of AI-driven systems into the state’s organisational and decision-making structure – in order, inter alia, to help navigate international norms of restraint – would change the nature of the state itself in a way that could exert an unintended, countervailing effect on its compliance with these norms. Consideration of this question requires that we turn briefly to an account of the institutional agency of the state and the capacities that accompany it.

4. Organised violence, institutional agency, and the prospect of ‘institutional learning’

States wage war.Footnote 12 Although individual human actors contribute to every aspect of both the decision to launch a war and its execution, organised violence is necessarily a collective endeavour. Indeed, the common use of ‘organised violence’ as a synonym for war is instructive. Unlike the individual decisions and actions of soldiers on the battlefield, contemplating and waging war entail both deliberation and action at the level of the institution, or formal organisation, as a whole. As Michael Walzer (1977/Reference Walzer2015) observes, war ‘is a matter of state policy, not of individual volition’ (p. 39). It is also, we would elaborate, a matter of the collective decision making, coordinated action, and (something akin to) institutional ‘intention’ that underlies state policy. In short, war both requires and exemplifies the institutional agency of the state.Footnote 13

4.1 Institutional agency

By ‘institutional agency’ we mean the capacity for purposive action at the corporate level of an organisation. Of course, ‘institution’ can mean different things. As mentioned above, we are invoking it in this discussion in the sense of formal organisation, or ‘structured institution’ (Shepsle, Reference Shepsle, Binder, Rhodes and Rockman2008, p. 27). Yet, it might be useful, momentarily, to turn to another connotation of the term: ‘institution’ as an established and persistent set of rules, customs, and practices (e.g., Bull, Reference Bull1977, p. 74; Keohane, Reference Keohane1988, p. 382–6). The label institutional agency can then also help to explain how many formal organisations, including most states, are able to reach decisions and act in ways that cannot be described simply in terms of the sum of decisions and actions of their constituents. This label gestures towards the cultures and overarching structures of rules, practices, organisational hierarchies, and decision-making procedures that help to constitute such bodies – and which, in turn, frame, channel, and transform the intentions and actions of the individual human agents within them (Erskine, Reference Erskine2010, p. 265, Reference Erskine2020, p. 508). We can – and indeed do – talk about the state itself reaching a decision and acting.Footnote 14

The state as an institutional agent has impressive capacities to deliberate and act. It can deliberate in the sense of accessing and processing information – and generally has more sophisticated capacities to do so than any individual human actor (Erskine, Reference Erskine2001, pp. 73–5; O’Neill, Reference O’Neill1986, p. 63). Moreover, its formal decision-making procedures can commit the state to a policy or course of action that is different from the individual positions of some (or all) of its members (Erskine, Reference Erskine2014, p. 119; French, Reference French1984 chapters 3–4; Pettit, Reference Pettit2001, chapter 5). Importantly, such bodies are also able to realise group decisions through organisational structures that coordinate the roles of their constituents and achieve complex levels of integrated action within established frameworks of norms and practices. We might think here of Max Weber’s (Reference Weber, Speirs and Lassman1994) ‘living machine’ of bureaucratic organisation – with its ‘…delimitation of areas of responsibility, its regulations and its graduated hierarchy of relations of obedience’ (p. 158) – whereby the discrete skills and roles of individual human actors are compartmentalised and coordinated to make possible overarching collective endeavours.Footnote 15 Together these formal decision-making procedures and mechanisms for translating decisions into actions and achieving complex levels of coordinated action result in a capacity for purposive action at the level of the organisation as a whole. In sum, in the context of some acts and omissions, states can be considered independent agents in their own right.

Why does this matter? To begin, this concept of institutional agency allows us to more accurately describe and better understand the complex decision-making processes, organisational structures, and institutional cultures that are central to practices such as contemplating and waging war. This notion of institutional agency also carries conceptual weight when we talk about prescribing and evaluating actions. Responsibility can be attributed to the state as an institutional agent. This applies both in the prospective sense of directing certain expectations towards the state (e.g., that it discharge responsibilities of restraint in the resort to force) and in the retrospective sense of blaming the state and holding it to account for harm and wrongdoing that is not reducible to the acts and omissions of its individual members.

This understanding of institutional agency, and the attribution of responsibility that it supports, is consistent with how we talk about states in the context of jus ad bellum judgements. Russia has been widely condemned for waging a war of aggression against Ukraine (e.g., ‘EU Says “Foundations” Laid for Ukraine War Tribunal’, 2025); Israel has been exhorted to engage in a proportionate response to the 7 October 2023 attack by Hamas and censured for failing to do so in the context of its bombardment of Gaza (e.g., Atkins, Reference Atkins2025; Haque, Reference Haque2023). Simply, with respect to the initiation, escalation, or continuation of war, and the decision making that underlies each course of action, the state is considered both a relevant agent and, to return to our earlier label, a ‘moral agent of restraint’.Footnote 16 Moreover, when it comes to these complex, structured institutions, along with a capacity for purposive action at the corporate level come more sophisticated, corollary capacities. One such capacity is tacitly acknowledged in the way we direct exactly these types of normative appeals and condemnations at states, thereby assuming that they can reflect on and reform their behaviour. Namely, states also have a capacity for ‘institutional learning’.

4.2 Institutional learning

States not only possess capacities for deliberation and action (such that we understand them as agents in their own right in the context of particular actions), but are also capable of reflexivity. States’ formal decision-making structures allow them to reflect on possible courses of action and their likely outcomes, as well as on the consequences of their past conduct, and to evaluate both as they consider future plans and policies. Importantly, this capacity for critical self-reflection contributes to a potential for ‘institutional learning’.Footnote 17

We understand learning with reference to the paradigmatic case of individual human beings as ‘a process of reflecting on past experiences – and the consequences of previous acts and omissions – in a way that leads to enduring change to subsequent behaviour’ (Erskine, Reference Erskine2020, p. 505). An institutional agent, such as a state, can learn in a way that is not reducible to its individual members. Through formal deliberative processes, states are able to look back and consider the consequences of their previous acts and omissions, evaluate these in light of either external expectations or their internal goals and espoused values, and, as a result, commit to revising (or reinforcing) their rules, policies, procedures, and organisational culture, leading to a process of internal reform (Erskine, Reference Erskine2020, p. 508–9, Reference Erskine2024a, p. 541). Institutional learning thereby involves two processes that occur at the level of the organisation as a whole: (i) reflection and deliberation on past experiences, and the consequences of previous acts and omissions, resulting in a commitment to internal change; and (ii) the actual implementation of change through structural reform, which affects the future conduct of the organisation independently of shifts in attitudes, beliefs, or commitments on the part of the organisation’s individual constituents (Erskine, Reference Erskine2020, p. 509).

The state’s potential to learn is profoundly important. However, before elaborating on this significance, it is important to pause and offer a point of conceptual clarification. The learning that we understand individual human beings and formal organisations to be capable of is fundamentally different from the metaphorical ‘machine learning’ that underpins AI technologies. Just as AI-driven systems mimic intelligent human behaviour, so too do they mimic learning. Machines process data through nodes in a constructed network, altering subsequent inferences and predictions. These mathematical operations are distinct from the critical self-reflection and corresponding deliberate change requisite to individual human and institutional learning.Footnote 18 As AI systems are incapable of reflexivity, genuine learning is currently beyond their reach. Nevertheless, as we suggest in the following section, their integration into organisational structures and decision-making processes has the potential to affect states’ learning.

The state’s capacity for institutional learning matters profoundly when it comes to the state engaging with, responding to, and embracing evolving international norms, such as those that govern the resort to force. Although we maintain that external prompts are not required for institutional learning, states can – and do – engage in this critical self-reflection and corresponding change in response to external appeals and censure.Footnote 19 Indeed, this potential for institutional learning makes sense of charges of wrongdoing and entreaties for reform directed at the state when international norms are abrogated. The state can reflect on its actions in light of such external appeals for appropriate behaviour and condemnations of its conduct and, as a result, enact reforms (or, alternatively, reinforce existing policies if this evaluation reaffirms the appropriateness of previous courses of action).

One example of institutional learning, which is particularly apt in the context of the current discussion, is the United Kingdom’s (UK) formal review and process of reform in the wake of the 2003 Iraq War. The UK (along with the US) had been widely condemned for violating ethical and legal principles of restraint by initiating war against Iraq in March 2003. In 2009, UK Prime Minister Gordon Brown announced a formal public inquiry into the UK’s role in the war, which was published seven years later as the ‘Iraq Inquiry’ (or ‘Chilcot Inquiry’, after its chairman Sir John Chilcot). One key subject for investigation was the UK’s justification of its decision to go to war and join the US-led invasion of Iraq. A meticulous examination of the state-level deliberative processes that led to war, with direct reference to violations of jus ad bellum norms, resulted in a damning assessment. The Inquiry concluded that peaceful alternatives to armed conflict had not been exhausted; in the words of Chilcot, ‘[t]he point had not been reached where military action was a last resort’ (Chilcot, Reference Chilcot2016, p. 47, para. 339). Moreover, with respect to just cause, the verdict was that the case for (anticipatory) self-defence had not been established; Saddam Hussein had not posed an imminent threat (Chilcot, Reference Chilcot2016, pp. 40–47). In response to the failures identified, the Inquiry set out ‘lessons’ with ‘general application’ beyond the specific circumstances of the 2003 Iraq invasion to guide future state action and ‘collective Ministerial decision-making’ (Chilcot, Reference Chilcot2016, p. 129, para. 826). In the words of UK Prime Minister David Cameron at the time the report was released, the goal was to ‘learn the lessons of what happened and what needs to be put in place to make sure that mistakes cannot be made in the future’ (Stone, Reference Stone2016). Subsequently, the ‘lesson learning process’ prompted by the Iraq Inquiry and pursued across the National Security Community was described in a report from the UK National Security Advisor as ‘focussed on ensuring that we have the right systems, capabilities and cultures to support cross-Government decision-making on national security issues’ (United Kingdom [UK] Parliament, 2017). This included examining and refining, inter alia, ‘the machinery of Government’, deliberative practices, culture within the National Security Community, ‘knowledge management in complex situations’, and flows of information and advice within decision-making structures, as well as ensuring the existence of ‘mechanisms for reviewing and adjusting’ strategies ‘when things go wrong or risks increase’ (United Kingdom [UK] Parliament, 2017). In other words, the learning process involved deliberate, internal, structural change in response to the failures and corresponding lessons set out in the Iraq Inquiry.

There is no suggestion here that a state will necessarily realise its potential to learn in a given context. Nor are we claiming that the Iraq Inquiry produced a perfect example of enduring, structural change following institutional self-reflection, or one that will preclude future violations of jus ad bellum principles. The process surrounding the Iraq Inquiry does, however, display both requisite stages of institutional learning – critical self-reflection and corresponding deliberate, internal reform at the level of the organisation as a whole – thereby usefully demonstrating what institutional learning can look like. Moreover, and importantly, it shows that through such a process of institutional learning the state can engage with, and potentially internalise, evolving international norms, such as jus ad bellum principles of restraint. As shown by the example of the Iraq Inquiry, the state is able to reflect on its prior conduct in relation to such norms, evaluate the extent to which it has deviated from them, and assess why this has occurred. It is also able to deliberate over and determine the importance of complying with these external expectations in terms of its own goals and interests (whether these be to maintain standing in the international community, achieve consistency between its espoused values and actions, or avoid censure). This scrutiny can then reveal ways of avoiding future deviations by, for example, embedding strategies for ensuring adherence to jus ad bellum principles of restraint in the state’s policies, decision-making procedures, and practices.

5. The impact of AI-enabled systems on institutional learning and the exercise of restraint

Yet, what happens when AI-enabled systems – the ‘thinking machines’ described by Alan Turing (Reference Turing1950) 75 years ago – become an increasingly integral part of the ‘living machines’ of formal organisations in this high-stakes context of resort-to-force decision making? Specifically, how would such systems affect the state’s capacity for institutional learning? And how could this, in turn, affect the state’s propensity to be influenced by the very international norms of restraint that we have suggested AI-enabled decision-support systems could help states navigate when they contemplate war?

5.1 “Borgs in the org”

Not only will AI-driven systems increasingly infiltrate resort-to-force decision making, but they will become ubiquitous in these processes. ML techniques will conceivably be drawn on to process mass data troves into predictive and diagnostic models, rendering complex and dynamic factors into discernible facts that inform reasoned decisions. States will enrol these tools to anticipate acts of aggression across territorial borders (and, potentially, genocidal tendencies within them), to assess the feasibility of courses of action that represent alternatives to war, and to predict both the consequences of the resort to force and risks of inaction. Importantly, when we envision this widespread employment of AI-driven decision-support systems to consider key strategic variables in the resort to force, we are not imagining their intervening in the state-level decision-making process at only one level, at a single point in time – simply offering an isolated recommendation, on demand, before an executive body decides to wage war (or not). Rather, such systems will contribute to analyses at multiple points in time and in the decision-making process, informing the individual and collective deliberations of high-ranking cabinet and administration officials, military leaders, and legislators, and exerting an iterative influence as outputs are considered and incorporated up chains of command and decision-making hierarchies.Footnote 20

In addition to their ubiquity, AI-enabled decision-support systems used in resort-to-force deliberations will be embedded in institutional structures. We should not expect such systems to only take the form of free-standing automated assistants that we can bring into, or exclude from, collective decision making as we see fit. In a cautionary article on the possible future influence of AI on the decision to engage in nuclear war, Ross Andersen (Reference Andersen2023, p. 12) observes that, as yet, ‘[n]o one is inviting AI to formulate grand strategy, or join a meeting of the Joint Chiefs of Staff’. But if that is all that we are looking for, we will miss AI’s admittance into the situation room under stealthier circumstances that require no such invitation. In addition to being able to infiltrate resort-to-force decision making as discrete, automated variations on human assistants and deliberators, offering counsel and counterarguments to state leaders and military, legislative, and executive bodies contemplating armed conflict, they will also become an inextricable part of the institutional structure itself, embedded in processes and procedures, and therefore an unavoidable part of collective decision making. Luciano Floridi (Reference Floridi2023) colourfully describes ‘the emergence of LEGO-like AI systems, working together in a modular and seamless way, with LLMs [large language models] acting as an AI2AI kind of bridge to make them interoperable, as a sort of “confederated AI”’.Footnote 21 This image of a connected, ‘confederated’ association of AI-driven systems gestures towards the way such systems will constitute and transform the formal organisations that rely on them. We label this anticipated ubiquity and embeddedness of AI-enabled systems in the state’s decision-making procedures and organisational structure as the ‘phenomenon of “Borgs in the org”’.

‘Borgs in the org’ is a felicitous phrase borrowed from Griffith et al. (Reference Griffith, Northcraft, Fuller, Hodgkinson and Starbuck2008). ‘The Borg’ are a creation of popular science fiction, depicting a species of ‘cyborgs’ (cybernetic organisms), or amalgamations of artificial and organic life forms.Footnote 22 Our adoption of this allusion highlights the prospect of formal organisations that are increasingly constituted by AI-enabled systems acting almost imperceptibly alongside, and in collaboration with, human actors. But rather than addressing how this ‘machine-human teaming’ either affects individual human decision-makers or influences particular outcomes when it comes to resort-to-force decision making – both crucial questions in themselves – we are interested in how the infiltration of AI-enabled systems will change the nature of the organisation itself, particularly its capacity for institutional learning, and what this could mean for the state’s propensity to exercise restraint.

5.2 Four obstacles to institutional learning

The ubiquitous and embedded AI-enabled systems that we anticipate being employed to aid state-level resort-to-force decision making will undoubtedly augment the capacities that define the institutional agency of the state – capacities to access and process information and reach collective decisions that are irreducible to the positions of some or all of the state’s human constituents.Footnote 23 Yet, in doing so, we suggest that these systems will also transform the state’s organisational and decision-making structures in subtle but significant ways that undermine its potential for reflexivity and corresponding internal reform – in other words, its capacity for institutional learning. We propose four interrelated factors that will contribute to such a transformation.

5.2.1 Disrupted deliberative structures

First, introducing algorithmic systems into complex decisions about whether and when to wage war would disrupt existing deliberative structures. As already noted, in relation to the AI-driven systems that we are interested in here, human actors – acting individually or collectively – would remain the final arbiters of the resort to force. Yet, the decision-making pathways and processes leading to that verdict would change.

This disruption has been heralded by representatives of the AI-based national security consultancy Rhombus Power as a necessary response to the opportunities delivered by evolving ML technologies. ‘In a world where data can help us see and anticipate with unprecedented clarity, we must leverage our new capabilities and empower decision makers by reorganising processes [emphasis added] designed around the inputs of human beings’ (McChrystal & Roy, Reference McChrystal and Roy2023). Our current structures, they suggest, cannot accommodate these far-reaching changes. ‘The U.S. government’s systems for handling information and making national security decisions were perfected for 20th-century situation rooms, where the best brains deliberated face-to-face around a table, not for 21st-century data and network technologies’ (McChrystal & Roy, Reference McChrystal and Roy2023). They conclude that ‘[n]ow we need not just a bigger table and situation room – but their digital versions’ (McChrystal & Roy, Reference McChrystal and Roy2023). Returning to Andersen’s (Reference Andersen2023) quip about AI not being invited to a meeting of the Joint Chiefs of Staff, this declaration reiterates that the issue is not whether we usher a synthetic agent into the situation room, but, rather, that AI-enabled systems will fundamentally change the way decisions are made at a structural level.

As strategic and operational decision-making processes are altered and information flows are re-routed, existing chains of command, along with lines of accountability, will inevitably be circumvented or severed. Such disjunctures may be temporary. Chains of command and lines of accountability may be re-established and re-mapped over time (within the constraints of the three factors to follow). Nevertheless, at least in the short term, as established deliberative structures are upended – or abandoned – so will be the processes of self-reflection and review that accompany them. Such processes are requisite to institutional learning.

5.2.2 Obscured decision-making processes

Obscured decision-making processes further challenge institutional learning when states involve AI in resort-to-force deliberations. Even if we respond to the previous obstacle by re-mapping flows of information and decision-making processes to accommodate algorithmic interventions, and re-establish corresponding chains of command and lines of accountability, a significant obstacle to institutional self-reflection and review remains if decisions cannot be audited. ‘Algorithmic opacity’, meaning simply that ML processes are frequently opaque and unpredictable, would effectively obscure key points on the decision pathway.

It is true that algorithmic outputs have a documentable quality, which means that organisations can point to these outputs as definitive decision-making factors when reflecting on past action or inaction.Footnote 24 Although this may seem like a boon for transparency and organisational self-knowledge, it is, in fact, a veneer of access covering structural obscurity. Algorithmic outputs are notoriously difficult to explain, even for those who develop the models. This is especially the case for models that operate through ML techniques whereby the data processing procedures and parameters of interest are not pre-set but produced through an ongoing interaction between mathematical formulas and dynamic datasets. This is the ‘black box’ problem of algorithmic systems (Burrell, Reference Burrell2016; Burrell & Fourcade, Reference Burrell and Fourcade2021; Pasquale, Reference Pasquale2015).
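The point that outputs are documentable while the process behind them is not can be illustrated with a toy example: the fitted model below produces a loggable prediction, and its learned parameters can also be inspected, but those parameters are matrices of weights rather than a legible chain of reasons. The data and architecture are hypothetical.

```python
# Toy illustration of 'documentable output, opaque process': the prediction
# and the learned parameters can both be logged, but the parameters are weight
# matrices, not a legible chain of reasons. Data and architecture are hypothetical.
from sklearn.neural_network import MLPClassifier

X = [[0, 1], [1, 0], [1, 1], [0, 0]]
y = [1, 0, 1, 0]
model = MLPClassifier(hidden_layer_sizes=(4,), max_iter=3000, random_state=0)
model.fit(X, y)

print("Documented output:", model.predict([[1, 1]]))  # an auditable artefact
for layer, weights in enumerate(model.coefs_):
    # What 'explains' the output: arrays of fitted numbers, not reasons.
    print(f"Layer {layer} weight matrix, shape {weights.shape}:")
    print(weights.round(2))
```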

Of course, states’ deliberations on the resort to force are hardly transparent even without algorithmic interventions – a point highlighted by Ashley Deeks (Reference Deeks2024b, Reference Deeks2025a) in her incisive account of the ‘double black box’ of AI-enabled national security decision making. Nevertheless, while aspects of high-level decision making may have always been selectively accessible, states could still execute institutional oversight, reflection, and review – as demonstrated by the Iraq Inquiry. This potential is diminished if key elements of the decision-making process are delegated to AI.

When organisations seek to reflect upon their past decisions about the resort to force, algorithmic indicators may be at once an outsized factor and an impenetrable data point. This not only impedes the reflexive element of institutional learning, but, in doing so, also poses challenges to subsequent adjustment, as it is unclear which factors in the decision-making process went well or poorly and thereby difficult to determine whether and how to implement remedial organisational change. In sum, institutional learning requires knowledge of how courses of action (and inaction) were determined and justified. The very nature and inscrutability of ML models can create barriers to such knowledge.

5.2.3 Institutional automation bias

A third obstacle to institutional learning that threatens to accompany AI-assisted resort-to-force decision making is what we call institutional automation bias. ‘Automation bias’ refers to our human tendency to ‘disregard or not search for contradictory information in light of a computer-generated solution’ (Cummings, Reference Cummings2006, 25; see also Mosier & Manzey, Reference Mosier, Manzey, Mosier and Manzoor2019; Skitka et al. Reference Skitka, Mosier and Burdick1999). It is generally understood in terms of the interactions between individual human actors and machines, although some studies have also demonstrated automation bias in teams (see e.g., Mosier & Fischer, Reference Mosier and Fischer2010; Mosier et al. Reference Mosier, Skitka, Dunbar and McDonnell2001; Skitka et al. Reference Skitka, Mosier, Burdick and Rosenblatt2000). Our assumption of institutional agency leads us to suggest that it is possible to take this tendency to yet another level of analysis and talk about automation bias of the organisation as a whole. There are two ways that this might be conceived.

Returning to our point about the ubiquity of AI-enabled systems that will influence resort-to-force decision making, deference to machine-generated outputs will necessarily affect individuals (and teams) at multiple points in decision-making structures. This reality suggests a compounding effect with respect to automation bias as AI-generated analyses and recommendations iteratively feed into deliberative processes as information travels up chains of command and decision-making hierarchies (Erskine, Reference Erskine2024b, p. 181). This would certainly impact the state’s choices when it comes to the resort to force. Nevertheless, it would be an amplified result of what remain instances of individual automation bias.

We might, alternatively, consider automation bias at the genuinely corporate level of the state – in other words, in a way not reducible to the machine-deference that afflicts its individual constituents. This would transpire if our human tendency to defer, without question, to computer-generated outputs becomes embedded in the state’s decision-making procedures and culture with respect to deliberations over the resort to force. This could happen, for example, through formal procedures that reduce the time allocated for checks on particular automated predictions or diagnoses that feed into executive decision making, thereby contributing to a culture in which such checks are viewed as redundant and, perhaps, removed altogether.Footnote 25

Both the weak and strong senses of institutional automation bias would eschew the robust processes of review requisite to institutional learning. Unlike the obstacle posed by algorithmic opacity, this eschewal would not be technical in nature. Rather, critical reflection would be impeded by the misperception that AI-driven predictions and recommendations do not require scrutiny, leading to the impossibility of such review when the processes necessary for it become structurally precluded. Even when institutional learning is prompted by external factors such as international normative appeals and censure in response to alleged violations of moral and legal responsibilities, as is our focus here, it still relies on internal processes of review and reform. Institutional learning is necessarily reflexive, not merely reactive (Erskine, Reference Erskine2020, p. 507). If such internal processes of review and reform are deemed unnecessary and structurally overridden, institutional learning becomes impossible.

5.2.4 Magnified institutional conservatism

The fourth and final obstacle to institutional learning that would accompany the infiltration of AI-enabled systems into state-level resort-to-force decision making is what we call magnified institutional conservatism. Procedures and practices, rules and rituals are institutionalised in formal organisations. As such, these bodies already display some resistance to change. In other words, there is an inherent conservatism when it comes to formal organisations. This can be beneficial. With this resistance to change comes continuity, some degree of stability, and a corporate identity that persists over time despite the arrivals and departures of individual members. Change, then, takes a concerted effort, formal processes of review and reform, and a forward-looking vision of how the organisation may better achieve its goals (whatever these may be). These are all elements of institutional learning. However, when ML-generated analyses are incorporated into the decision making of formal organisations such as states, resistance to change is magnified.

ML-generated analyses are necessarily backward-looking. They reflect simplified, curated, and archived accounts of what we have heretofore known, valued, and acted upon. This stems from a basic function of ML processes, whereby the present and future are interpreted through, and predicted from, training data derived from the past (Marcinowski, Reference Marcinowski2022). This, in turn, can have a stagnating effect on the state’s capacity for change. Deliberate change is a fundamental part of institutional learning. ML processes stall such change, acting like a magnet to the past that impedes alternative futures. Shannon Vallor’s (Reference Vallor2024) metaphor of ‘the AI mirror’ is useful here: organisations that rely on ML-driven systems become entrained to datafied versions of social forces and dynamics that are not only moored to historical patterns but reproduce those patterns in their thinnest and most basic form. Algorithmic outputs are modelled on existing logics and assumptions, erecting barriers against novel interpretations, the recognition of new circumstances, and proposals for action that have no historical referent. This renders avenues towards change algorithmically invisible, illegible, and thereby inconceivable.
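The minimal sketch below is our own illustration of this backward-looking quality, not a depiction of any system actually used in resort-to-force deliberations; the features, training data, and labels are entirely hypothetical, and the classifier (built with the widely available scikit-learn library) stands in for any ML pipeline. The point is simply that a model trained on a datafied record of past cases can only describe a new situation in terms of that record, mapping even a genuinely novel circumstance onto a historical pattern.

```python
# Minimal, purely illustrative sketch (ours): a classifier trained on a datafied
# record of past cases can only 'see' a new situation through that record.
# Features, data, and labels are hypothetical; no real system is depicted.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical historical record: each row notes whether a past crisis involved
# (troop build-up, prior incursions); each label records the response taken
# (0 = restraint, 1 = force).
X_past = [[0, 0], [1, 0], [0, 1], [1, 1]]
y_past = [0, 0, 0, 1]

model = DecisionTreeClassifier(random_state=0).fit(X_past, y_past)

# A genuinely novel circumstance (say, an unprecedented cyber-enabled threat)
# has no feature of its own: once 'datafied' into the old feature space it is
# indistinguishable from a past case, so the model simply reproduces the
# historical response.
novel_case = [[1, 1]]
print(model.predict(novel_case))  # -> [1], the pattern inherited from the past
```

Nothing in such a pipeline can register possibilities, values, or courses of action absent from the training record – which is precisely the sense in which reliance on these outputs anchors deliberation to the past.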

5.3 The risk of institutional atrophy

In sum, the state’s anticipated reliance on AI-enabled systems to fulfil particular steps in resort-to-force decision making will lead to a number of interlinked transformations. Specifically, interventions by these decision-support tools will be accompanied by (at least temporarily) disrupted deliberative structures and chains of command, the occlusion of crucial steps in the decision-making process, deference to computer-generated outputs institutionalised in the state’s practices and procedures, and a particular variation on conservatism that ties the state’s planned courses of action to what has already happened and what states have traditionally done. Together, these factors undermine the state’s ability to reflect on and understand its decisions, corresponding actions, and their consequences, and to evaluate and correct the policies, procedures, and practices that led to these choices, actions, and outcomes. In other words, these factors impede the state’s potential for institutional learning. We might think of this as a form of ‘institutional atrophy’: a gradual decline and weakening, through disuse and neglect, of particular capacities that accompany institutional agency. In this case, the state’s capacities for critical self-reflection and ensuing internal reform risk deterioration due to the supporting role given to AI-driven systems.

As we are suggesting that this institutional atrophy poses a risk to compliance with international norms of restraint, it is necessary to return to the relationship between institutional learning and the state’s responsiveness to such norms. We have already suggested that the process of institutional learning – encompassing self-reflection and internal reform – allows the state to engage with, and potentially internalise, evolving international norms, such as jus ad bellum principles of restraint. It is important to acknowledge that even though this capacity for institutional learning sensitises the state to both external appeals to such standards and censure for their violation, this does not guarantee that the state will embrace the norms in question. Instead, the state could respond to these external prompts with critical self-reflection, a justification of its deviation from prevailing norms, and a reinforcement of its existing policies, procedures, and practices.

We make the modest claim that, although the state’s capacity for institutional learning is not a sufficient condition for internalising norms of restraint, it is a necessary one. This status as a requisite condition cautions against impeding institutional learning if we hope to foster adherence to such norms. Moreover, even in those instances in which deviations from external standards are defended rather than corrected as a result of institutional learning, this capacity nevertheless allows the state to engage with, respond to, be influenced by, and even shape international norms such as jus ad bellum responsibilities of restraint. (After all, well-defended deviations can contribute to new international norms.) The institutional atrophy that we have outlined here would prevent states from both adopting and contributing to evolving international standards of restraint in the resort to force – with potentially catastrophic consequences.

6. Conclusion

In an important sense, states themselves could already be considered the metaphorical ‘Borgs’ alluded to in the title of this article – even before the anticipated infiltration by AI-enabled systems. After all, the state is a ‘living machine’ (to return to Weber’s vivid description of bureaucratic organisation), constituted by both individual human actors and the institutional structures that frame and channel, coordinate and transform their decisions and actions. This combination of a human element and a machinic element (in the sense of organisational apparatus) defines the institutional agency of the state. Yet our borrowed phrase, ‘Borgs in the org’, refers to something distinct, something more difficult to theorise (although we need to try), and something new. We use it to refer to the proliferation of specifically AI-driven systems that aid human decision making within formal organisations like the state. This pervasive presence of AI-driven systems both changes (enhances and limits) the human element of the state and (as is our focus here) alters its very machinery in the sense of its organisational and decision-making structures. By trying to conceptualise the latter in the context of the state’s decision to go to war, we have begun to consider the implications of this ‘phenomenon of “Borgs in the org”’ in practice. We have also shed light on a hitherto unexplored risk that AI-enabled systems pose for the exercise of restraint in war.

Given the vast reliance on AI-enabled decision-support systems in almost every domain of human judgement and deliberation, it appears inevitable that such systems will increasingly infiltrate state-level decision making on the resort to force as well (beyond the areas of intelligence collection and analysis, where they are already making an indirect contribution). Although these tools might be used to help states navigate the international norms of restraint that govern the resort to force, we have suggested that this development could also, ironically, undermine the state’s propensity to abide by them. Specifically, we have argued that this development threatens to throw a spanner in the machinery of the state when it comes to the integrated capacities for self-reflection and reform that can be prompted by external appeals and censure.

Weaving AI-driven decision-support systems into the existing structural fabric of states – thereby creating what we have called the ‘phenomenon of “Borgs in the org”’ – threatens to weaken states’ learning capacities through four interrelated factors. Namely, we have proposed that aspects of the state’s deliberative processes will variously be disrupted, obscured, excused from scrutiny, and trapped by the past. This ‘institutional atrophy’, in turn, risks preventing the state from engaging with, and potentially internalising, international norms of restraint, and from responding to censure in the wake of alleged transgressions. In short, it will dull the state’s responsiveness to processes of socialisation within international society, thereby increasing the likelihood of the state’s non-compliance with international norms of restraint.

Such non-compliance on the part of a single state thus afflicted would not, by itself, affect the strength or standing of these international norms of restraint – and would even be compatible with their fervent espousal by individual actors within that state. Nevertheless, widespread institutional atrophy amongst members of the society of states would involve the erosion not only of discrete states’ responsiveness to external social pressure and censure, but also of the intersubjective understanding and dynamic consensus on shared values that define, and give power to, international norms. In this sense, the institutional atrophy that we have identified here also has the potential to lead to a form of international norm decay – along with the broader geopolitical implications that would accompany it.Footnote 26

Our aim in this article has been to provide an outline of an unacknowledged risk that we should expect, and prepare for, as AI-driven tools increasingly and more directly guide and support the state’s decision making on the resort to force. We have also sought to provide a unique conceptual framework for understanding and interrogating this proposed risk through our introduction of the following: the ‘phenomenon of “Borgs in the org”’; the process of ‘institutional learning’ that we suggest is so important to the state’s navigation of international norms of restraint; factors that impede institutional learning when the state is constituted and transformed by AI-enabled decision-support systems; and the ensuing problem of ‘institutional atrophy’, which diminishes the state’s sensitivity to external appeals for change and charges of wrongdoing. Our provocation raises a host of questions, which we hope will be addressed in future work with the aid of this preliminary conceptual framework. Perhaps most urgently: Are there structural, social, and technical ways of protecting, or reconstructing, processes of self-reflection and internal reform within the state that can accompany its anticipated reliance on AI-driven systems, so that the state’s capacity for institutional learning endures?

Funding statement

This project was supported by the Australian Government through a Strategic Policy Grant from the Department of Defence.

Competing interests

The authors declare none.

Toni Erskine is Professor of International Politics in the Coral Bell School of Asia Pacific Affairs at the Australian National University (ANU). She is the recipient of the International Studies Association’s 2024-25 Distinguished Scholar Award in International Ethics, an Associate Fellow of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, and Chief Investigator of the ‘Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making’ Research Project, funded by the Australian Government through a Strategic Policy Grant from the Department of Defence.

Jenny L. Davis is the Gertrude Conaway Vanderbilt Endowed Chair and Professor of Sociology at Vanderbilt University, an Honorary Professor of Sociology at the Australian National University (ANU), and a 2024-2026 Non-Resident Fellow at the Center for Democracy and Technology.

Footnotes

This is one of fourteen articles published as part of the Cambridge Forum on AI: Law and Governance Special Issue, AI and the Decision to Go to War, guest edited by Toni Erskine and Steven E. Miller. Earlier versions of this article were presented at: the 2nd Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making Workshop at The Australian National University in Canberra, Australia, 24 July 2024; the International Ethics Research Group meeting in Canberra, Australia, 12 February 2025; the British International Studies Association (BISA) conference in Belfast, UK, 20 June 2025; the Global Politics of AI Research Seminar Series at the Centre for the Future of Intelligence (CFI), University of Cambridge, UK, 26 June 2025; and, the Oceanic Conference on International Studies (OCIS) in Sydney, Australia, 10 July 2025. We are very grateful to the other workshop participants and the audiences at each presentation for their valuable feedback. We would also like to thank: Ashley Deeks, Ned Dobos, John Dryzek, Luke Glanville, Cian O’Driscoll, Jonathan Pickering, Mitja Sienknecht, Ana Tanasoca, Ben Zala, and two anonymous reviewers for their incisive written comments on previous iterations of this argument; Syed Faraz, Cecilia Jacob, Paul Lushenko, Steven E. Miller, and Osonde Osoba for valuable discussions on specific points; Tuukka Kaikkonen and Hanh Nguyen for support with formatting and references; and the Australian Department of Defence for generously funding this work through a Strategic Policy Grant. The phrase “Borgs in the org” is borrowed from Griffith et al. (2008) and explained below.

1 Pioneering exceptions are Deeks et al. (Reference Deeks, Lubell and Murray2019), Wong et al. (Reference Wong, Yurchak, Button, Frank, Laird, Osoba and Bae2020), and Horowitz and Lin-Greenberg (Reference Horowitz and Lin-Greenberg2022).

2 This distinction is important, including in relation to Paul Lushenko’s (Reference Lushenko2025) research findings that the use of AI in strategic decision making does not negatively affect the commitment of elite military officers to norms of restraint. Lushenko notes that these findings contradict claims that the use of AI-driven systems in this context will ‘erode’ norms of restraint. This may be true of claims that suggest that reliance on such systems would negatively impact the officers’ belief that these norms are significant and binding. However, notably, Lushenko’s findings are not at odds with the two risks to restraint proposed in the previous paper (Erskine, Reference Erskine2024b). In both cases, the erosion of international norms of restraint being hypothesised would be a consequence of inadvertent non-compliance rather than individuals’ reduced commitment to them. Such non-compliance would be perfectly compatible with an explicitly articulated and genuinely internalised commitment to international norms of restraint by the individuals Lushenko has surveyed. Likewise, we suggest in what follows that a third case is also compatible with Lushenko’s findings as our proposal is that the use of AI-enabled decision-support systems risks affecting institutional responsiveness, rather than individual commitment, to such norms.

3 To a limited extent, this anticipated future has already arrived, with such AI-driven tools indirectly influencing state-level decisions on the resort to force by contributing to intelligence collection and analysis (Deeks et al., Reference Deeks, Lubell and Murray2019, pp. 2, 6; Suchman, Reference Suchman2023; Logan, Reference Logan2024).

4 By ‘AI general’ we mean an autonomous system that could substitute for political and military leaders by both making and implementing strategic decisions. We borrow this label from Lushenko (Reference Lushenko2023). See also Hunter and Bowen (Reference Hunter and Bowen2023). Sparrow and Henschke (Reference Sparrow and Henschke2023) invoke a comparable metaphor in the ‘minotaur’ – AI that would substitute for humans in executive roles in the context of warfighting. While their focus is future ‘unmanned-manned teams’ on the battlefield, their model could also describe handing over command to machines for strategic, resort-to-force decision making.

5 This is not an exhaustive list. Jus ad bellum principles within the just war tradition also include the principles of right intention and legitimate authority. However, as these address the motivation and status of the entity that would wage war, they are not conditions that the predictive and diagnostic functions of AI-enabled decision-support systems would be required, or suited, to help analyse in the context of the state’s resort-to-force decision making. Rather, our focus, as discussed below, is on those jus ad bellum principles that demand either complex ex ante judgements under conditions of uncertainty or the interpretation of multiple data points to determine impending or already-occurring actions.

6 These principles are generally identified in the following cases, respectively: the International Court of Justice (ICJ) judgement on the case Military and Paramilitary Activities in and against Nicaragua (Nicaragua v. United States of America) (1986) and the ICJ’s Advisory Opinion on the “Legality of the Threat or Use of Nuclear Weapons” (1996). For discussions, see Kreß and Lawless (Reference Kreß and Lawless2020) and O’Meara (Reference O’Meara2021).

7 Provisions are also in place for the multilateral use of force when the UN Security Council recognizes a threat to international peace and security (United Nations, 1945, Chapter VII).

8 Criteria for permissible anticipatory self-defence are often taken from the statement of Daniel Webster in the Caroline case of 1842, according to which there must be “a necessity of self-defence …instant, overwhelming, leaving no choice of means, and no moment of deliberation” (quoted in Walzer, 1977/Reference Walzer2015, p. 74). For an overview of anticipatory self-defence under international law, see Deeks (Reference Deeks and Weller2015).

9 For example, UN resolution 1973 invoked the ‘responsibility to protect’ in providing legal authorization for the 2011 intervention in Libya. For incisive historical overviews of both normative arguments and state practice surrounding ‘humanitarian intervention’ and ‘the responsibility to protect’, see Glanville (Reference Glanville2014, pp. 171–212; Reference Glanville2021, pp. 65–71, 133–167). We are indebted to Luke Glanville for illuminating discussions on these themes.

10 Although not our focus here, intergovernmental organisations (IGOs) such as the North Atlantic Treaty Organization (NATO) and the United Nations (UN) are also collective ‘moral agents of restraint’ in war.

11 We are grateful to Syed Faraz, business development manager at Rhombus, for alerting us to this case and discussing the technology behind it with the first author (Erskine) during the Responsible AI in the Military Domain (REAIM) Summit in Seoul, South Korea, September 2024.

12 Some IGOs, such as NATO and the UN, can also wage war. In the case of the UN, for example, as well as authorising military action led by member states or regional organisations in cases of collective self-defence, intervention for human protection purposes, or peace operations, the UN can also engage in military action in the sense of being in direct control of troops and tactics in the context of UN peace operations. In both cases, the UN is charged with making the decision to resort to force. While IGOs are beyond the scope of this article, we suggest that the notions of ‘institutional agency’ and ‘institutional learning’ that we explore here also apply to IGOs such as the UN (see Erskine, Reference Erskine2004, Reference Erskine, Bellamy and Dunne2016, Reference Erskine2020) – as do the risks that we argue are associated with reliance on AI-driven systems.

13 This section draws on the account of ‘institutional agency’ and specifically ‘institutional moral agency’ articulated in Erskine (inter alia Reference Erskine2014, Reference Erskine and Miller2024a).

14 For philosophical works that defend the ‘corporate’, ‘institutional’, or ‘group’ agency of formal organisations such as states, see for example: French (Reference French1984), O’Neill (Reference O’Neill1986), Erskine (Reference Erskine2001), and List and Pettit (Reference List and Pettit2011).

15 Emphasis in the original.

16 There is a conceptual distinction that can be made between agency and moral agency that is not necessary for our argument here. While bodies are ‘agents’ if they are capable of some degree of purposive action, as noted above, the ‘moral agents’ to which any coherent judgement of specifically moral responsibility must be directed necessarily clear a higher bar by also possessing more sophisticated capacities for deliberation and self-reflection (see Erskine, Reference Erskine2024a, pp. 538-9, n.14). The structured institutions that we address here meet the qualifying criteria for both.

17 A number of important studies within the discipline of International Relations (IR), particularly prominent in the 1990s, make explicit reference to ‘learning’ in relation to corporate actors, including states and IGOs. See, for example: Ikenberry (Reference Ikenberry, Suleiman and Waterbury1990), Haas (Reference Haas1990), Levy (Reference Levy1994), Finnemore (Reference Finnemore1996), Wendt (Reference Wendt1999), and, more recently, Adler (Reference Adler2019). However, these treatments do not account for the integrated process of critical self-reflection and deliberate internal change at the level of the organisation as a whole that we see as necessary for genuinely institutional learning. For our purposes here, we follow the account of ‘institutional learning’ proposed in Erskine (Reference Erskine2020).

18 We are very grateful to Jonathan Pickering for highlighting the need to make this distinction explicit.

19 These external prompts are reminiscent of the practices of ‘socialization’ and ‘moral proselytizing’ theorised, respectively, by Finnemore (Reference Finnemore1996, p. 5) and Price (Reference Price1998, p. 620) in the context of their pioneering studies of norm promotion and diffusion within IR. They provide accounts of how states’ interests, values, and behaviour can be shaped by the lobbying efforts of other actors within international society. Our focus here is on the internal process of self-reflection and reform that these practices have the potential to generate.

20 In highlighting ‘high-ranking cabinet and administration officials’, ‘military leaders’, and ‘legislators’, we are following Elizabeth Saunders’ (Reference Saunders2024, p. 7) identification of these three groups of elected and unelected ‘elites’ as the most likely to have ‘systematic influence’ within a democracy when it comes to decisions on the resort to force. Saunders (pp. 4–5) defines these ‘elites’ as ‘those with access to information and the decision-making process’ with respect to decisions on war and peace – precisely those whose deliberations, we suggest, would likely be informed by AI-enabled decision-support systems. See also Sienknecht (Reference Sienknecht2025) for an overview of decision-making processes on the resort to force within democracies.

21 Floridi (Reference Floridi2023, footnote 10) credits Vincent Wang with the phrase ‘confederated AI’.

22 ‘The Borg’ were introduced as the main alien antagonist in the television series Star Trek: The Next Generation.

23 In other words, this integration of AI-enabled systems does not, we suggest, undermine the institutional agency of the state. To the contrary, these systems contribute incrementally to a structure that makes state action even more difficult to explain solely in terms of the behaviour of individual human actors. Although our focus here is on how such AI-enabled systems thereby constitute and transform the state in ways that affect other capacities, what the ubiquity and embeddedness of AI-enabled systems in formal organisations (the ‘phenomenon of “Borgs in the org”’) means for philosophical accounts of institutional/group/corporate agency warrants attention in future work.

24 We are grateful to Osonde Osoba for highlighting this point.

25 Such an example of institutional automation bias could involve formal procedures for human oversight of AI-driven early-warning threat assessments of incoming nuclear attacks, of the type discussed by Ben Zala (Reference Zala2024, pp. 157-8). In such a case, the time granted to a human operator to check AI-generated indications of an incoming strike – before a warning is relayed to the leader or executive body charged with deciding on a pre-emptive resort to force – may be increasingly constrained until effective oversight within this intelligence gathering, analysis, and communication chain becomes impossible.

26 This potential collateral effect of (widespread) institutional atrophy on international norms is distinct from, but comparable to, the incidental damage to international norms posed by the ‘risk of misplaced responsibility’ and the ‘risk of predicted permissibility’ referred to at the beginning of this article. See Erskine (Reference Erskine2024b, pp. 182, 184).

References

Adler, E. (2019). World ordering: A social theory of cognitive evolution. Cambridge University Press. https://doi.org/10.1017/9781108325615
Ajunwa, I. (2023). The quantified worker: Law and technology in the modern workplace. Cambridge University Press. https://doi.org/10.1017/9781316888681
Andersen, R. (2023, June). Never give Artificial Intelligence the nuclear codes. The Atlantic, 11–15.
Angwin, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, accessed January 10, 2025.
Atkins, R. (2025, September 12). “Israel’s war in Gaza and proportionality”. BBC. Available at: https://www.bbc.com/news/articles/cr5r76e127do, accessed 14 September 2025.
Bull, H. (1977). The anarchical society: A study of order in world politics. Macmillan.
Burrell, J. (2016). How the machine “thinks”: Understanding opacity in machine learning algorithms. Big Data and Society, 3(1). https://doi.org/10.1177/2053951715622512
Burrell, J., & Fourcade, M. (2021). The society of algorithms. Annual Review of Sociology, 47(1), 213–237. https://doi.org/10.1146/annurev-soc-090820-020800
Byers, M. (2015). International law and the responsibility to protect. In Thakur, R. & Maley, W. (Eds.), Theorising the Responsibility to Protect (pp. 101–124). Cambridge University Press. https://doi.org/10.1017/CBO9781139644518.006
Calvano, E., Calzolari, G., Denicolò, V., & Pastorello, S. (2020). Artificial intelligence, algorithmic pricing, and collusion. American Economic Review, 110(10), 3267–3297. https://doi.org/10.1257/aer.20190623
Chilcot, S. J. (2016). The Report of the Iraq Inquiry: Executive Summary (HC 264). Available at: https://assets.publishing.service.gov.uk/media/5a80f42ced915d74e6231626/The_Report_of_the_Iraq_Inquiry_-_Executive_Summary.pdf, accessed 2 October 2024.
Cummings, M. L. (2006). Automation and accountability in decision support system interface design. Journal of Technology Studies, 32(1), 23–31.
Davis, J. L. (2024). Elevating humanism in high-stakes automation: Experts-in-the-loop and resort-to-force decision making. Australian Journal of International Affairs, 78(2), 200–209. https://doi.org/10.1080/10357718.2024.2328293
Deeks, A. (2015). Taming the doctrine of pre-emption. In Weller, M. (Ed.), The Oxford Handbook of the Use of Force in International Law (pp. 661–678). Oxford University Press.
Deeks, A. (2024a). Delegating war initiation to machines. Australian Journal of International Affairs, 78(2), 148–153. https://doi.org/10.1080/10357718.2024.2327375
Deeks, A. (2024b, August 14). The double black box: AI inside the national security ecosystem. Just Security. https://www.justsecurity.org/98555/the-double-black-box-ai-inside-the-national-security-ecosystem/, accessed November 1, 2024.
Deeks, A. (2025a). The Double Black Box: National Security, Artificial Intelligence, and the Struggle for Democratic Accountability. Oxford University Press.
Deeks, A. (2025b). State responsibility for AI mistakes in the resort to force. AI and the Decision to Go to War Special Issue, Cambridge Forum on AI: Law and Governance, 1, 1–12.
Deeks, A., Lubell, N., & Murray, D. (2019). Machine learning, artificial intelligence, and the use of force by states. Journal of National Security Law and Policy, 10(1), 1–25.
Erskine, T. (2001). Assigning responsibilities to institutional moral agents: The case of states and quasi-states. Ethics & International Affairs, 15(2), 67–85. https://doi.org/10.1111/j.1747-7093.2001.tb00359.x
Erskine, T. (2004). “Blood on the UN’s hands”? Assigning duties and apportioning blame to an intergovernmental organisation. Global Society, 18(1), 21–42. https://doi.org/10.1080/1360082032000173554
Erskine, T. (2010). Kicking bodies and damning souls: The danger of harming “innocent” individuals while punishing “delinquent” states. Ethics & International Affairs, 24(3), 261–285. https://doi.org/10.1111/j.1747-7093.2010.00267.x
Erskine, T. (2014). Coalitions of the willing and responsibilities to protect: Informal associations, enhanced capacities, and shared moral burdens. Ethics & International Affairs, 28(1), 115–145. https://doi.org/10.1017/S0892679414000094
Erskine, T. (2016). Moral agents of protection and supplementary responsibilities to protect. In Bellamy, A. J. & Dunne, T. (Eds.), The Oxford handbook of the responsibility to protect (pp. 167–185). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780198753841.013.10
Erskine, T. (2020). Intergovernmental organizations and the possibility of institutional learning: Self-reflection and internal reform in the wake of moral failure. Ethics & International Affairs, 34(4), 503–520. https://doi.org/10.1017/S0892679420000647
Erskine, T. (2024a). AI and the future of IR: Disentangling flesh-and-blood, institutional, and synthetic moral agency in world politics. Review of International Studies, 50(3), 534–559. https://doi.org/10.1017/S0260210524000202
Erskine, T. (2024b). Before algorithmic Armageddon: Anticipating immediate risks to restraint when AI infiltrates decisions to wage war. Australian Journal of International Affairs, 78(2), 175–190. https://doi.org/10.1080/10357718.2024.2345636
Erskine, T., & Carr, M. (2016). Beyond “quasi-norms”: The challenges and potential of engaging with norms in cyberspace. In Osula, A.-M. & Röigas, H. (Eds.), International cyber norms: Legal, policy & industry perspectives (pp. 87–109). NATO CCD COE Publications. https://ccdcoe.org/library/publications/international-cyber-norms-legal-policy-industry-perspectives/
Erskine, T., & Miller, S. E. (2024a). AI and the decision to go to war: Future risks and opportunities. Australian Journal of International Affairs, 78(2), 135–147. https://doi.org/10.1080/10357718.2024.2349598
Erskine, T., & Miller, S. E. (Eds.) (2024b). Anticipating the future of war: AI, automated systems, and resort-to-force decision making [Special issue]. Australian Journal of International Affairs, 78(2). https://doi.org/10.1080/10357718.2024.2349598
EU says “foundations” laid for Ukraine war tribunal. (2025, February 5). The Defense Post. Available at: https://thedefensepost.com/2025/02/05/eu-ukraine-war-tribunal/, accessed 1 June 2025.
Finnemore, M. (1996). National interests in international society. Cornell University Press. https://doi.org/10.7591/9781501707384
Floridi, L. (2023). AI as agency without intelligence: On ChatGPT, large language models, and other generative models. Philosophy and Technology, 36(15). https://doi.org/10.1007/s13347-023-00621-y
French, P. A. (1984). Collective and corporate responsibility. Columbia University Press. https://doi.org/10.7312/fren90672
Frost, M. (1996). Ethics in international relations: A constitutive theory. Cambridge University Press.
Glanville, L. (2014). Sovereignty & the responsibility to protect: A new history. University of Chicago Press.
Glanville, L. (2021). Sharing responsibility: The history and future of protection from atrocities. Princeton University Press.
Griffith, T. L., Northcraft, G. B., & Fuller, M. A. (2008). Borgs in the org? Organizational decision making and technology. In Hodgkinson, G. P. & Starbuck, W. H. (Eds.), The Oxford handbook of organizational decision making (pp. 97–115). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199290468.003.0005
Haas, E. B. (1990). When knowledge is power: Three models of change in international organizations. University of California Press. http://ark.cdlib.org/ark:/13030/ft6489p0mp/
Haque, A. A. (2023, November 6). Enough: Self-defense and proportionality in the Israel-Hamas conflict. Just Security. Available at: https://www.justsecurity.org/89960/enough-self-defense-and-proportionality-in-the-israel-hamas-conflict/, accessed 1 June 2025.
Horowitz, M. C., & Lin-Greenberg, E. (2022). Algorithms and influence: Artificial intelligence and crisis decision-making. International Studies Quarterly, 66(4), sqac069. https://doi.org/10.1093/isq/sqac069
How America built an AI tool to predict Taliban attacks. (2024, July 31). The Economist. https://www.economist.com/science-and-technology/2024/07/31/how-america-built-an-ai-tool-to-predict-taliban-attacks, accessed November 12, 2024.
Hunter, C., & Bowen, B. E. (2023). We’ll never have a model of an AI major-general: Artificial intelligence, command decisions, and kitsch visions of war. Journal of Strategic Studies, 47(1), 116–146. https://doi.org/10.1080/01402390.2023.2241648
Ikenberry, G. J. (1990). The international spread of privatization policies: Inducements, learning, and “policy bandwagoning”. In Suleiman, E. N. & Waterbury, J. (Eds.), The political economy of public sector reform and privatization (pp. 88–110). Westview. https://doi.org/10.4324/9780429313707-4
International Commission on Intervention and State Sovereignty (2001). The responsibility to protect: Report of the International Commission on Intervention and State Sovereignty. International Development Research Centre. https://idrc-crdi.ca/en/book/responsibility-protect-report-international-commission-intervention-and-state-sovereignty
Johnson, J. T. (1981). Just war tradition and the restraint of war: A moral and historical inquiry. Princeton University Press.
Keohane, R. O. (1988). International institutions: Two approaches. International Studies Quarterly, 32(4), 379–396. https://doi.org/10.2307/2600589
Kim, E., Bryant, D., Srikanth, D., & Howard, A. (2021). Age bias in emotion detection: An analysis of facial emotion recognition performance on young, middle-aged, and older adults. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 638–644. https://doi.org/10.1145/3461702.3462609
Kreß, C., & Lawless, R. (Eds.). (2020). Necessity and proportionality in international peace and security law. Oxford University Press. https://doi.org/10.1093/oso/9780197537374.002.0008
Kumar, V., Saheb, S. S., Preeti, Ghayas, A., Kumari, S., Chandel, J. K., … Kumar, S. (2023). AI-based hybrid models for predicting loan risk in the banking sector. Big Data Mining and Analytics, 6(4), 478–490. https://doi.org/10.26599/BDMA.2022.9020037
Lazar, S. (2020). War. In Zalta, E. N. (Ed.), The Stanford encyclopedia of philosophy (Spring 2020). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/spr2020/entries/war/
Legality of the threat or use of nuclear weapons (I. C. J. Reports, pp. 226–267). (1996). [Advisory opinion]. International Court of Justice. http://www.worldlii.org/cgi-bin/download.cgi/cgi-bin/download.cgi/download/int/cases/ICJ/1996/3.pdf
Levy, J. S. (1994). Learning and foreign policy: Sweeping a conceptual minefield. International Organization, 48(2), 279–312.
List, C., & Pettit, P. (2011). Group agency: The possibility, design, and status of corporate agents. Oxford University Press.
Logan, S. (2024). Tell me what you don’t know: Large language models and the pathologies of intelligence analysis. Australian Journal of International Affairs, 78(2), 220–228. https://doi.org/10.1080/10357718.2024.2331733
Lushenko, P. (2023, November 29). AI and the future of warfare: US military officers can approve the use of AI-enhanced military technologies that they don’t trust. That’s a serious problem. Bulletin of the Atomic Scientists. https://thebulletin.org/2023/11/ai-and-the-future-of-warfare-the-troubling-evidence-from-the-us-military/
Lushenko, P. (2025). AI, trust, and the war-room: Evidence from a conjoint experiment in the US military. AI and the Decision to Go to War Special Issue, Cambridge Forum on AI: Law and Governance, 1, 1–24.
Marcinowski, M. (2022). Artificial intelligence or the ultimate tool for conservatism. DANUBE, 13(1), 1–12. https://doi.org/10.2478/danb-2022-0001
McChrystal, S., & Roy, A. (2023, June 19). AI has entered the situation room. Foreign Policy. https://foreignpolicy.com/2023/06/19/ai-artificial-intelligence-national-security-foreign-policy-threats-prediction/, accessed October 10, 2024.
Military and paramilitary activities in and against Nicaragua (Nicaragua v. United States of America) (I. C. J. Reports, pp. 14–150). (1986). [Merits, judgment]. International Court of Justice. https://www.icj-cij.org/sites/default/files/case-related/70/070-19860627-JUD-01-00-EN.pdf
Mosier, K. L., & Fischer, U. M. (2010). Judgment and decision making by individuals and teams: Issues, models, and applications. Reviews of Human Factors and Ergonomics, 6(1), 198–256. https://doi.org/10.1518/155723410X12849346788822
Mosier, K. L., & Manzey, D. (2019). Humans and automated decision aids: A match made in heaven? In Mosier, K. L. & Manzoor, D. (Eds.), Human performance in automated and autonomous systems (pp. 19–42). CRC Press. https://doi.org/10.14279/depositonce-10992
Mosier, K. L., Skitka, L. J., Dunbar, M., & McDonnell, L. (2001). Aircrews and automation bias: The advantages of teamwork? The International Journal of Aviation Psychology, 11(1), 1–14. https://doi.org/10.1207/S15327108IJAP1101_1
Nigam, A., Pasricha, R., Singh, T., & Churi, P. (2021). A systematic review on AI-based proctoring systems: Past, present and future. Education and Information Technologies, 26(5), 6421–6445. https://doi.org/10.1007/s10639-021-10597-x
O’Driscoll, C. (2008). Renegotiation of the just war tradition and the right to war in the twenty-first century. Palgrave. https://doi.org/10.1057/9780230612037
O’Meara, C. (2021). Necessity and proportionality and the right of self-defence in international law. Oxford University Press. https://doi.org/10.1093/oso/9780198863403.001.0001
O’Neill, O. (1986). Who can endeavour peace? Canadian Journal of Philosophy Supplementary Volume, 12, 41–73. https://doi.org/10.1080/00455091.1986.10717154
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
Pettit, P. (2001). A theory of freedom: From the psychology to the politics of agency. Oxford University Press.
Price, R. (1998). Reversing the gun sights: Transnational civil society targets land mines. International Organization, 52(3), 613–644. https://doi.org/10.1162/002081898550671
Saunders, E. N. (2024). The insiders’ game: How elites make war and peace. Princeton University Press.
Shepsle, K. A. (2008). Rational choice institutionalism. In Binder, S. A., Rhodes, R. A. W. & Rockman, B. A. (Eds.), The Oxford handbook of political institutions (pp. 23–38). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199548460.003.0002
Sidey-Gibbons, J. A. M., & Sidey-Gibbons, C. J. (2019). Machine learning in medicine: A practical introduction. BMC Medical Research Methodology, 19(1), 64. https://doi.org/10.1186/s12874-019-0681-4
Sienknecht, M. (2025). Institutionalizing proxy responsibility: AI oversight bodies and resort-to-force decision making. AI and the Decision to Go to War Special Issue, Cambridge Forum on AI: Law and Governance, 1, 1–19.
Skitka, L. J., Mosier, K. L., & Burdick, M. (1999). Does automation bias decision-making? International Journal of Human-Computer Studies, 51(5), 991–1006. https://doi.org/10.1006/ijhc.1999.0252
Skitka, L. J., Mosier, K. L., Burdick, M., & Rosenblatt, B. (2000). Automation bias and errors: Are crews better than individuals? The International Journal of Aviation Psychology, 10(1), 85–97. https://doi.org/10.1207/S15327108IJAP1001_5
Sparrow, R. J., & Henschke, A. (2023). Minotaurs, not centaurs: The future of manned-unmanned teaming. The US Army War College Quarterly: Parameters, 53(1), 115–130.
Stone, J. (2016, July 6). David Cameron refuses to say the Iraq war was “wrong” or “a mistake” following damning Chilcot Report. Independent. Available at: https://www.independent.co.uk/news/uk/politics/david-cameron-iraq-war-chilcot-report-response-apology-refuses-live-latest-updates-a7122886.html, accessed 10 January 2025.
Strengers, Y., & Kennedy, J. (2020). The smart wife: Why Siri, Alexa, and other smart home devices need a feminist reboot. The MIT Press. https://doi.org/10.7551/mitpress/12482.001.0001
Suchman, L. (2023). Imaginaries of omniscience: Automating intelligence in the US Department of Defense. Social Studies of Science, 53(5), 761–786. https://doi.org/10.1177/03063127221104938
Turing, A. M. (1950). Computing machinery and intelligence. Mind, LIX(236), 433–460. https://doi.org/10.1093/mind/lix.236.433
United Kingdom (UK) Parliament. (2017, January 1). Annex: Learning Lessons from the Iraq Inquiry: The National Security Adviser’s Report. www.parliament.uk. Available at: https://publications.parliament.uk/pa/cm201719/cmselect/cmpubadm/708/70803.htm, accessed 10 January 2025.
United Nations. (1945). United Nations Charter. https://www.un.org/en/about-us/un-charter/full-text, accessed January 10, 2025.
United Nations. (2005, October 24). World summit outcome: General Assembly Resolution 60/1. https://undocs.org/en/A/Res/60/1, accessed January 10, 2025.
United Nations Secretary-General (2009, January 12). Implementing the responsibility to protect: Report of the Secretary-General. https://digitallibrary.un.org/record/647126?v=pdf
Vallor, S. (2024). The AI mirror: How to reclaim our humanity in an age of machine thinking. Oxford University Press.
Walzer, M. (1977/2015). Just and unjust wars: A moral argument with historical illustrations (2nd edn.). Basic Books.
Weber, M. (1994). Weber: Political writings (P. Lassman & R. Speirs, Eds.). Cambridge University Press.
Wendt, A. (1999). Social theory of international politics. Cambridge University Press.
Wheeler, N. (2000). Saving strangers: Humanitarian intervention in international society. Oxford University Press.
Wong, Y. H., Yurchak, J., Button, R. W., Frank, A. B., Laird, B., Osoba, O. A., … Bae, S. J. (2020). Deterrence in the age of thinking machines. RAND Corporation. https://www.rand.org/pubs/research_reports/RR2797.html
Zala, B. (2024). Should AI stay or should AI go? First strike incentives & deterrence stability. Australian Journal of International Affairs, 78(2), 154–163. https://doi.org/10.1080/10357718.2024.2328805