1. Introduction
Computer says, “war”? Our title is intended as a provocation. Yet, it is not the product of futuristic forebodings directed at “the singularity,” a hypothetical point at which artificial intelligence (AI) would surpass our human capacities, escape our control, and (some argue) threaten our existence.Footnote 1 Rather, it aims to paint a picture that is more mundane and more immediate. Our title depicts a scenario in which existing AI-driven systems influence state-level decisions on whether and when to wage war. Such a scenario is soberly informed by what we maintain are impending changes in strategic decision making – changes that we have robust reasons to anticipate and can already observe in nascent form.
AI is increasingly relied upon to support, and even substitute for, human decision making in a host of realms, including the criminal justice system, medical diagnostics, and aviation. AI-driven systems that rely on machine-learning (ML) techniques can analyse huge quantities of data quickly, reveal patterns of correlation in datasets that would be impossible for human actors to uncover, and draw on these patterns to identify risks and make predictions when confronted with new data. These capacities would seem to make such systems obvious assets when it comes to state-level decision making on the initiation of armed conflict. After all, assessments of, inter alia, current threats, the likely actions of adversaries, and the consequences of possible courses of action and inaction are fundamental to such deliberations. Indeed, the diagnostic and predictive functions of ML decision systems have reportedly already been tested in the context of anticipating adversaries’ acts of military aggression – for example, by the national security consultancy Rhombus Power and by Silicon Valley contractors working with US special forces (McChrystal & Roy, Reference McChrystal and Roy2023; “How America built an AI tool to predict Taliban attacks,” 2024; see Erskine & Davis, Reference Erskine and Davis2026 for a discussion). Moreover, AI-enabled systems currently contribute indirectly to state-level decision making on the initiation of armed conflict through their use in intelligence collection and analysis (Deeks et al., Reference Deeks, Lubell and Murray2019; pp. 2, 6; Logan, Reference Logan2024; Suchman, Reference Suchman2023). Given the ubiquity of these technologies across multiple decision-making domains, their potential value in contributing to deliberations over war initiation, and states’ perceived need to maintain a strategic advantage vis-à-vis their adversaries, we suggest that more direct contributions to state-level resort-to-force decision making by existing AI-enabled systems are imminent and inevitable.
The prospect of existing AI technologies more directly contributing to state-level decisions on whether and when to wage war is conceivable in two general ways. AI-driven systems acting autonomously could independently calculate and carry out courses of action that would constitute the resort to war. Here, we might think of various manifestations of automated self-defense, where, in specific contexts, the initiation of armed conflict could be effectively delegated to a machine. This could occur, for example, through potentially volatile interactions between autonomous underwater or aerial vehicles leading to unintended escalations, or as a result of automated responses to cyber attacks (see, for example, Deeks, Reference Deeks2024, Reference Deeks2026; Deeks et al., Reference Deeks, Lubell and Murray2019, pp. 7–10, 18–19). Alternatively, non-autonomous AI-driven systems in the form of so-called “decision-support systems” could contribute to deliberations over the resort to force by providing predictions or recommendations that inform human decision making (see, for example, Davis, Reference Davis2024; Deeks et al., Reference Deeks, Lubell and Murray2019, pp. 5–7, 10–11; Erskine, Reference Erskine2024b). In these latter cases, although some human actors in the decision-making chain might be displaced, human decision makers would remain the final arbiters of whether and when to resort to force.
By focusing on these possibilities, this Special Issue – like the earlier collection (Erskine & Miller, Reference Erskine and Miller2024b) upon which it builds and the broader research project from which both have emerged – embodies a significant shift in focus when it comes to analysing military applications of AI. Amongst both scholars and policy makers, there is a prevailing preoccupation with AI-driven systems used in the conduct of war: weapons that possess various degrees of autonomy, including, most prominently, the emerging reality of “lethal autonomous weapons systems” (LAWS); and decision-support systems that rely on big data analytics and machine learning to recommend targets, such as those that have gained particular notoriety through their use by Israel in the war in Gaza (see, for example, Davies, McKernan & Sabbagh, Reference Davies, McKernan and Sabbagh2023; Yuval, Reference Yuval2024). These are important objects of study. Yet, our attention should also encompass the relatively neglected prospect of AI-driven systems that would variously determine or inform the resort to war.Footnote 2 As we have described previously, the shift in focus that we are advocating takes us “from AI on the battlefield to AI in the war-room” (Erskine & Miller, Reference Erskine and Miller2024a, p. 138). Rather than focusing on tactical decisions surrounding selecting and engaging targets in armed conflict, we explore the necessarily prior stage of state-level, strategic deliberation over the very initiation of war and military interventions. As such, we move from jus in bello to jus ad bellum considerations in the language of the just war tradition, and from actions adjudicated by international humanitarian law to actions constrained and condoned by the United Nations (UN) Charter’s (1945) prohibition on the resort to force and its explicit exceptions.
As we acknowledge and anticipate this emerging application of AI in the military domain, there is an urgent need to identify the legal, ethical, sociotechnical, political, and geopolitical challenges that will accompany it – and determine how best to respond to them. This is the task of the 13 research articles that follow. Our aim here is to introduce this collection by briefly highlighting four recent developments that we suggest are directly relevant to how AI-enabled systems will infiltrate and influence resort-to-force decision making. Specifically, we address the following: the widespread tendency to misperceive the latest AI-enabled technologies as increasingly “human”; the changing role of “Big Tech” in the global competition over military applications of AI; a conspicuous blind spot in current discussions surrounding international regulation; and the emerging AI-nuclear weapons nexus. AI-driven systems, we maintain, will increasingly inform state-level decisions on whether and when to wage war, and together these four factors represent a backdrop of rapid change and profound global uncertainty that will affect the trajectory of this phenomenon. Each must be addressed as scholars and policymakers determine how best to prepare for, direct, and respond to the prospect of AI-informed decisions on the initiation of armed conflict.
2. The illusion of increasingly “human” AI
While the phenomenon of AI infiltrating resort-to-force decision making promises far-reaching geopolitical effects, when it comes to recent transformations that will significantly influence the nature of this impact, it is necessary to zoom in on a subtle shift at the individual human level of analysis. The first point of change and upheaval that we will address is simply how human actors have come to perceive and interact with rapidly evolving and seemingly ubiquitous AI-driven systems.
By “AI” we mean, simply, the capacity of machines to imitate aspects of intelligent human behaviour – noting that this general label is also often used for the wide range of technologies that display this capability. Yet, the significance of this defining feature of imitation risks being overlooked as human users stand in awe of emerging AI-driven technologies, particularly those inferential models that rely on ML techniques. In November 2022, OpenAI launched “ChatGPT,” a chatbot named for the “generative pre-trained transformer” (GPT) models on which it is built.Footnote 3 This was followed by Meta’s “Llama,” Anthropic’s “Claude,” and Google’s “Gemini,” all launched in 2023, with increasingly sophisticated iterations of each family of models appearing in rapid succession. The ability of these large language models (LLMs) to pass the so-called “Turing test” (75 years after it was first proposed) by convincingly mimicking human expression has been startling (Jones & Bergen, Reference Jones and Bergen2025).Footnote 4 Yet, it is the perception – or misperception – of what this means that has the potential to be profoundly consequential.
Users tend to embrace a conception of these generative AI models becoming increasingly “human” – a conception reflected, in its most enthusiastic form, in unfounded claims that we have already achieved, or are on the brink of achieving, artificial general intelligence (AGI). Yet, while LLMs may appear to deliberate and exercise judgment in response to our questions and prompts, understand what they are conveying, and even engage in critical self-reflection, they possess none of these capacities. A number of problems follow from this misalignment. First, it can lead to a misunderstanding about how these systems function. For example, there is a common misperception that LLMs “make mistakes” when they generate false information and that they sometimes “make things up” (or “hallucinate”). Both accounts are misguided. These systems are not designed to seek the truth, but rather to produce probabilistic outputs. They function by statistical inference, simply predicting which symbol is likely to follow in a series. When these outputs contain false information, the models are not making a mistake but, rather, doing exactly what they were designed to do. Moreover, it is not the case that they sometimes make things up; they always make things up. Sometimes this happens to correspond with reality.
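To make this generative mechanism concrete, the following sketch may help. It is a purely illustrative toy of our own devising – a simple bigram counter, orders of magnitude simpler than any deployed LLM – but it shares the relevant character: it records which token tends to follow which in a corpus and then samples continuations from those statistics, and at no point does anything in the procedure consult the world or check an output for truth.

```python
# A deliberately tiny, illustrative "language model" (not any production system):
# it counts which token follows which in a toy corpus, then samples continuations
# in proportion to those counts. Truth never enters the procedure.
import random
from collections import Counter, defaultdict

corpus = "the state may resort to force . the state may seek peace .".split()

# Count next-token frequencies (a toy bigram model).
following = defaultdict(Counter)
for current_token, next_token in zip(corpus, corpus[1:]):
    following[current_token][next_token] += 1

def predict_next(token: str) -> str:
    """Sample a next token in proportion to how often it followed `token`."""
    candidates = following[token]
    tokens, weights = zip(*candidates.items())
    return random.choices(tokens, weights=weights)[0]

# Generate a continuation: pure statistical inference over observed patterns.
sequence = ["the"]
for _ in range(5):
    sequence.append(predict_next(sequence[-1]))
print(" ".join(sequence))  # e.g. "the state may seek peace ." or "the state may resort to force"
```

Modern LLMs replace this frequency table with billions of learned parameters and condition on long contexts rather than a single preceding token, but the generative principle – probabilistic continuation rather than truth-seeking – is the same.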
A second problem, which follows on from the first, is that it is easy to overlook the limits of such systems. People may reasonably value the computational abilities of AI-driven systems that do indeed surpass those of human actors, but will run into difficulty if they imagine these to be accompanied by certain human capacities. In other words, expecting such systems to be able to seek truth and carefully reflect on recommendations leaves users unprepared for the system’s inability to do so. Moreover, false confidence in the capacities that these systems merely mimic, and an accompanying lack of attention to their limitations, will only exacerbate existing deference to their outputs – a deference that already results from susceptibility to “automation bias,” or the human tendency to trust, accept, and follow computer-generated outputs.Footnote 5 Finally, such misperceptions of the capacities of AI-driven systems might lead to mistaken (perhaps wishful) attributions of an agency that would ostensibly allow them to bear responsibility for particular actions and outcomes, thereby letting their users – including political and military leaders charged with weighty decisions of whether and when to wage war – off the moral hook (Cummings, Reference Cummings2006, p. 28; Erskine, Reference Erskine2024a, pp. 552–54; Erskine Reference Erskine2024b, pp. 180–82).
When it comes to AI-driven technologies and resort-to-force decision making, the illusion of increasingly “human” AI could have grave geopolitical repercussions if it means that AI-generated outputs are perceived as the product of truth-seeking, reasoned, and even ethically deliberated judgments. Decision makers might thereby be more willing to either defer to the predictions and recommendations of these systems (in the context of AI-enabled decision-support systems) or delegate decision making and action to them (in the case of automated defense systems). The danger, we suggest, is not the appearance of “emergent properties” in AI systems that would see them escape human control and threaten our collective existence, but, rather, simulated capacities that may entice those in power to cede responsibility for important decisions and actions.
This dangerous illusion and resulting temptation to abdicate responsibility for crucial decisions and actions are exacerbated in the context of our second point of change and upheaval: the perceived geopolitical “race” to develop and employ these AI-enabled systems for military advantage.
3. The accelerating, militarized “race” of the tech leviathans
Accounts of a global AI “race” abound. Descriptions are offered, for example, of an “AI arms race” (Haner & Garcia, Reference Haner and Garcia2019; Knight, Reference Knight2023; Simonite, Reference Simonite2017; United Nations, 2024), an “AI military race” (Garcia, Reference Garcia2023), a “geopolitical innovation race” surrounding AI research and development (Schmid et al., Reference Schmid, Lambach, Diehl and Reuter2025), and a “technology race to adopt AI” (Scharre, Reference Scharre2021, p. 122). These are variously conceived with reference to geopolitical competition over military applications of AI, often with a specific focus on LAWS, or in relation to AI innovation more generally, whereby states – most prominently the United States and China – compete for technological leadership with the aim of achieving not only military but also economic and political superiority. Indeed, laments of an American race lost, or forfeited, greeted the release of a new chatbot in January 2025 by the Chinese start-up DeepSeek (Belanger, Reference Belanger2025, pp. 3–4). Despite debates over labels and appropriate analogies, there is general agreement that the global competition over AI innovation is imbued with a sense of urgency.
We are concerned here with one aspect of this perceived race: the emerging role of powerful tech companies in what is an increasingly militarized competition. In making this move, these corporations have abandoned hitherto self-imposed ethical guidelines and offered bold justifications for the reversal. Our second snapshot of change brings us back precisely to those LLMs addressed in the previous section and, we suggest, bolsters our assessment of the inevitable and imminent use of such AI-enabled systems in state-level decision making on the resort to force.
Big Tech is not merely part of a global competition in AI innovation. With striking changes to their own policies and purported values, American tech companies are now increasingly contributing to the specifically military dimension of this competition – a move that Assaad (Reference Assaad2025) has aptly described as the “militarization of commercial AI.” In late 2024, Anthropic, Meta, and OpenAI each entered agreements that make their AI models available to US government agencies through defense-industry partners and explicitly framed these deployments as serving defense and national security (Assaad, Reference Assaad2024, Reference Assaad2025; Wiggers, Reference Wiggers2024). In November 2024, Anthropic partnered with the tech data analytics firm Palantir Technologies, alongside cloud-infrastructure provider Amazon Web Services (AWS), to make its Claude AI models available to US defense and intelligence agencies (Palantir, 2024; see also Rosenberg, Reference Rosenberg2024). Palantir (2024) promised that the result would be “new generative AI capabilities” that would “dramatically improve intelligence analyses and enable officials in their decision-making processes.”
The same month, Meta announced that it would make its open-source Llama AI models available to US defense and national security agencies through partnerships with defense and technology companies, including the private AI company Scale AI (Clegg, Reference Clegg2024; see also Outpost, 2024; Sadashiv, Reference Sadashiv2024). As part of this partnership, Scale AI has introduced “Defense Llama,” an LLM built on Meta’s Llama 3, which Scale AI has promoted as being “fine-tuned to support American national security missions” by enabling “military operators, strategic planners, and decision-makers” to “apply the power of generative AI to their unique use cases, such as planning military or intelligence operations and understanding adversary vulnerabilities” (The Scale Team, 2024).
In December 2024, OpenAI, creator of ChatGPT, entered a “strategic partnership” with defense technology startup Anduril Industries “to develop and responsibly deploy advanced artificial intelligence (AI) solutions for national security missions,” with the specific aim to “address urgent Air Defence capability gaps across the world” and “enable military and intelligence operators to make faster, more accurate decisions in high-pressure situations” (Anduril, 2024; see also Assaad, Reference Assaad2024; Ferran, Reference Ferran2024; Shanklin, Reference Shanklin2024).Footnote 6 Notably, these national-security deployments of generative AI models – with explicit promises of high-level decision making, strategic planning, and command-and-control functionsFootnote 7 – map directly on to the types of tools and applications required for the AI-enabled resort-to-force decision making (in both its automated self-defense and decision-support system variations) that we have been arguing is on the horizon (Erskine & Miller, Reference Erskine and Miller2024a, pp. 138–39; Erskine, Reference Erskine2024b, pp. 175–78).Footnote 8
These public commitments by American tech corporations to support military endeavours and to act in the service of defense and national security are also notable because they represent a considerable and consequential shift in policy and purpose. Indeed, in the case of Meta, this commitment effectively circumvented its own guidelines (subsequently reiterated in relation to Llama 4) prohibiting the use of its Llama model “to engage in, promote, incite, facilitate, or assist in the planning or development of” activities related to “military, warfare, nuclear industries or applications” (Meta, 2025; see also Assaad, Reference Assaad2024).Footnote 9 Not only has Meta allowed derivative models to be regulated under its partners’ more permissive policies, but it has effectively enabled military-use exceptions through partner licensing and model fine-tuning, even while retaining formal prohibitions in its public policy language (Moorhead, Reference Moorhead2024).
OpenAI and Google have made comparable moves, although by abandoning rather than circumventing their respective codes. In January 2024, OpenAI adjusted its position on allowing its models to be used for military applications. While it retained its prohibition against employing its models for the development or use of weapons systems, OpenAI nevertheless discarded its prohibition against contributing to other “military and warfare” applications (Frenkel, Reference Frenkel2025; Shanklin, Reference Shanklin2024). Likewise, in February 2025, Google announced a fundamental change to its AI principles (Manyika & Hassabis, Reference Manyika and Hassabis2025; see also Assaad, Reference Assaad2025; Bacciarelli, Reference Bacciarelli2025). As originally published in 2018, these principles had included commitments “not to design or deploy AI” in a number of application areas, including: “[t]echnologies that cause or are likely to cause overall harm,” “[w]eapons and other technologies whose principal purpose or implementation is to directly facilitate injury to people,” and “[t]echnologies that gather or use information for surveillance violating internationally accepted norms” (Pichai, Reference Pichai2018). Surprisingly, all of these prohibitions were removed in its revised principles (Assaad, Reference Assaad2025; Bacciarelli, Reference Bacciarelli2025; Google, 2025). Within an industry where, until recently, forswearing contributions to military endeavours had been the norm, this seismic cultural shift was perhaps most vividly displayed in June 2025 when four prominent Silicon Valley executives were commissioned as lieutenant colonels in the US Army (Desmarais, Reference Desmarais2025; Levy, Reference Levy2025).Footnote 10
How the American tech industry has defended its fundamental realignment with national security priorities is significant, not least because these actors overwhelmingly determine global AI capacity and direction. Rather than casting aside ethical commitments as such, they have espoused an alternative moral compass. We can reasonably assume that these tech corporations have followed the promise of financial gain by declaring a commitment to support states such as the United States in achieving an AI advantage in the military realm. However, they justify this move and corresponding reversal of policy in often overtly moral language through impassioned appeals to: the urgency of winning the increasingly militarized AI “race” at what they portray as a moment of extreme geopolitical risk; duties to protect national security and preserve the “shared values” of Western democracies; and even a commitment to good over evil.
In offering an account of why Google jettisoned its AI principles, Google representatives explained that “[t]here is a global competition taking place for AI leadership within an increasingly complex geopolitical landscape” and expressed an imperative that “companies, governments, and organizations…that share core values” cooperate to create AI that “protects people, promotes global growth, and supports national security” (Manyika & Hassabis, Reference Manyika and Hassabis2025). According to a Meta spokesperson, “[r]esponsible uses of open source AI models promote global security and help establish the U.S. in the global race for AI leadership” (Clegg, Reference Clegg2024). He added that “[w]e believe it is in both America and the wider democratic world’s interest for American open-source models to excel and succeed over models from China and elsewhere” (Clegg, Reference Clegg2024).Footnote 11 In a similar vein, in the context of announcing its partnership with OpenAI, Anduril Industries (Anduril, 2024) claimed that:
The accelerating race between the United States and China to lead the world in advancing AI makes this a pivotal moment. If the United States cedes ground, we risk losing the technological edge that has underpinned our national security for decades. The decisions made now will determine whether the United States remains a leader in the 21st century or risks being outpaced by adversaries who don’t share our commitment to freedom and democracy and would use AI to threaten other countries.
Continuing this theme of supreme geopolitical emergency, the CEO of Palantir Technologies (with which Anthropic recently partnered), Alex Karp, described the race to develop AI technologies, including to advance LLMs, as “our Oppenheimer moment” (Karp, Reference Karp2023). This historical reference to the race to develop the first nuclear weapons underscores how industry leaders present current geopolitical stakes as allowing, even requiring, what would otherwise be considered morally unacceptable (Ó hÉigeartaigh, Reference Ó hÉigeartaigh2025a, p. 11). Even more starkly, the chief technology officer of Palantir, Shyam Sankar, reflected in an interview in October 2025 that “the moral view we have, which is why we started the company, [is] that the West is a force for good…There is evil in the world and that evil is not us” (Douthat, Reference Douthat2025). These positions embraced by prominent American tech companies – by which they not only espouse moral imperatives to grant themselves permission to dispatch their data-driven models for military endeavours, but also support narratives of an accelerating, militarized AI race in order to establish the urgency of doing so – are particularly significant for the strategic applications of AI-driven technologies in which we are interested.
These tech leviathans are not just joining the perceived geopolitical race over militarized AI innovation. They are actively contributing to its creation. And they have strong incentives for doing so that are commercial rather than security-focused. Appeals to national security and the notion of a civilizational life-or-death geopolitical race serve the interests of these corporations when faced with regulations that would constrain their growth, such as those related to intellectual property protections (Berger, Reference Berger2025, p. 6). Both OpenAI and Google are lobbying the US government to designate AI training on copyrighted data as “fair use,” arguing that this access is crucial for maintaining the United States’ competitive edge in the global AI arena (Berger, Reference Berger2025). Indeed, invoking risks to national security, OpenAI has declared that “the race for AI is effectively over” if US companies are denied such access while “the PRC’s developers have unfettered access to data” (Belanger, Reference Belanger2025). While Big Tech companies make bold appeals to national and civilizational survival in the context of this lobbying, they also frequently fund policy discourse that emphasises, and perpetuates, national-security fears and geopolitical rivalry (Ó hÉigeartaigh, Reference Ó hÉigeartaigh2025a).
Notably, new pathways for commercial AI development to directly serve military objectives create dependencies between defense capabilities and private sector AI advancement. This convergence raises difficult questions about the boundaries between commercial and military AI development, particularly as these companies’ AI models become integral to both civilian and defense applications.Footnote 12 Such dependencies begin to challenge the state’s ostensible monopoly on the legitimate use of force in new and complex ways. After all, these tech corporations create the conditions under which the state’s military capabilities are increasingly mediated through, and reliant upon, private-sector systems.Footnote 13 These dependencies also add credence to what we have argued is the imminent and inevitable use of AI-enabled systems in resort-to-force decision making. We have already made this case based both on the ubiquity of such systems across other realms of human decision making and states’ perceived need to match adversaries’ capabilities. This development represents yet another reason to conclude that AI will increasingly and directly infiltrate resort-to-force decision making. Simply, moving towards algorithm-informed deliberations over whether and when to wage war has become part of a Big Tech business model. Arguably, so has war itself.
This evolving military role of the tech leviathans, along with the incentives and allowances that it creates for them to champion unfettered innovation, leads us directly to questions of international regulation. Our third snapshot of (somewhat slower) change aims to capture a conspicuous regulatory blind spot and emerging awareness of the need to consider constraints on military applications of AI beyond the battlefield.
4. An international regulatory blind spot
When it comes to military applications of AI generally, attempts at international regulation are already curtailed by states’ unwillingness to accept limits that could result in strategic disadvantage by constraining what they are able to develop and deploy. This is part of the geopolitical competition just addressed. Yet, when we turn specifically to the infiltration of AI-enabled systems into resort-to-force decision making, there is also another problem in the form of a regulatory blind spot. Simply, while acknowledging that the process of achieving consensus on international norms governing military applications of AI is bound to be fraught with difficulties, we note that constraints on the use of AI-driven systems for state-level resort-to-force decision making have rarely even made it onto the negotiating table.Footnote 14 This is despite the fact that we have every reason to assume that widely acknowledged risks that accompany the use of these technologies in the conduct of war – such as deference to machine-generated outputs stifling and ultimately eroding human judgment – will also appear if and when they are used for deliberation over its initiation, and with the potential for even greater resulting harm.
Attempts at the international regulation of military applications of AI tend to be directed towards systems that are used on the battlefield, in the conduct of war, rather than those that may infiltrate the war-room and otherwise contribute to crucial stages of decision making and action that would lead to the initiation of organised violence. Examples of this restricted focus can be found in prominent bodies and resolutions that address military applications of AI – and have achieved important areas of consensus (albeit on principles that are legally nonbinding and generally the least contentious of those proposed) – exclusively with respect to AI-enabled weapons. These include the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (LAWS GGE) and the UN General Assembly Resolution 78/241 on Lethal Autonomous Weapons Systems (UN CCW, Reference United Nations CCW2019; United Nations, General Assembly, 2023). Moreover, the consensus reached following the first “Responsible AI in the Military Domain” (REAIM) Summit, a multistakeholder dialogue that brought together representatives from governments, industry, academia, and civil society organizations, held in The Hague in February 2023, was similarly restricted in scope. The 2023 REAIM Summit resulted in the US-led “Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy,” which was signed by 51 countries in November of the same year (Bureau of Arms Control, Verification, and Deterrence, 2023). In the Declaration, “military AI capabilities” are discussed exclusively in the guise of “weapons systems,” with the prescription that states “take appropriate steps, such as legal reviews, to ensure that their military AI capabilities will be used consistent with their respective obligations under international law” qualified as “international humanitarian law,” which governs the conduct of war (Bureau of Arms Control, Verification, and Deterrence, 2023).Footnote 15
This regulatory blind spot is due to multiple factors. To begin, as we have already suggested, the focus on AI-enabled systems in war has been directed specifically towards those that contribute to its conduct rather than its initiation. The resulting neglect affects both categories of AI-enabled systems that we have suggested could inform the resort to force: automated defense systems and AI-enabled decision-support systems. Yet, such neglect is compounded when it comes to the latter category. This is because ML-driven strategic decision-support systems that would draw on the generative AI models discussed above are not readily perceived as threats that warrant regulation – a point exacerbated by the dual-use nature of these systems, which can be used for both strategic decision making and civilian applications.Footnote 16 Unlike both LAWS and AI-enabled decision-support systems used for targeting in the conduct of war, their use in state-level strategic and political decision making is not seen to be directly implicated in the “kill chain” – even though they could help create the conditions that would make myriad “kill chains” possible. In short, the harm to be mitigated through regulation is seen to accompany weapons deployment rather than upstream decision logic. Simply, AI-driven decision-support systems that would metaphorically enter the war-room do not fit the mould of what is to be regulated. In addition to this innocuous guise, their persistent invisibility when it comes to efforts at regulation is further heightened by the practical reality of the cloistered, necessarily secretive nature of the national security deliberations for which they would be usedFootnote 17 – and, of course, by states’ likely reticence to consider restrictions that affect their internal decision making and sovereign war-making prerogative.
Finally, our proposal that AI-driven systems will increasingly infiltrate resort-to-force decision making remains speculative. This is speculation that we have argued is informed by significant changes in a host of other realms; rests on technologies that we have now; and points to near-future (indeed nascent) developments that are currently being ushered into existence (Erskine & Miller, Reference Erskine and Miller2024a, p. 135). Even so, the speculative nature of these anticipated developments may mean that they are easily overlooked due to a “failure of imagination” in terms of what is seen to be possible, where prospective risks lie, and what should therefore be regulated.Footnote 18
Nevertheless, since the publication of our previous collection calling for greater attention to the influence of AI on deliberations over the resort to force (Erskine & Miller, Reference Erskine and Miller2024b), there have been promising gestures towards the need to also consider, and possibly curtail, the impact of military applications of AI beyond the battlefield. The second REAIM Summit, convened on 9–10 September 2024, in Seoul, Republic of Korea, produced a “Blueprint for Action,” officially endorsed as the outcome document of the Summit. Although nonbinding and largely aspirational, it was supported by 61 states, including the United States, the United Kingdom, Australia, and Japan (Ministry of Foreign Affairs, Republic of Korea, 2024). This “Blueprint for Action” was described by the organisers of the Summit as laying out “a roadmap for establishing norms of AI in the military domain” (REAIM, 2024). There is evidence that the norms that it envisions are not limited to the previous Summit’s focus on LAWS and other AI-driven systems deployed in the conduct of war. The Blueprint for Action opens with an acknowledgement that the aspects of military affairs that AI has the potential to transform include not only military operations but also “command and control, intelligence, surveillance and reconnaissance (ISR) activities, training, information management and logistical support” (Ministry of Foreign Affairs, Republic of Korea, 2024). Moreover, in its numbered recommendations, it includes amongst risks that accompany AI applications in the military domain “risks of arms race, miscalculation, escalation and lowering threshold of conflict” (3), and mentions applicable international law beyond IHL, including the UN Charter (7) (Ministry of Foreign Affairs, Republic of Korea, 2024) – all indications of an emerging acknowledgement of the potential role of AI-driven decision systems in the initiation of war. Furthermore, the Blueprint for Action recognises the role of AI in cyber operations (4) and warns that “it is especially crucial to maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment” (5) (Ministry of Foreign Affairs, Republic of Korea, 2024). In sum, despite its enduring focus on AI-enabled weapons and decision-support systems used specifically for combat operations, on each of these points, the Blueprint for Action inches into the territory of AI-enabled systems that could either inform or temporarily displace human decision making on the resort to force.
The UN General Assembly has also recently signalled a shift in its attention beyond LAWS. In December 2024, in Resolution 79/239 adopted by the UN General Assembly, the UN Secretary-General was asked to seek the views of “Member States…observer States… international and regional organizations, the International Committee of the Red Cross, civil society, the scientific community and industry” on the “opportunities and challenges posed to international peace and security by the application of artificial intelligence in the military domain, with specific focus on areas other than lethal autonomous weapons systems” (United Nations, General Assembly, 2024).Footnote 19 A substantive report by the UN Secretary-General, which summarises submitted views and catalogues “existing and emerging normative proposals” pursuant to resolution 79/239, followed in June 2025 (United Nations, General Assembly, 2025). This emerging shift in focus is significant – even if it is only in its early stages – and has been both prompted and foreshadowed by the articles in this special issue.Footnote 20
Attempts at international regulation of military applications of AI must overcome persistent blind spots and consider the influence of both autonomous and non-autonomous AI-enabled systems on the resort to force. When it comes specifically to AI-enabled decision-support systems that become embedded upstream in threat perception and strategic assessment, regulation must address the cognitive pipeline through which war decisions are shaped. How this is done, what principles of restraint are required, and who or what should be regulated – Big Tech as an emerging agent of war is a challenging and crucial candidate to consider – are questions that require urgent attention.
Notably, there is one area where AI’s potential influence on resort-to-force decision making has already produced some consensus on the need to cooperate and impose constraints: at the intersection of AI and nuclear weapons, the subject of our final snapshot of consequential change.
5. The AI-nuclear weapons nexus
In response to the wave of attention that has been devoted to the feared existential threat posed by future generations of AI, we have maintained the importance of uncovering and attempting to mitigate the immediate risks that accompany already-existing AI-driven technologies. Our concern has been that a preoccupation with the prospect of AGI as an existential threat occludes dangers with which we are already confronted (Erskine, Reference Erskine2024b, p. 176). Nevertheless, in one important domain, related specifically to resort-to-force decision making, our current iterations of AI do threaten to contribute to an existential threat – that is, the existential threat posed by nuclear weapons.
From the dawn of the nuclear age, it has been recognized that nuclear weapons can inflict massive, catastrophic harm – that they represent what Bertrand Russell and Albert Einstein, in a famous 1955 manifesto, identified as a species-threatening technology. “We are speaking on this occasion,” Russell and Einstein wrote in one of the most dramatic appeals of the nuclear age (and in cosmopolitan language starkly at odds with Big Tech’s more recent and particularistic calls to action in the face of existential peril), “not as members of this or that nation, continent, or creed, but as human beings, members of the species Man, whose continued existence is in doubt.”Footnote 21 How will AI interact with a technology that possesses such enormous destructive potential – one that, in the worst case, is capable of destroying modern civilization? Because the stakes are so high and the risks are so real and immediate, this AI-nuclear weapons nexus looms particularly large in thinking about the implications of AI for resort-to-force decision making.
AI could figure in decisions about whether to use nuclear weapons in three ways: It could be allocated decision-making discretion; it could serve as an input, or even the input, in human decision making; and it could influence decisions as a consequence of its impact on the nuclear balance. There is already much debate about the cumulative effect of AI on the nuclear order, with some commentators arguing that it can have useful, benign, or stabilizing consequences.Footnote 22 However, it is the risks and potential adverse impacts that compellingly command attention due to the real possibility of truly disastrous outcomes. As one recent analysis urges, “it is essential for policymakers to be aware of the risks posed by the AI-nuclear nexus” (Stokes et al., Reference Stokes, Kahl, Kendall-Taylor and Lokker2025, p. 1). Failure to address those risks, it concludes, “could leave the country and the world dangerously exposed to risks and ill-prepared to seize any opportunities arising from the increasingly salient AI-nuclear nexus” (Stokes et al., Reference Stokes, Kahl, Kendall-Taylor and Lokker2025, p. 1). Depp and Scharre (Reference Depp and Scharre2024) convey the concern in stark terms: “Improperly used, AI in nuclear operations could have world-ending effects.”
5.1. Allocating nuclear decision-making discretion to AI
One possible nuclear application of AI seems to have died a rapid death. There appears to be very little inclination anywhere to fully automate nuclear-use decision making, to turn an extraordinarily consequential decision over to AI – though in principle this could bolster deterrence by increasing the likelihood and effectiveness of retaliation. As Tom Nichols (Reference Nichols2025a) has explained, “Some defense analysts wonder if AI – which reacts faster and more dispassionately to information than human beings – could alleviate some of the burden of nuclear decision making. This is a spectacularly dangerous idea.” Relevant decision makers around the world have responded to this “spectacularly dangerous idea” by rejecting it.
In recent years, attention to the AI-nuclear weapons nexus has produced a profusion of assertions and assurances from many countries and in various settings that nuclear decision making should and will remain in human hands. The world seems to have converged on the notion that there should always be a “human in the loop” in this context, meaning simply that human actors must be active participants in AI-assisted decisions and outcomes and must be the ones to instigate an action; no action is to be taken without human intervention.Footnote 23 In its 2022 Nuclear Posture Review and subsequent official documents, for example, the United States has repeatedly made this pledge. In its recent update of nuclear planning guidance, the US Department of Defense (2024, p. 4) reiterates, “In all cases, the United States will maintain a human ‘in the loop’ for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapons employment.”Footnote 24 The issue has figured in high-level diplomacy. A notable instance is the Sino-American meeting in November 2024, at which President Biden and President Xi agreed that nuclear use decisions should remain in human control.Footnote 25 This issue has similarly figured in multilateral diplomacy. At the Review Conference of the Nuclear Nonproliferation Treaty (NPT) in 2022, France, the United Kingdom, and the United States submitted a document on “Principles and Responsible Practices for Nuclear Weapon States,” which includes a pledge by their governments to “maintain human control” of nuclear decision making (France, United Kingdom & United States, 2022, para. 5.vii). Moreover, as already noted, the 2024 Blueprint for Action, which emerged from the second REAIM Summit, categorically states that “human control and involvement” is essential for all actions that are “critical to informing and executing sovereign decisions concerning nuclear weapons employment” (Ministry of Foreign Affairs, Republic of Korea, 2024). In short, although AI offers the potential for automating at least some military decision making and creating (partially or fully) autonomous military capabilities, when it comes to the nuclear realm, there appears to be international consensus that human discretion is imperative. An algorithm should not make decisions that could lead to Armageddon.
5.2. Keeping humans in the loop
The expectation is, then, that there will be a human in the loop when nuclear use decisions are made. The allocation of decision-making discretion to AI is thus avoided – precluding the long-imagined and long-feared nightmare of machine-driven employment of nuclear weapons (Nichols, Reference Nichols2025b). But as AI is integrated into intelligence analysis and decision-support systems, it is likely to become an increasingly significant input into the calculus of top decision makers. Despite the comforting “human-in-the-loop” assurance that decision-making authority will remain with human actors, individuals at the apex of power, making momentous decisions, will be utilizing – perhaps heavily relying upon – data, conclusions, and recommendations generated by AI. Ensuring that humans are in the loop, therefore, is not enough. We need also to ask: What are the implications of AI-enabled systems for nuclear decision making even with humans in the loop?
The answer requires that fundamental questions be addressed. In the era of Putin and Trump, for example, it seems obvious that it matters which humans are in the loop.Footnote 26 But what does it mean to have a human in the loop? Intelligence operations or command-and-control arrangements that are heavily dependent on AI may have a human making decisions at the top, but AI-generated outputs may become the dominant contributing factors in deliberations over the use of nuclear weapons. Will the leader understand this loop and their role in it? There are strong claims in the literature that this is not possible. “No single human,” write Leins and Kaspersen (Reference Leins and Kaspersen2021), “has the capacity to understand and oversee” complex AI systems. Can the human control the loop? Calls for human actors to be the final arbiters of nuclear decisions, and maintain effective human oversight of decision-making processes, collide with the automation bias already discussed and with the speed, scale, complexity, opacity, and incomprehensibility of AI-enabled systems examined in subsequent contributions to this Special Issue. If the moment ever arrives, the decision to use nuclear weapons will be taken by a small group of human beings (in the United States the president alone has “sole authority” to order nuclear employment) who, even while remaining formally in the nuclear-decision-making loop, risk having their roles collapse into merely endorsing computer-generated outputs that are imbued with authority and that they can neither understand nor effectively audit.
And will there be a single loop? Modern governments are large, complex bureaucracies filled with competing interests and organizations that are likely to use AI in their own way for their own purposes. In the realm of intelligence, the United States is probably an extreme example of this reality: its intelligence community comprises 18 different civilian and military organizations, loosely overseen by a Director of National Intelligence, with an aggregate annual budget in excess of $100 billion (Office of the Director of National Intelligence, 2025). In theory, it is possible to imagine that this enormous complex will provide the President with clear, coherent, consolidated AI-informed advice, but, in practice, these organizations are often jealous of their own roles and prerogatives and see others as rivals rather than partners. These “fiercely independent” units, as Dexter Filkins (Reference Filkins2025) has written, do not “cooperate smoothly.” Filkins (Reference Filkins2025) quotes a Washington insider to illustrate the point:
The Air Force won’t work with the Navy. The Army won’t work with the Air Force. The N.S.A. [National Security Agency] won’t work with anybody. The National Reconnaissance Office won’t work with anybody. The National Reconnaissance Office and the National Geospatial-Intelligence Agency are both supposed to work with the N.S.A.—and they won’t talk to each other.
In such an environment, it seems probable that different organizations, and even different groups within an organization, will develop and utilize their own AI-enabled systems, each producing their own analyses and outputs. This raises the possibility that the leader might receive competing AI-generated assessments and recommendations. Even if intelligence procedures were to produce a consolidated recommendation to the leader, this would nevertheless represent an amalgamation of, or possibly a least common denominator among, alternative AI outputs. It is plausible that leaders, at a moment of crisis and while contemplating the use of nuclear weapons, will have to contend with multiple “loops”, compounding problems of deference and opacity.Footnote 27
Despite these challenges, keeping humans in the loop is accepted as the only sensible response to the possible application of AI to nuclear decision making. However, the risks and challenges associated with AI-informed decision making about nuclear weapons use also introduce a new family of concerns about nuclear escalation (Chernavskikh & Palayer, Reference Chernavskikh and Palayer2025).
5.3. AI and the stability of the nuclear balance
AI can also influence decision making about the use of nuclear weapons by its effect on the nuclear balance, which can alter the incentives and calculations of leaders. This will depend on the character of forces and doctrines and on whether the AI impacts are stabilizing or destabilizing, favorable or unfavorable. As Lin (Reference Lin2025, p. 99) notes, “AI’s risks and benefits in nuclear command and control cannot be assessed apart from a nation’s nuclear posture, which includes force structure, arrangements and infrastructures for nuclear command and control, doctrine, and strategic priorities.” A study from the US Center for Naval Analyses (McDonnell, Chesnut, Ditter, Fink & Lewis, Reference McDonnell, Chesnut, Ditter, Fink and Lewis2023, p. 24) similarly links the intersection of AI, doctrine, and force posture to the “risk calculus” of decision makers: “Use of AI for nuclear operations also has the potential to increase the risk of nuclear mission failure or escalation by altering the actual or perceived balance of military power among nuclear-armed states. This is because the balance of military power is a powerful driver of leaders’ decisions to wage war or to escalate an ongoing war.”
The rapidly emerging literature on AI and nuclear weapons is generally careful to recognize that AI can have stabilizing as well as worrying consequences in the nuclear realm (and in military competitions more broadly) and that the net impact is not inherently negative (Burdette, Mueller, Mitre & Hoak, Reference Burdette, Mueller, Mitre and Hoak2025; see also Holmes & Wheeler, Reference Holmes and Wheeler2024). There are, however, a number of ways in which AI can potentially increase leaders’ confidence in their nuclear advantages (which they might wish to exploit) or undermine their confidence in the survivability of their deterrent forces (which could produce preemptive incentives to escalate in a crisis).Footnote 28 Persistent surveillance combined with rapid intelligence assessment can make mobile forces vulnerable, for example. More advanced ocean sensing capabilities can improve antisubmarine warfare, raising risks to these key deterrence assets that leaders may worry about or seek to exploit. AI may contribute to cyber opportunities and vulnerabilities involving preemptive attacks on nuclear command and control that seek to interrupt the capacity to launch forces – so-called “left of launch” attacks.Footnote 29 AI improvements in conventional strike capabilities could make them more threatening to nuclear assets of the other side. For the foreseeable future, AI is unlikely to cause massive vulnerabilities that would invite disarming nuclear strikes, but AI-induced increases in the vulnerability of deterrent forces could produce “destabilizing uncertainty” in the minds of decisionmakers (Winter-Levy & Lalwani, Reference Winter-Levy and Lalwani2025). It is imaginable, in short, that the integration of AI into nuclear force postures could result in vulnerabilities and instabilities, real or perceived, that create potential incentives for nuclear use.
In sum, the AI-nuclear weapons nexus raises the most immediate existential concerns related to the rise and adoption of AI. Its integration into the nuclear postures of nuclear armed states is happening and appears inevitable (Rautenbach, Reference Rautenbach2022). This development reinforces the importance of carefully examining the role of AI in resort-to-force decision making.
6. This special issue
Against this backdrop of change and uncertainty, the contributors to this Special Issue come together from diverse disciplinary backgrounds – political science, international relations (IR), physics, law, computer science, engineering, sociology, philosophy, and mathematics – to interrogate the risks and opportunities that AI-driven systems would bring to resort-to-force decision making. In doing so, they each reflect on particular challenges situated within one of the “four complications” that we have suggested would accompany the infiltration of AI-enabled systems into deliberations over whether and when to wage war (Erskine & Miller, Reference Erskine and Miller2024a, pp. 139–40).Footnote 30 These complications frame the broader project from which this collection has emerged.
Complication 1 is concerned with the displacement of human judgment in AI-driven resort-to-force decision making and what this would mean for both the unintended escalation of conflict and deterrence theory. Simply, whether autonomously calculating and implementing a response to a particular set of circumstances, or generating outputs to aid human decision makers, AI-driven systems will neither “think” (an impossibility for such systems) nor behave as human actors do. This could be profoundly consequential. When faced with an adversary, a state’s interpretation of that adversary’s likelihood of engaging in armed conflict is based on the assumption that such choices are guided by human deliberation, human forbearance, human fears, and human practices and rituals of communication (such as signalling to de-escalate conflict, for example). The use of AI-driven systems can cause a disjuncture between this assumption and reality. This is perhaps most acute in cases of automated self-defense, when machines are temporarily delegated the authority to both generate and implement responses to indications of aggression without human intervention (and do this at speeds beyond the capacity of human actors, thereby accelerating decision cycles). However, it is also a risk when AI-enabled systems produce diagnostics and predictions that inform human decision making – particularly when human actors accept these recommendations without question due to their tendency to defer to machines, or when accelerated decision cycles diminish human oversight and elevate algorithmic calculations. Both cases can result in unintended escalations in conflict and, moreover, threaten to undermine assumptions relied upon for effective deterrence.
Turning to state-owned autonomous systems that could independently engage in actions that would constitute the resort to war, Deeks (Reference Deeks2026) addresses aspects of this first complication. Specifically, she constructs three scenarios in which such systems erroneously initiate war – the “Flawed System,” the “Poisoned System,” and the “Competitive AI System” – and asks what standard of care under international law a state must have adhered to in order to avoid international responsibility. In the context of this important analysis, she hypothesizes sobering instances of unintended escalation between autonomous underwater vehicles (AUVs) leading to war, including one scenario where she suggests that a contributing factor to such escalation is the autonomous system’s inability to decipher the signal that a warning shot is meant to convey (Deeks, Reference Deeks2026, p. 3). Continuing this theme of misperception and unintended escalation in an AI-influenced strategic environment, Zatsepina (Reference Zatsepina2026) confronts the integration of AI in nuclear decision-making processes and its implications for deterrence strategies. Revisiting the precarity of the “AI-nuclear weapons nexus” discussed in Section 5, she highlights the problem of AI adding speed, opacity, and algorithmic biases to decision making, thereby radically disrupting the realist assumption that the threat of mutual destruction will motivate rational actors to behave with caution. Finally, also motivated by a particular concern for the impact of AI-enabled systems on deterrence and escalation dynamics, Assaad and Williams (Reference Assaad and Williams2026) turn specifically to decision-support systems, which they argue add complexity to the already-complex decision-making processes that war initiation entails. They identify three dimensions of complexity that current developments in AI are likely to introduce to resort-to-force decision making in the near future – “interactive and nonlinear complexity,” “software complexity,” and “dynamic complexity” – and explore how each poses risks that must be addressed.
Complication 2 takes us from the consequential differences between machines and human actors in resort-to-force decision making to the relationship that human actors have with AI-driven systems and the effects of their interaction. This complication brings the discussion back to the detrimental impact of automation bias, or the tendency to accept computer-generated outputs – a tendency that can make human decision makers less likely to use (and maintain) their own expertise and judgment. It thereby highlights the dangers of deference and deskilling – including “moral deskilling” (Vallor, Reference Vallor, Podens, Stinissen and Maybaum2013) – that human actors face when they rely on AI-driven decision systems. It also raises questions of trust: the degree to which human actors trust AI-driven systems, why, and even whether trust is appropriate or warranted.
Adopting a “just war” perspective to assess the impact of AI-enabled decision-support systems on resort-to-force decision making, Renic (Reference Renic2026) cautions that such systems degrade the moral character of the human actors who rely on them. Specifically, these decision-support systems prevent human decision makers from cultivating moral and political wisdom and what he calls a “tragic sensibility” – dispositional qualities that are needed to navigate ethical decision making and “ask the right questions” in the context of resort-to-force deliberations. Taking a more optimistic perspective on the prospect of such “human-machine teaming,” Vold (Reference Vold2026) suggests that AI-enabled decision-support systems might augment rather than degrade the judgment and reasoning of commanders engaged in strategic deliberations. She maintains that potential opportunities lie in non-autonomous AI-enabled systems enhancing strategic reasoning in the war room by limiting and prioritizing possible courses of action, improving the mental modelling and visualization capacities of human actors, and mitigating the negative effects of specifically human vulnerabilities, such as emotional and physical fatigue. Shifting the analysis from the anticipated risks and benefits of such reliance on AI-enabled systems to the perceptions of those senior officers who would constitute the human side of the “human-machine team,” Lushenko (Reference Lushenko2026) asks what shapes military attitudes of trust in AI used for strategic-level decision making. Based on an elite sample drawn from the US military, Lushenko provides the first experimental evidence of how varying features of AI can inform servicemembers’ trust when faced with the prospect of partnering with AI-driven systems in the context of strategic deliberations.
Complication 3 turns to algorithmic opacity and complexity and their potential implications for democratic and international legitimacy and questions of accountability. When it comes to the decision to initiate (or continue fighting) a war, a state is expected to offer – and be able to offer – a compelling justification for that decision, generally one that is consistent with international norms and laws. This expectation is readily apparent, for example, in the international community’s judgments of Russia’s war in Ukraine and Israel’s bombardment of Gaza, and, indeed, in the respective attempts by these states to justify their actions. This expectation is also evident in retrospective evaluations of the United States’ and the United Kingdom’s espoused justifications for the 2003 invasion of Iraq, including the United Kingdom’s own inquiry into, and subsequent reassessment of, its original rationale.Footnote 31 Yet, what if (near-) future decisions to wage war are informed by AI-enabled systems in the ways we have described? Recall that the ML techniques driving the autonomous and non-autonomous systems that, we have suggested, will increasingly infiltrate resort-to-force decision making are known for their opacity and unpredictability. Whether these “black box” systems independently calculate and carry out courses of action that would constitute a resort to force, or play a dominant role in informing what are ostensibly human decisions, the lack of transparency and auditability of their ML-generated outputs becomes profoundly problematic. Simply, a state going to war may be unable to adequately explain or defend this decision – or, indeed, fully understand it. In addition, the AI-enabled decision-making process might seem somewhat beyond the control of the state if this process relies on external actors, interests, and systems. Each of these factors could undermine the perceived legitimacy of the state’s decision to resort to force – vis-à-vis both the international community and its own citizens – and, moreover, create a crisis of accountability if the initiation of armed conflict is condemned as illegal or unjust.
Anticipating just such a problem of accountability within Western democracies if AI disrupts state-level resort-to-force decision-making processes, Logan (Reference Logan2026) observes that bureaucracies within these states produce documents when they deliberate over the initiation of armed conflict (such as records of meetings), which can then be appealed to in the wake of unjust and illegal wars in order to hold relevant decision makers to account. Yet, will it be possible to answer questions of accountability surrounding the decision to go to war when AI is inserted into these deliberations and replaces documented processes with epistemic uncertainty? Logan addresses the resulting “problem of accountability” in a strikingly novel way. Rather than advocating for the transparency of ML processes, she makes the case for an alternative model for establishing accountability in the context of AI-informed decisions, one that draws on the rules of evidence in international criminal law and allows the apportioning of accountability in the absence of both internal decision-making records and epistemic certainty.
Also concerned with challenges of accountability, Osoba (Reference Osoba2026) asks how it is possible to manage the increased complexity that will accompany the integration of AI into resort-to-force decision making. Following an incisive analysis of the new “AI-human hybrid military decision-making organizations” central to these deliberations, he turns to the challenge of governing such bodies. Osoba (Reference Osoba2026, p. 1) emphasizes that “[t]he stability of the international political ecosystem requires responsible state actors to subject their national security operations (including the decision to resort to war) to international norms and ethical mandates” and makes a case for “a minimal AI governance standard framework,” which he argues will provide the societal control of complex AI-augmented decision systems necessary for such compliance (and, we would add, international legitimacy). This can, he maintains, produce an “organisational culture of accountability.” Similarly concerned with the complexity that accompanies the emerging influence of AI and big data on political and military decision making, Hammond-Errey (Reference Hammond-Errey2026) introduces the valuable notion of “architectures of AI” to describe the infrastructure that shapes contemporary security and sovereignty. While acknowledging the imperative, addressed by Logan and Osoba, to subject states to scrutiny for their decisions on the resort to force, she offers a different perspective on how AI-augmented decision making could threaten states’ perceived legitimacy. Bringing the discussion back to the US-based “tech leviathans” discussed in Section 3, she convincingly argues that current AI architectures concentrate power amongst these companies, which opens up the possibility of Big Tech effectively undermining the autonomy of states that rely on AI-enabled systems to make decisions on the resort to force.
Complication 4 addresses how AI-enabled systems interact with, and constitute, formal organisations such as states and military institutions, along with the effects of this interaction and constitution. Such effects include the possibility of AI-enabled systems variously enhancing and eroding the capacities of these institutions, distorting or disrupting strategic decision-making processes, organisational structures, and chains of command, and even exacerbating organisational decision-making pathologies. These effects also encompass changes made to institutional structures to prevent or mitigate risks and disruptions caused by the steady infiltration of AI-enabled systems into the state’s decision-making processes. Importantly, this complication entails a conceptual shift – one that focuses on the organisation as a whole. In other words, while Complication 2 addresses the consequences of the interactions between individual human actors and the AI-enabled systems that they rely on, Complication 4 focuses on the relatively neglected phenomenon of organisation-machine interaction and its consequences.
In their contribution, Erskine and Davis (Reference Erskine and Davis2026) argue that the integration of AI-enabled systems into state-level decision making over whether and when to wage war will engender subtle but significant changes to the state’s deliberative and organisational structures, its culture, and its capacities – and in ways that could undermine its adherence to international norms of restraint. In making this provocative claim, they propose that the gradual proliferation and embeddedness of AI-enabled decision-support systems within the state – what they call the “phenomenon of ‘Borgs in the org’” – will lead to changes that, together, threaten to lessen the state’s capacity for institutional learning. They warn that, as a result, the state’s responsiveness to external social cues and censure will be weakened, thereby making the state less likely to engage with, internalize, and adhere to international constraints regarding the resort to force. Müller et al. (Reference Sienknecht2026) likewise address the way that the integration of AI-enabled systems into state-level deliberations over the resort to force will change existing organisational and decision-making structures. They focus specifically on an often neglected, yet profoundly important, group within this emerging sociotechnical system: “the integrators,” or that component of the larger decision-making structure responsible for linking the otherwise disparate groups of “developers” and “users” in the context of political and military decision making. Providing a valuable conceptualization of the relationships amongst these different groups of actors and AI-driven decision systems, they demonstrate how structures change and are reconstituted by the introduction of AI into new political and military deliberative processes.
While the previous two contributions analyse how bringing AI into decision making on the resort to force will transform the state’s organisational and decision-making structures, the final two articles both prescribe institutional change to preempt anticipated risks that would accompany reliance on such systems. Yee-Kuang Heng (Reference Heng2026) addresses the concern that AI-driven decision-support systems designed to assist human calculations on the resort to force may, through automation bias and deskilling, contribute to the dangerous erosion of human judgment. He thereby touches on concerns addressed within Complication 2, including those raised by Renic (Reference Renic2026). Nevertheless, he is interested specifically in how this risk is compounded by the complexities and pathologies of organisational decision making, bringing his analysis squarely within the current complication. Rather than focusing on improving AI models, he adopts an “institutional approach” to advocate for governments providing training for human decision makers, questioning “groupthink,” and restructuring their institutional settings in order to minimize these risks. Finally, like Heng, Sienknecht (Reference Sienknecht2026) argues that risks posed by the prospect of AI-enabled systems infiltrating resort-to-force decision making should prompt proactive changes to the state’s organisational structure. The risk that Sienknecht confronts is the “sociotechnical responsibility gap” brought on by the inability to attribute decisions made or informed by AI-enabled systems to either the machines themselves or any one individual. If such systems contribute to high-stakes political and military decision making, such as deliberations over whether and when to wage war, she argues that we must establish “proxy responsibility” relations by which designated actors assume responsibility for these otherwise unattributable actions. To make this possible, Sienknecht proposes what she describes as an “institutional response”: an advisory “AI oversight body” that would create the structural conditions for establishing such proxy responsibility relations.
This collection of research articles raises problems and possibilities that accompany what we have argued is the imminent and inevitable (indeed, already nascent) direct influence of AI-enabled systems on one of the most consequential decisions that can be made: the decision to go to war. The questions and corresponding analyses provided in this collection are intended as prompts, provocations, and provisional frameworks for future research on this crucial topic. We hope that the multidisciplinary discussions, debates, and reciprocal learning from which these contributions have emerged continue in responses to them – and in further critical engagement with the prospect of AI-driven systems increasingly informing state-level decisions on the resort to force.
Funding statement
This project was supported by the Australian Government through a grant from the Department of Defence.
Competing interests
The authors declare none.
Toni Erskine is Professor of International Politics in the Coral Bell School of Asia Pacific Affairs at the Australian National University (ANU). She is also Associate Fellow of the Leverhulme Centre for the Future of Intelligence at Cambridge University, Chief Investigator of the “Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making” Research Project, funded by the Australian Government through a grant from the Department of Defence, and recipient of the International Studies Association’s 2024–25 Distinguished Scholar Award in International Ethics.
Steven E. Miller is Director of the International Security Program in the Belfer Center for Science and International Affairs at Harvard University’s Kennedy School of Government. He also serves as Editor-in-Chief of the scholarly journal International Security and co-editor of the International Security Program’s book series, BCSIA Studies in International Security (published by MIT Press). He is a Fellow of the American Academy of Arts and Sciences, where he is a member of (and formerly chaired) the Committee on International Security Studies (CISS). He is also a member of the Council of International Pugwash.