1. Introduction
Among the foundational perspectives on stability and deterrence in a nuclear-armed world, Kenneth Waltz’s optimistic take on nuclear proliferation remains one of the most enduring. Waltz (1990, p. 737) argued that nuclear weapons are the ultimate guarantors of a state’s survival as they induce caution and restraint. In his view, more nuclear weapons could enhance strategic stability between superpowers. Although nothing is guaranteed, Waltz (1979, p. 185) argued that nuclear weapons make wars less likely as they are associated with deterrent strategies that promise less damage than war-fighting strategies. These strategies call for caution and hence reduce the incidence of war. The fear of escalation and the disadvantages of striking first mean that nuclear weapons reverse the logic of conventional wars. Thus, based on “easy calculations of what one country can do to another,” rational actors will avoid armed nuclear conflicts at all costs (Waltz, 1990, p. 734). These assumptions underpinned what some scholars have called the First Nuclear Age (Futter & Zala, 2021, p. 260). Beginning in 1945, it brought forth an era of unprecedented strategic calculus, one dominated by great power competition and deterrence based on the condition of mutually assured destruction. For Waltz and others, the immense destructiveness of nuclear weapons and the fear they induced became cornerstones of strategic stability – understood here as the absence of incentives for nuclear first use and the absence of incentives to engage in nuclear arms racing (Acton, 2013) – shaping decades of nuclear politics and international relations: “The probability of major war among states having nuclear weapons approaches zero” (Waltz, 1988, p. 627).
Following the Cold War, the so-called Second Nuclear Age saw new challenges emerge, including additional threats from non-state actors and the rise of Asian military powers (Bracken, 2000, p. 146). Today, as we are said to be entering a Third Nuclear Age, a new set of pressures threatens to upend the strategic stability assumed by earlier deterrence frameworks. These include, inter alia, multipolarity, emerging technologies, the development of strategic non-nuclear weapons (SNNW) and precarious conditions for advancing arms control and disarmament (Crilley, 2023; Favaro et al., 2022; Futter & Zala, 2021; Zala, 2024). The rapid advancement and potential integration of artificial intelligence (AI) into the nuclear weapons domain are among these contemporary anxieties and challenges.
As AI technologies increasingly permeate military strategies, they introduce complexities and uncertainties that challenge the assumption that human rationality can ensure control and restraint in nuclear decision making (Depp & Scharre, 2024; Horowitz, 2018; Johnson, 2021, 2022, 2023; Kroenig, 2021; Nadibaidze & Miotto, 2023; Parke, 2023; Zala, 2024). The automation of threat detection, acceleration of decision timelines and potential deployment of autonomous weapons systems amplify the risks of miscalculations, misperceptions and inadvertent escalations (Johnson, 2023, p. 73; Price et al., 2018, pp. 101–102). Within this context, the integration of AI into states’ nuclear deterrence architecture, particularly early warning systems; nuclear command, control and communications (NC3); and intelligence, surveillance and reconnaissance (ISR), constitutes a critical yet relatively neglected strand of the discourse on AI’s influence over resort-to-force decision making (Erskine & Miller, 2024, p. 137). AI’s role in shaping state decisions to go to war introduces both significant risks and potential opportunities, which calls for a thorough examination of its impact on deterrence theory and future crisis management (Erskine & Miller, 2024, p. 139). Moreover, a critical question arises: how will the integration of AI affect stability in a world where major nuclear powers do not share the same conceptual foundation for deterrence?
In this article, I retain Waltzian logic to question whether more AI equals more stability, but challenge the notion that technological advancement leads to restraint. I argue that the integration of AI into nuclear decision-making processes risks undermining the rational deterrence logic that has historically underpinned strategic stability. The main contribution of this article is to demonstrate that divergent understandings of deterrence, particularly between Russia and the West, are likely to be amplified rather than resolved by AI-driven systems. When such conceptual gaps are embedded in opaque, rapid and data-driven technologies, the risk of misperception, miscommunication and inadvertent escalation grows. Adopting a realist-informed, but critically reflective, approach and using Russia as a case study, this article shows how conceptual differences in deterrence thinking complicate crisis management and challenge the foundational assumptions of nuclear restraint in the Third Nuclear Age. This article contributes to the ongoing debates on AI and resort-to-force decision making by foregrounding conceptual divergence as a key source of risk and uncertainty. Moreover, it explores how the integration of AI may intensify the security dilemma and fuel arms race dynamics, as states seek to pre-empt, catch up with, or outmatch adversaries.
This article adopts a qualitative, interpretivist methodology grounded in discourse analysis. Russia is selected as a case study because it represents a particularly distinct and influential alternative to Western deterrence logic, rooted in a tradition of coercive signalling, reflexive control, doctrinal ambiguity and early escalation. The article draws on a close reading of official Russian security and military doctrine, statements by political and military leaders and secondary literature on Russian strategic thought and deterrence theory. The analysis is conceptual in nature and does not make empirical claims about the operational integration of AI into Russia’s nuclear decision-making processes. Instead, it traces how underlying deterrence logics may shape the development and use of AI-enabled decision support tools in the nuclear domain.
By focusing on the epistemological foundations of deterrence, the article seeks to illustrate how divergent strategic assumptions may undermine shared understandings in crisis situations, particularly when automated or mediated through AI systems. Johnson (2020b) cautions that any discussion of emerging technologies, such as AI, necessarily involves a degree of speculation, given the limited empirical data on how AI might influence deterrence, escalation and crisis decision making in real-world nuclear contexts (p. 426). While insights from strategic war gaming, nuclear decision-making simulations and expert analysis offer valuable contributions, much of the current discourse remains conceptual. This article reflects that epistemic uncertainty. It does not aim to predict outcomes with certainty, but rather to illuminate underexplored risks stemming from divergent deterrence logics and to encourage more nuanced thinking about the conditions under which AI integration might destabilise resort-to-force decision making.
To advance my argument, I first examine how AI challenges traditional nuclear deterrence frameworks by introducing new risks such as misperception, acceleration of decision timelines, and inadvertent escalation. I also consider its limited stabilising potential. I then turn to Russian deterrence thinking to illustrate how divergent strategic logics, when combined with AI integration, can intensify uncertainty and raise the likelihood of crisis instability. The third section explores how AI is fuelling a new arms race and focuses on how doctrinal asymmetries, the erosion of arms control, and recursive technological competition create a volatile strategic environment and security dilemma. I conclude by stressing the need for renewed transparency, confidence-building measures, and regulatory frameworks to mitigate the risks posed by AI’s integration into nuclear decision making.
2. AI in nuclear deterrence
International Relations scholars have spent decades debating the flaws of rational choice theories and neorealist thinking during the Cold War (see, e.g., Hymans, 2006; Jervis, 1989; Knopf, 2010; Kroenig, 2015; Lebow & Stein, 1989; Sagan, 1996; Tannenwald, 2007). Changes to the geopolitical and technological landscape, as well as the emergence of new security threats and domains, make current deterrence theorising an increasingly complex endeavour (Johnson, 2023, p. 76). Yet the fact remains that no nuclear exchange occurred during the Cold War, and none has occurred since, despite the ongoing war in Ukraine and increasingly hostile relations between Vladimir Putin’s Russia and the North Atlantic Treaty Organization (NATO) states. Nuclear-armed states continue to rely on the principles of deterrence to maintain strategic stability. This reliance suggests that, flawed as it may be, deterrence theory continues to provide a practical – if contested – logic of restraint in nuclear politics, or at the very least, that states behave as though these principles are still effective.
If stable deterrence is the condition to be preserved, the integration of AI into nuclear deterrence architectures poses significant challenges and uncertainties. Zala (2024) notes that placing AI developments within the context of a larger shift towards a Third Nuclear Age can help generate a more nuanced understanding of the primary drivers of stability and instability in the current environment (p. 155). He identifies the loss of human oversight in nuclear and SNNW decision making, as well as machine-informed human decision making, as two areas that contain multiple risks to the stability of deterrence (pp. 156–158). These risks are especially salient given the compressed decision timelines, algorithmic biases, and possible overconfidence in AI-generated recommendations, which may compound misperceptions and increase the likelihood of escalation (Wong et al., 2020, p. 83).
A key concern lies in the historical opacity of nuclear decision making. While we are aware of high-profile incidents such as Able Archer 83 and the 1983 Petrov incident, these represent only a fraction of the potential close calls and decision-making scenarios involving nuclear weapons that may remain classified or undisclosed. Lewis, Williams, Pelopidas and Aghlani (2014) examine some of these historical cases and more recent incidents, arguing that the risks of inadvertent nuclear use are higher than previously thought and calling for vigilance and prudent decision making (p. 30). The secrecy surrounding these events and the absence of systematic data hamper our ability to understand the real drivers of restraint and escalation. AI can only be as useful or accurate as the underlying data, and unknowns are difficult to include in its modelled assumptions (Holmes & Wheeler, 2024, p. 170). Recommendations based on systematically incomplete data about nuclear scenarios might therefore bias decision making, without any accompanying awareness that significant gaps exist. Arguably, allowing AI to partake in these decisions to reduce human error and emotion, while increasing the rational use of (incomplete) data, may set an even more dangerous precedent (Johnson, 2022, p. 359). Ironically, in this case, optimising rationality could leave us in a more irrational and precarious situation.
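To illustrate the data problem in the simplest possible terms, the sketch below is purely hypothetical: it is not drawn from the article or from any real incident data, and the event rates, the 20 per cent reporting rate and the variable names are invented assumptions. It shows how an archive that systematically omits most escalation episodes would lead any model trained on it to understate risk, with nothing in the data itself revealing that the record is censored.

```python
# Minimal, hypothetical illustration of censored training data.
# All quantities are invented for demonstration purposes only.
import numpy as np

rng = np.random.default_rng(0)

n = 100_000
# Simulated crisis "intensity" and a true escalation probability that
# rises with intensity.
intensity = rng.uniform(0, 1, n)
true_p = 0.02 + 0.28 * intensity
escalated = rng.random(n) < true_p

# Suppose only 20% of escalation episodes ever enter the historical
# record (classified or undisclosed), while non-events are always logged.
recorded = np.where(escalated, rng.random(n) < 0.2, True)
archive = escalated[recorded]

print(f"True escalation rate:           {escalated.mean():.3f}")
print(f"Escalation rate in the archive: {archive.mean():.3f}")
# A model fitted to the archive inherits this downward bias in its risk
# estimates, and nothing in the archive flags that the gap exists.
```

The sketch makes only the narrow point of the paragraph above: when the training record is silently incomplete, the resulting recommendations are confidently skewed rather than cautiously uncertain.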
This article focuses primarily on inadvertent escalation, which occurs when one side’s intentional actions are unintentionally perceived as escalatory, often due to misperceptions about an opponent’s likely reaction (Johnson, 2022, pp. 340–341). It is important to distinguish this from accidental and deliberate escalation. Accidental escalation is also unintended but results from events that were not planned or foreseen, such as accidents or technical malfunctions (Morgan et al., 2008, p. 26). Deliberate escalation, by contrast, involves one side intentionally crossing an escalatory threshold and viewing such actions as rational or strategically necessary (Hoffman & Kim, 2023, p. 18). AI could plausibly influence all three types of escalation. For instance, systems trained on inadequate data or applied to inappropriate contexts could inject flawed information into decision-making processes and raise the risk of accidental escalation (Hoffman & Kim, 2023, pp. 18–19). Deliberate escalation may also be shaped by AI-generated recommendations that promote coercive or escalatory actions, particularly under time pressure or uncertainty (Hoffman & Kim, 2023, p. 20). However, this article concentrates on inadvertent escalation, which represents a distinct and pressing risk in an AI-integrated environment. Here, the danger does not stem from technical failure but from systems operating as intended while misinterpreting signals (Johnson, 2022, p. 358).
If AI systems increasingly take on roles traditionally managed by human decision makers, the potential for miscommunication and misinterpretation grows. Reducing human oversight diminishes the capacity for ethical judgement and contextual interpretation, while increasing reliance on “black box” systems that may be vulnerable to cyber intrusion or failure (Johnson, 2023, p. 79). In a potential nuclear crisis, the already tight timescales for deciding whether to launch a nuclear strike will become even more compressed, placing additional pressure on leaders and increasing the risk of miscalculation (Parke, 2023). Such vulnerabilities could be exploited by adversaries and potentially trigger unintended confrontations through AI-driven responses, especially in the absence of robust crisis management mechanisms to de-escalate the situation (Hoffman & Kim, 2023, pp. 16–17).
Against this backdrop, some scholars have argued that AI, if properly developed and constrained, could theoretically stabilise deterrence relationships. For example, Black et al. (2024) argue that AI systems, when designed and deployed responsibly, have the potential to improve decision making by offering faster and more accurate analyses of complex situations, thereby reducing the likelihood of misinterpreting an adversary’s actions (pp. 49–52). Boulanin (2019) similarly suggests that advanced AI models could avoid past technical failures by improving the reliability of threat detection (p. 54). Others point to the potential use of AI in ISR and manoeuvre planning as a means to deter first strikes by improving detection and defensive posturing (Zala, 2024, p. 159). Moreover, Holmes and Wheeler (2024) propose that AI could enhance strategic communication and high-level diplomacy by simulating a broader range of scenarios and helping actors better understand adversaries’ motives and potential red lines (p. 167). In a similar vein, McDonnell et al. (2023) suggest that AI-based simulation and training tools could be used to train political and military leaders and their staff to make decisions in times of war and crisis and to practise navigating difficult choices (p. 34). In theory, this capability could augment human-to-human communication and reduce ambiguity during moments of heightened tension. AI might also assist arms control verification efforts and contribute to improving compliance and transparency (Schörnig, 2022).
However, all these stabilising scenarios rely on ideal conditions such as transparent algorithms, commonly agreed protocols, and mutual trust, which are seldom met in practice. In the context of the Third Nuclear Age, characterised by deep mistrust, multipolarity and the erosion of arms control and non-proliferation norms, AI-enhanced systems are far more likely to become sources of suspicion than instruments of collaboration. Thus, while AI might offer theoretical stabilising functions, the weight of evidence and contextual analysis points towards its destabilising effects. Without robust oversight, interoperable systems or mutual trust, the same characteristics that promise stability – speed, precision and automation – risk becoming liabilities (McDonnell et al., 2023, pp. 36–37). In other words, we return to the Waltzian logic: does more AI equal more stability? Under current conditions, the answer is no.
This risk is further complicated by divergent conceptual understandings of “deterrence.” States do not define or operationalise deterrence in identical ways, and these differing frameworks shape how AI systems are designed and deployed. Johnson (2020a) observes that AI’s impact on escalation and deterrence is influenced less by technical capacity than by how its purpose is perceived (p. 16). In other words, what AI is designed to deter or signal depends on the deterrence logic of the state employing it. This has implications for how its recommendations are interpreted and responded to by adversaries.
Work on alternative deterrence traditions has shown that some states adopt more punitive or coercive models of deterrence, while others lean towards defensive or normative forms (Adamsky, 2018, 2024; Charap, 2020; Chase & Chan, 2016; Veebel, 2021; Ven Bruusgaard, 2016). For example, Veebel (2021) argues that while Western conceptions of deterrence often rely on normative frameworks and economic statecraft (e.g., sanctions), Russia’s approach is more coercive and militarised, reflecting divergent understandings of how threats and responses should be structured. Adamsky (2018) similarly shows how Russian deterrence discourse is rooted in its strategic culture and historical emphasis on pre-emption and escalation control, noting that strategic cultures are not universally shared or understood (pp. 34–35). When these models interact, especially in a high-tech environment, their epistemological foundations and interpretations of deterrence may diverge.
These conceptual differences raise important questions about how AI systems, shaped by distinct deterrence logics, may misread signals and contribute to unintended escalation. AI systems trained and implemented according to different strategic doctrines may not interact in predictable or stabilising ways. The growing literature on the “conceptual multiplicity” of deterrence (see, e.g., Adamsky, 2024; Johnson, 2020b; Ven Bruusgaard, 2024) suggests that AI’s effects must be examined within the interpretive frameworks that guide strategic thinking in each state.
This article contributes to this debate by explicitly foregrounding conceptual divergence as a key source of risk and uncertainty in the AI-nuclear nexus. Rather than treating deterrence as a stable or universal framework, it situates AI integration within a world of epistemological friction, where deterrence is interpreted, operationalised and potentially automated in different ways.
3. Russia’s deterrence logic and the AI challenge
Russian approaches to containing, deterring, and inflicting varying levels of damage on adversaries are commonly grouped under the umbrella of “strategic deterrence” (Charap, 2020). This military-theoretical concept, first formally introduced in the 2010 Military Doctrine, refers to a continuous, multidomain activity designed to influence adversary behaviour in both wartime and peacetime through a combination of military and non-military measures (Akimenko, 2021, p. 2; President of Russia, 2010). The Russian Ministry of Defence’s Military Encyclopaedia defines “strategic deterrence” as “a coordinated system of use-of-force and non-use-of-force measures taken consecutively or simultaneously by one side in relation to another to keep the latter from any military actions that inflict or may inflict damage on the former on a strategic scale” (cited in Akimenko, 2021, p. 3).
The concept has continued to evolve in subsequent doctrinal texts and in Russian strategic analysis and military thought. The 2014 Military Doctrine broadened strategic deterrence to include not only nuclear weapons but also precision-guided conventional weapons, information operations, and threats emerging in new domains such as cyberspace and outer space (Security Council of the Russian Federation, 2014). The 2020 Fundamentals of State Policy on Nuclear Deterrence added further operational detail by clarifying conditions for nuclear use, highlighting the role of early warning and command structures, and reinforcing Russia’s right to respond to non-nuclear threats that endanger state survival (President of Russia, 2020). Most recently, the 2024 Fundamentals reaffirmed the integration of nuclear and non-nuclear means, emphasising uncertainty, situational adaptation, and the role of information confrontation in shaping deterrence credibility (President of Russia, 2024). Reflecting this evolution, Dmitri Trenin, a prominent Russian expert and member of Russia’s Foreign and Defence Policy Council, writes, “Strategic deterrence includes a military component (nuclear and conventional), a spatial dimension (geopolitics, geoeconomics, and other functional domains such as the cyber environment, space, etc.), and a coalition component (cooperation with friendly states). The complexity and systemic nature of strategic deterrence are essential conditions for its effectiveness” (Trenin, 2024).
This expanding doctrinal scope points to a deeper conceptual divergence between Russian and Western understandings of deterrence. The Russian term for strategic deterrence – strategicheskoe sderzhivanie – is more accurately translated as “strategic restraining” or “holding back,” placing emphasis on limiting, restraining, or pre-empting aggressive action by an adversary (Ven Bruusgaard, 2016, p. 8). This emphasis on restraining marks a significant conceptual and etymological departure from the English-language notion of deterrence, which centres on threatening a response to dissuade hostile acts. As Kofman (2024) notes, while Western deterrence emphasises the threat of action, Russian sderzhivanie implies the initiation of proactive or preventive measures. Adamsky (2024) reinforces this interpretation:
The connotation of concentrated effort, proactive endeavor, and preemptive action, which underlies the meaning of Russian sderzhivanie, is more straightforward, embedded, and explicit than in the case of English deterrence, where fear is the central motif; in the latter the threat of the use of force is implicit rather than explicit, almost spelled out, as it is in sderzhivanie. (p. 30)
Russia’s concept of strategic deterrence is significantly broader and more comprehensive than traditional Western understandings of deterrence (Fink, 2023, p. 11; Ven Bruusgaard, 2016, p. 8). It integrates military and non-military, offensive and defensive, nuclear and conventional measures into a holistic framework aimed at shaping adversary decision making. This strategy spans a wide spectrum of actions from diplomatic signalling to the potential use of nuclear weapons and is designed to deter or coerce by presenting credible threats across multiple domains (Kofman et al., 2020, pp. 12–16).
A defining feature of Russia’s strategic deterrence approach is its emphasis on psychological and informational effects, as well as the cultivation of ambiguity around escalation thresholds (Fink, 2017). The updated 2024 Fundamentals document reiterates the integration of non-nuclear and nuclear capabilities into a unified deterrence framework and highlights the importance of maintaining the adversary’s “uncertainty regarding the scale, time and place of retaliatory action” (President of Russia, 2024). The document further underlines the role of conventional precision-strike weapons, information confrontation, and situational adaptation as core components of deterrence strategy. These developments reinforce a deterrence logic that privileges ambiguity and favours proactive escalation management, departing sharply from the Western preference for transparency and mutual vulnerability (Fink, 2017).
Such a framework poses challenges for Western states, which often misread or mischaracterise Russian deterrence behaviour. As Adamsky (2018) observes, “the West engages Moscow with only a vague understanding of the conceptual foundations and perception of Russian strategists” (p. 56). This is compounded by what Wachs (2023) describes as the tendency of Western analysts to “mirror-image” Russian strategic thinking, presuming universality in concepts of security and deterrence that do not align (p. 175). Kofman and Fink (2020) highlight this disconnect by pointing to persistent Western assumptions that Russia exploits a “yield gap” to create escalation dilemmas, despite the absence of such logic in the Russian Military Doctrine. In reality, Russia’s use of lower-yield nuclear options is embedded in an escalation management strategy aimed at preserving flexibility in regional theatres, not at provoking unresolvable dilemmas for Washington. Kofman and Fink (2020) explain: “Russian strategy has not been based on the premise that the United States is hamstrung by an asymmetry of yields,” but rather on the understanding that U.S. interests in such conflicts are limited and geographically distant, factors that constrain its willingness to escalate.
Such doctrinal mismatches are already dangerous in human-to-human dynamics. But I argue that when automated or AI-supported systems begin interpreting adversary signals through mismatched frameworks, the likelihood of misperception and inadvertent escalation increases significantly. AI systems trained on flawed or incomplete assumptions about adversary doctrine may amplify, rather than mitigate, existing tensions.
Russian political and military leaders have repeatedly framed AI not merely as a technological innovation but as a critical enabler of national power and strategic sovereignty. President Putin’s frequently cited 2017 statement that whoever leads in AI will become “the ruler of the world” is emblematic of this mindset (The Economist, 2024). Subsequent official rhetoric has gone further, emphasising AI’s specific role in national defence, deterrence, and information confrontation (Bendett, 2024, pp. 3–4). For example, at a Defence Board Meeting in December 2022, Putin stressed the need to integrate AI technologies across all tiers of military decision making (President of Russia, 2022). The former Minister of Defence Sergey Shoigu also remarked that it was “now necessary to ensure the integration of artificial intelligence technologies into the armaments that will define the future form of the Armed Forces” (cited in Petrov, 2021). Similarly, a group of experts from the Russian Defence Ministry’s Center for Research of Foreign Countries Capabilities stated in 2021:
In any case, to ensure the security of the Russian Federation, it is necessary to provide support for decision making regarding the use of strategic nuclear forces, definitely using AI as a tool for analysing the dynamically changing geopolitical and military situation and leaving the final decision-making power to the relevant officials. (cited in Shakirov, 2023, p. 18)
Such remarks reflect more than just technological enthusiasm; they signal an understanding of AI as a force multiplier within Russia’s broader strategic deterrence posture (Clapp, 2025, p. 3).
This vision of AI as a strategic asset is shaped by long-standing Russian ideas about control, information dominance, and psychological pressure in warfare. As scholars such as Adamsky (2024, p. 68) and Merriam (2023, p. 8) have noted, Russia’s military science embraces the principle of “reflexive control,” a process of altering the information environment to shape an adversary’s decisions by influencing their perceptions, strategic choices, and operational behaviour. To be successful, reflexive control requires a deep understanding of how an opponent thinks, processes information, and makes decisions (Thomas, 2019, pp. 4–11). To Russian strategists, reflexive control is not simply about deception or manipulation but about constructing a reality in which an opponent’s behaviour aligns with Russia’s strategic preferences. While it may resemble the Western logic of coercion, reflexive control operates through the internal logic of the adversary’s own decision-making process rather than through external threats, which highlights a deeper cognitive approach often overlooked in Western frameworks (Adamsky, 2024, p. 69).
Building on this cognitive foundation, Russia’s growing interest in applying AI to its nuclear command, control, and decision-making infrastructure reflects a broader effort to increase the speed, accuracy, and resilience of deterrence operations under crisis conditions. As Bendett (2024) notes, despite the classified nature of much of this work, publicly available information suggests that the Russian military is actively pursuing AI applications across a range of nuclear-related domains – from intelligence and surveillance to command support and battle damage assessment (p. 8). These developments reflect an ambition to leverage AI not only for enhanced situational awareness but also for reducing decision timelines, automating key data processing tasks, and supporting escalation management (Stokes et al., 2025, pp. 11–12). Yet this ambition also introduces significant risks. When adversaries lack a shared understanding of the frameworks guiding each other’s use of AI, particularly in nuclear contexts, the potential for misreading intentions grows. Rather than stabilising deterrence, AI-enabled reflexive control may increase the likelihood of inadvertent escalation through conceptual misalignment and interpretive asymmetry.
These risks are especially pressing at the command level, where AI is increasingly envisioned as a support mechanism for real-time decision making. According to Bendett (2024), Russian military planners view AI as a means of synthesising political-military information rapidly during fast-evolving situations at tactical, operational, and strategic levels (p. 6). This may involve modelling crisis trajectories, visualising force postures, and generating decision options for senior leaders. AI applications are also reportedly being explored for integrating diverse data streams, including satellite imagery, sensor feeds, and open-source intelligence, into a unified operational picture (Boulanin et al., 2020, pp. 49–51). Such capabilities are seen as crucial for shortening decision timelines and improving the responsiveness of command structures in dynamic or ambiguous scenarios. These tools could become integral to the functioning of Russia’s National Defence Coordination Centre (NDCC), which is already tasked with coordinating national security operations across ministries and military branches (Bendett, 2024, p. 5).
Yet, the very qualities that make AI appealing for managing fast-moving crises – speed, automation, and the reduction of complexity – also introduce serious concerns regarding inadvertent escalation. These same features may narrow the window for reflection and increase the likelihood of misinterpreting adversary intentions (Johnson, 2022, pp. 349–351). In a high-stress, time-compressed environment, a machine-generated recommendation could be seen as authoritative, especially if it aligns with existing biases or appears more objective than human judgement (McDonnell et al., 2023, pp. 23–24). This is especially problematic given the conceptual mismatches already identified between Russian and Western notions of deterrence. AI systems that process input according to one framework may fundamentally misinterpret the signals or posture changes made by a rival operating from a different logic.
These conceptual discrepancies become even more destabilising when filtered through the lens of AI-enabled decision support. In an AI-driven strategic environment, the risk of misperception is magnified. As Wong et al. (2020) note, AI systems can exacerbate the difficulty of interpreting adversary intent, particularly in ambiguous or rapidly changing circumstances (pp. 66–67). The “black box” nature of many machine learning systems, meaning their inability to clearly explain how conclusions are reached, further complicates matters. According to Bellaby (2024), AI’s internal reasoning is often opaque even to its operators, raising serious concerns about accountability and trust, especially in nuclear decision-making contexts (p. 2536).
The unpredictability of AI-generated behaviour in high-stakes scenarios adds another layer of risk. In a study of AI agents in war game simulations, Rivera et al. (2024) observed a recurrent tendency towards escalation. In some cases, the AI adopted arms-racing behaviours or even initiated nuclear weapon use, which illustrates how autonomous systems might misinterpret uncertainty or strategic ambiguity as a rationale for pre-emption (p. 836). While such experiments are limited and stylised, they highlight the hazards of entrusting escalation-sensitive decisions to systems that lack contextual understanding and operate under hard-coded assumptions.
These dynamics undermine the theoretical foundations of deterrence, which depend not only on rationality, signalling clarity and mutual understanding but also on a deep grasp of the other side’s interests, perceptions and strategic priorities (Johnson, 2021, p. 424). AI introduces volatility into a system meant to produce caution and restraint. The integration of AI into nuclear planning, early warning, ISR and NC3 architectures across nuclear-armed states intensifies this problem (Johnson, 2021, p. 431). Several experts note that the modernisation efforts underway in Russia, China and the United States increasingly prioritise AI capabilities in ways that extend across their full deterrence architectures (Boulanin et al., 2020, ch. 3; Chernavskikh, 2024, pp. 4–5). Each of these developments, however rational in isolation, risks triggering reciprocal actions, fuelling a new kind of arms race not solely about warheads or delivery systems, but about cognitive and informational dominance. In this sense, integrating AI into nuclear decision making not only complicates deterrence but also accelerates a broader technological competition that heightens instability. This evolving landscape bears the distinctive hallmarks of the Third Nuclear Age: a multipolar order, disruptive technologies and increasingly unstable feedback loops between perception and action (Zala, 2019, p. 42). Today’s nuclear environment is marked by sharp doctrinal and informational asymmetries between global superpowers. This section has shown that when AI is layered onto these conceptual mismatches, the danger is not only technical malfunction or miscommunication, but the potential for misperception and inadvertent escalation. These dynamics are already intensifying a competitive push among major powers to develop and integrate AI across their nuclear domains, setting the stage for a new and potentially more dangerous arms race.
4. Is AI fuelling a new arms race?
Major powers such as the United States, Russia and China are heavily investing in AI research and development and seek to outpace each other in creating advanced AI-driven systems for surveillance, threat detection, cyber operations, autonomous weapons and decision support in nuclear strategies (Bendett, 2024, p. 2; McDonnell et al., 2023, pp. 8–13). This intensifying technological competition increasingly resembles an arms race, where fear of relative inferiority pushes states to develop and deploy emerging capabilities pre-emptively. As Andersen (2023) notes: “Any country that inserts AI into its command and control will motivate others to follow suit, if only to maintain a credible deterrent” (p. 15).
Russia’s posture is especially illustrative of this dynamic. While Russian political and military leaders often claim that AI systems will remain under human control, they also frame Western developments in AI and autonomous military systems as direct threats to Russian security and strategic stability and emphasise the need to catch up (Nadibaidze & Miotto, 2023, pp. 55–56). Bendett (2024) notes that Russia already fears lagging behind the United States and China in developing AI-enabled warfare (p. 3). In 2021, Putin called for the development of AI-driven decision support systems in the military sphere and asserted that success in decision making directly depends on speed and accuracy: “it is necessary to develop decision support systems for commanders at all levels, especially at the tactical level, [and] to integrate artificial intelligence technologies into these systems” (Ria Novosti, 2021). In February 2024, he approved a new national strategy for developing AI until 2030 (TASS, 2024). The new document contains 40 additional pages compared to the previous 2019 version. Furthermore, Deputy Minister of Defence Ruslan Tsalikov openly stated that Russia already has sufficient potential to become a global leader in the development and use of AI technologies (Zvezda News, 2021). While the reality of such proclamations can be questioned, the strategic mentality behind them is significant and likely to intensify the emerging arms race.
I have written elsewhere about the dangers of the Russian “catch up and overtake” mentality prevalent during the Cold War nuclear arms race (Zatsepina, 2025, pp. 12–13). During the Cold War, it enabled and legitimised large-scale Soviet industrial and military build-up along with territorial expansion and vertical nuclear proliferation, with Soviet leaders frequently highlighting the necessity of competing with and ultimately overtaking the United States. As Bendett (2024) notes, this Cold War legacy persists in current Russian rhetoric, which portrays AI development not only as a matter of technological progress but as a strategic imperative to avoid falling behind adversaries (p. 3). Similarly, Kroenig (2024) describes Russia as a revisionist power that places nuclear weapons at the centre of its security doctrine and views limited nuclear use, including first use, as a viable strategy to deter or coerce NATO. In this context, as argued earlier, AI is not a neutral tool but a force multiplier that may be shaped by, and in turn may reinforce, destabilising doctrines. In addition, the “catch up and overtake” mentality reinforces technological insecurity, which in turn fuels escalation. This is the classic security dilemma, where actions taken for defensive reasons, such as developing AI to improve command and control, are perceived as an offensive threat by others.
This is consistent with what Buchanan (2017) calls the “cybersecurity dilemma”: when states introduce new capabilities for defensive purposes, others perceive them as offensive threats and respond in kind, heightening mutual suspicion (pp. 189–190). In the AI-supported nuclear domain, this dilemma is even more acute. As discussed by Johnson (2022, pp. 350–352), because AI systems are opaque, fast-moving and difficult to verify or constrain, adversaries may assume the worst about their function (e.g., pre-emptive targeting, decision acceleration) or deployment (e.g., integration into NC3 systems). The recursive nature of these security dynamics, combined with the Clausewitzian notion of the “fog of war” and offensively oriented military doctrine, can increase the risk of crises and act as a catalyst for inadvertent escalation (Johnson, 2022, p. 343). Here, the Russian case underscores the central argument of this article: divergent deterrence logics, when combined with competitive technological development, amplify misperception risks.
Despite public commitments to human control, concerns remain over how AI is being integrated into nuclear decision-making processes. In May 2024, U.S. State Department arms control official Paul Dean stated that the United States would never defer a decision to use nuclear weapons to AI and urged Russia and China to make similar proclamations (Hart, 2024). Although there appears to be a consensus in Russian military and political circles that humans should retain full control, debates persist regarding the possibility of automating components of early warning and NC3 systems (Shakirov, 2023, p. 30). Such proclamations may appear reassuring, yet they remain insufficient. The insistence on human oversight does not eliminate the risks posed by AI’s influence in the nuclear domain. According to Saltini (2023), while all five nuclear-armed P5 states stress the importance of human control, they differ significantly in how they operationalise AI’s permissible role in early warning, ISR and decision support functions (pp. 20–24). In addition, as noted previously, AI can compress decision timelines, accelerate crisis dynamics and inject cognitive biases masked as rational optimisation (Hoffman & Kim, 2023, pp. 16–21). This undermines the conditions for deliberate, context-sensitive judgement.
In high-stakes scenarios, reliance on AI to interpret complex data or generate recommendations could inadvertently escalate conflicts, especially if adversaries misinterpret the intent behind actions informed by AI (Wong et al., 2020, p. 60). Andersen (2023, p. 15) similarly warns that AI with no nuclear-weapons authority could still “pursue a gambit that inadvertently escalates a conflict so far and so fast that a panicked nuclear launch follows.”
This emerging arms race is further complicated by the erosion of arms control agreements – one of the defining characteristics of the Third Nuclear Age. With Russia’s suspension of its participation in the New START Treaty and no successor treaty in sight, we are entering a period without any formal constraints on nuclear arsenals among the major powers for the first time since the early 1970s (Kroenig, 2024). Meanwhile, China’s rapid nuclear expansion and growing interest in AI-enabled capabilities introduce a third major actor into the strategic equation, making arms racing more multipolar and less predictable (Johnson, 2022, pp. 358–359).
Even if AI is not granted launch authority, its use in nuclear decision making may introduce escalatory risks. For instance, AI systems trained on incomplete or biased historical data may recommend modernisation or expansion of nuclear arsenals by misinterpreting restraint or treaty compliance as strategic weakness (Black et al., 2024, p. 49). This is especially problematic when AI absorbs Cold War–era strategic assumptions, such as the primacy of strategic parity or the logic of pre-emption, as part of its analytical baseline. In high-stakes scenarios, such systems may generate recommendations that exacerbate escalation pressures.
The traditional deterrence logic hinges on the assumption that decisions are made by rational human actors capable of calculating risks and weighing consequences. Waltz’s “more may be better” argument posits that the spread of nuclear capabilities among rational actors induces caution and restraint. But if AI systems trained on divergent doctrinal frameworks shape threat perceptions and operational decisions, then deterrence logic itself may become unstable. Under these conditions, more AI does not equal more stability. Instead, it risks accelerating us into a more dangerous iteration of the arms race. In this evolving context, deterrence can no longer be assumed as a rational or universal framework.
5. Conclusion
AI introduces unprecedented speed, opacity and bias into strategic decision making, which complicates the traditional assumptions that underpin nuclear deterrence. As AI becomes integrated into nuclear decision-making processes, the “easy calculations” Waltz once identified – those presumed to guide rational actors away from nuclear use – are increasingly muddled by automated processes that may obscure human judgement and reduce the space for reflection. In an environment already marked by doctrinal asymmetries and mutual mistrust, AI risks becoming not a stabilising force, but an accelerant of instability.
This article has focused on the overlooked but urgent problem of inadvertent escalation stemming from conceptual divergence in deterrence thinking, specifically, how Russian approaches to strategic deterrence, grounded in distinct linguistic, doctrinal and cognitive traditions, may interact problematically with AI-supported systems. In doing so, it contributes to the broader challenge outlined by Erskine and Miller (2024): understanding how AI is reshaping resort-to-force decision making in an era of uncertainty. These systems may misinterpret adversary actions, reinforce escalation biases and generate false confidence in AI-derived recommendations, particularly in time-compressed or ambiguous crisis settings. The risk is not merely technical failure, but deep misperception, compounded by the speed and “black box” logic of AI.
This challenge is contributing to emerging arms race dynamics, as major powers invest in AI capabilities to avoid strategic disadvantage. While states may claim defensive intent, their AI developments may be interpreted by rivals through pre-existing strategic assumptions and suspicions. As this article has shown, divergent deterrence logics can complicate these interpretations, heightening the potential for mistrust and misperception. In the Russian case, evolving doctrine, AI rhetoric and an emphasis on information confrontation illustrate how military-technological advances are filtered through specific strategic worldviews, reinforcing the need for greater attention to how conceptual differences shape the risks associated with AI integration.
In light of these risks, there is a pressing need for norm-building, transparency and confidence-building measures, and rigorous safeguards to ensure stability in an AI-enhanced nuclear order. Establishing clear limits on AI’s role in nuclear decision making, promoting dialogue on the implications of AI in nuclear strategy and reaffirming the primacy of human judgement are essential steps to prevent the escalation of an AI-driven arms race and the risk of an accidental use of nuclear weapons (Boulanin et al., 2020, pp. 140–143). Encouragingly, the November 2024 agreement between Presidents Joe Biden and Xi Jinping, which affirmed that only humans, and not AI, should control nuclear weapons, offers a rare and necessary moment of alignment (Reuters, 2024). While similar progress is unlikely with Russia in the near term, especially amid its ongoing invasion of Ukraine, bilateral initiatives of this kind may offer more immediate and pragmatic avenues for setting red lines and establishing AI-related nuclear norms between rivals. Continuous monitoring and adaptation of these AI systems are crucial to address emerging threats as well as conceptual, geopolitical and technological shifts, ensuring they evolve alongside the changing strategic environment.
Furthermore, as Geist (2016) argues, “major military powers will have to strike difficult compromises to forgo some of the warfighting potential of artificial intelligence in exchange for mutual security” (p. 320). This may include prohibiting fully autonomous systems from influencing nuclear launch decisions and developing verification regimes analogous to arms control for AI-enabled capabilities. Without such compromises, the rationality at the heart of deterrence may erode and leave us with a more volatile, mistrustful and unpredictable global nuclear landscape.
In this unfolding Third Nuclear Age, where AI collides with divergent deterrence logics, stability can no longer be taken for granted. These technologies are not neutral tools but amplifiers of strategic assumptions, and when those assumptions are incompatible, the risks of misperception and escalation grow. Without urgent attention to these dynamics, we risk waltzing into uncertainty, as AI reshapes the foundations of nuclear deterrence.
Competing interests
The author declares none.
Dr Luba Zatsepina is a Senior Lecturer in International Relations and Politics at Liverpool John Moores University. She primarily teaches courses on International Relations theory, military history, global security, and Strategic Studies. Previously, she held a lecturing position at the University of Edinburgh and a research position with the Proliferation and Nuclear Policy team at the Royal United Services Institute (RUSI). Luba’s research interests centre on nuclear politics, particularly in the UK and the Soviet Union/Russia. Her work follows two main strands. The first examines Soviet nuclear weapons policy during the Cold War, with an emphasis on the discursive construction of nuclear identity. The second explores the role of Artificial Intelligence (AI) in nuclear command and control and its implications for deterrence theory.