Inadvertent escalation in the age of intelligence machines: A new model for nuclear risk in the digital age

Abstract

Will AI-enabled capabilities increase inadvertent escalation risk? This article revisits Cold War-era thinking about inadvertent escalation to consider how Artificial Intelligence (AI) technology (especially AI augmentation of advanced conventional weapons), through various mechanisms and pathways, could affect inadvertent escalation risk between nuclear-armed adversaries during a conventional crisis or conflict. How might AI be incorporated into nuclear and conventional operations in ways that affect escalation risk? It unpacks the psychological and cognitive features of escalation theorising (the security dilemma, the ‘fog of war’, and military doctrine and strategy) to examine whether and how the characteristics of AI technology, against the backdrop of a broader political-societal dynamic of the digital information ecosystem, might increase inadvertent escalation risk. Are existing notions of inadvertent escalation still relevant in the digital age? The article speaks to the broader scholarship in International Relations – notably ‘bargaining theories of war’ – that argues that the impact of technology on the cause of war occurs through its political effects, rather than tactical or operational battlefield alterations. In this way, it addresses a gap in the literature about the strategic and theoretical implications of the AI-nuclear dilemma.


Introduction
How might AI-enabled capabilities increase inadvertent escalation risk? This article revisits Cold War-era thinking about inadvertent escalation to consider how artificial intelligence (AI) technology 1 (especially AI augmentation of advanced conventional counterforce weapons), through different mechanisms and pathways, might influence inadvertent escalation risk between nuclear-armed adversaries during a conventional crisis or conflict. Are existing notions of inadvertent escalation still relevant in the digital age? We are now in an era of rapid disruptive technological change, especially in AI technology. 2 AI technology is already being infused into military machines, and global armed forces are well advanced in their planning, research and development, and, in some cases, deployment of AI-enabled capabilities. 3 Therefore, the embryonic journey to reorient military forces to prepare for the future digitised battlefield is no longer merely speculation or science fiction.
While much of the recent discussion has focused on the specific technical issues and uncertainties involved as militaries develop and diffuse AI applications at the tactical and operational levels of war, the strategic and theoretical treatment of these developments (or the 'AI-nuclear strategic nexus') has received far less attention. 4 This article addresses this gap. It examines the psychological and cognitive features of escalation theorising to consider whether and how AI technology 'characteristics', 5 contextualised with the broader political dynamics associated with today's digital information ecosystem, may increase inadvertent escalation risk. It explains how AI technology could be incorporated into nuclear and conventional operations in ways that affect inadvertent escalation risks during a crisis or subnuclear conflict in strategically competitive dyads – US-China, India-Pakistan, and US-Russia. How might AI be incorporated into nuclear and conventional operations in ways that affect escalation risk?
The article speaks to the broader scholarship in International Relations – notably 'bargaining theories of war' (shifts in the balance of power, uncertainty, asymmetric information, and commitment problems), 6 deterrence theorising, 7 and political psychology 8 – that argues that the impact of technology on the causes of war occurs through its political and psychological effects, rather than through tactical or operational battlefield shifts caused by technological innovation. Specifically, what matter are the political consequences that flow from changes to the balance of power: the redistribution of resources and the strategic advantage that beneficiaries and their rivals perceive in utilising a particular capability. 9 A new asymmetric capability, doctrine, or strategy that decreases (or increases) the perceived cost, risk, and lethality of warfare should, ceteris paribus, affect the observable mechanisms of conflict such as escalation only to the extent that it changes actors' perceptions about how adversaries might perform in battle. Because of the centrality of information (that is, about capabilities, interests, and intentions), the critical factor is the degree to which a particular technology disproportionately affects states' perceptions of the balance of power. 10

The remainder of this article is organised as follows. First, the article offers a conceptual overview of escalation theorising. It defines the various terms, analogies, mechanisms, and metaphors associated with escalation, a concept that is at its core fundamentally psychological and perceptual. Second, it applies Barry Posen's analytical framework of inadvertent escalation to examine the effects of AI technology on the causes of inadvertent escalation – that is, the 'security dilemma', the Clausewitzian notion of the 'fog of war', and offensive military strategy and doctrine. 11 This section revisits this model to conceptualise the psychological underpinnings of the novel ways AI technology and the emerging digital information ecosystem may increase inadvertent escalation risk. Next, the article considers how state or non-state actors might use AI technology to conduct asymmetric mis/disinformation operations (that is, spreading information that is inaccurate or misleading) in ways that might increase inadvertent escalation risk. 12 Finally, the article highlights the critical features of inadvertent escalation risk in the emerging AI-nuclear strategic nexus, concludes with the policy implications of that nexus, and suggests possible ways to mitigate inadvertent escalation risk and improve strategic stability.

Conceptualising inadvertent escalation: Escalation ladders, dominance, and other metaphors
The concept of escalation is at its core a fundamentally psychological and perceptual one. Like other related concepts such as deterrence and strategic stability, escalation relies upon the actor's unique understanding of context, motives, and intentions – especially in the use of capabilities. 13 How actors resolve the complex psychological variables associated with the cause, means, and effects of a military attack (both kinetic and non-kinetic) remains a perplexing and invariably elusive endeavour. 14 Furthermore, deterring escalation is generally achieved as a result of fear and uncertainty (or 'fear of eruption') about how an adversary might assess capabilities and threats, and respond (or overreact) to a situation, rather than the perceived costs or military advantages of escalation per se. 15 How might uncertainty about digital vulnerabilities affect inadvertent escalation dynamics?
Escalation theorising came into prominence during the Cold War era with the development of nuclear weapons, particularly the need to conceptualise and control conflict below the level of all-out total war. Nuclear theories of escalation continue to provide the theoretical basis for escalatory strategies and undergird debates about nuclear deterrence, 16 strategic planning, and how a conventional skirmish could become a nuclear war. On escalation, Herman Kahn's seminal work conceptualises a 44-rung escalation ladder metaphor, which moves from low-scale violence to localised nuclear war (or countervalue warfare) to conventional and nuclear attacks against civilian populations (or strategic countervalue warfare). 17 Kahn's 'escalation ladder' metaphor hinges on the idea of psychological obstacles, thresholds, or stages of the escalation process that impose a threshold (or firebreak) before the next rung or step up the ladder in ascending order of intensity. 18 Escalation theory's emphasis on the importance of firebreaks between the rungs underscored the qualitative, psychological (both rational and irrational), and normative differences (or 'normative stigma' and taboo) between the nuclear and non-nuclear domains. 19 A seminal study by the RAND Corporation defined escalation as 'an increase in the intensity or scope of conflict that crosses a threshold(s) considered significant by one or more of the participants'. 20 An 'unintentional' increase in the intensity or scope of a situation can be inadvertent, catalytic, or accidental – encompassing incorrect or unauthorised usage (see Figure 1). 21 Intentional escalators knowingly take actions that cross thresholds (or firebreaks) for strategic gain, to send a signal of intent, to obtain information about an adversary (that is, resolve, credibility, commitment, or risk acceptance), or to avert military defeat (that is, through pre-emption, a 'bolt-from-the-blue' attack, or grey zone tactics). 22 In contrast, inadvertent escalation occurs when an actor crosses a threshold that it considers benign, but the other side considers significant 23 – for example, when an actor sends a signal to an adversary that it does not intend to cross a threshold but is perceived to do so by the other. 24

Escalation, in a broader sense, can be both violent (kinetic) and non-violent (non-kinetic) in nature and can occur during a military exchange ('vertical escalation') or represent an expansion in the scope and range of a conflict ('horizontal escalation'). By contrast, 'political escalation' refers to non-military changes in the scope or intensity of a situation (that is, rhetoric, an articulation of expansive objectives, or changes to the accepted rules of engagement). These distinctions are not, however, binary or mutually exclusive. An escalation mechanism that leads from a crisis or conflict to its outcome can involve more than one of these categories. For example, a 'false flag' cyber operation 25 by a third-party actor targeting a state's nuclear command, control, and communication (NC3) systems might accidentally set in train escalatory mechanisms because the victim perceives the attack as a precursor to a pre-emptive strike by an adversary. 26 In this example, the actor accidentally breaches another's psychological (real or illusory) threshold, 27 triggering counter-escalation dynamics in response to misinformation, fear, or misperception – also known as 'catalytic escalation'. 28 Moreover, within the broader digital information ecosystem associated with the 'Third Nuclear Age', 29 the deliberate use of nuclear weapons that originates from a false, manipulated, or distorted assessment of a situation (for example, in response to an early warning system false alarm) can quickly muddy intentionality lines. 30 In short, the binary distinction between deliberate and inadvertent use of nuclear weapons is inherently problematic. 31 Escalation can, therefore, be both a strategic bargaining tool (that is, for deterrence and coercion) and a risk to be controlled and potentially mitigated. 32 Thus, some actions are interpreted as escalatory by almost all actors (for example, the use of nuclear weapons in response to a low-level conventional conflict), while others are considerably less clear-cut – for instance, a cyber espionage operation against a state's dual-use command and control systems.
Consequently, escalation situations typically involve 'competition in risk-taking' and resolve. 34 Either side can intensify a situation provided the other side does not match that escalation in kind – if this escalation were not matched and victory achieved, the 'cost of the increased effort would be low in relation to the benefits of victory'. 35 As Bernard Brodie noted in 1965, 'since the beginning of the nuclear era there has been in the minds of men a strong tendency to distinguish between nuclear and non-nuclear weapons combined with a fear of and aversion to the former', and this distinction has tended 'to grow stronger with time rather than weaken'. 36 To be sure, recent scholarship questioning the validity of the public's taboo on the use of nuclear weapons (especially as tools to fight terrorism and nuclear proliferation), and the policy debates about the use of tactical nuclear weapons that surrounded the Trump administration's Nuclear Posture Review, demonstrate the prophetic nature of Brodie's 'firebreak theory'. 37 Besides, the 'fear' and 'aversion' associated with nuclear weapons have been used in studies to explain the role emotion and cognition play in misperceptions about weapons' effectiveness. Misperceptions can create an 'emotional springboard' for the process of reasoning that can influence nuclear non-use, deterrence, and nuclear taboos, which in turn informs policymaking. 38

The escalation ladder metaphor, like any theoretical framework, has limitations. Actors would not necessarily move sequentially and inexorably from the lower rungs ('subcrisis manoeuvring') to the higher rungs ('spasm or insensate war') – that is, rungs can be skipped, and movement can go down as well as up.
Instead, there are many pathways and mechanisms – for instance, conventional vs nuclear, tactical vs strategic, or counterforce vs countervalue – between low-intensity conflict and all-out nuclear confrontation. Besides, states can be at different rungs or thresholds along the 'relatively continuous' pathways to war. 39 Despite its limitations, however, Kahn's 'escalation ladder' is a useful metaphorical framework for reflecting on the available options (for example, a show of force, reciprocal reprisals, costly signalling, and pre-emptive attacks), the progression of escalation intensity, and scenarios in a competitive nuclear-armed dyad. This article argues that the introduction of AI is creating new ways (or 'rungs') and potential shortcuts up (and down) the ladder, which might create new mechanisms for a state to perceive (or misperceive) others to be on a different rung, thus making some 'rungs' more (or less) fluid or malleable.
Strategic rivals require a balanced understanding of how the other views the escalation hierarchy. During a crisis or conflict, actors need continuous feedback about an adversary's intentions, where it views itself on the escalation ladder, and how shifts in the scope or intensity of a situation (that is, kinetic, non-kinetic, or rhetorical) may be perceived. 40 Because of the inherently subjective nature of escalation, actions perceived as escalatory by one state may not be understood as such by others. 41 What characteristics of AI technology may create new rungs on the escalation ladder that increase the inadvertent risk of escalating a conventional conflict to nuclear war?
An important feature of escalation theory is the notion of 'escalation dominance': when a nuclear power forces an adversary to back down because the cost and risk associated with further escalation outweigh the perceived benefits. 42 The state that possesses a position of escalation dominance, ceteris paribus, has unique tactical advantages on a particular rung of the escalation ladder. Moreover, a state that has the least to lose from a situation escalating, or fears escalation the least (that is, it is not the one doing the escalating), will axiomatically achieve an element of escalation dominance in a situation. 43 Escalation dominance is thus a function of several variables: (1) where two states are positioned on the escalation ladder; (2) an assessment of their respective capabilities (both offensive and defensive) on a particular rung; (3) each side's perception of the probability and likely outcome of moving to a different rung; and (4) each side's perception of the other's ability to effectuate this shift. 44 Furthermore, both sides' fear of and aversion to intensifying a situation is also an important psychological variable of escalation dominance.
A new model of inadvertent escalation in the digital age

In his seminal work on inadvertent escalation, Barry Posen identifies the major causes of inadvertent escalation as the 'security dilemma', the Clausewitzian notion of the 'fog of war', and offensively orientated military strategy and doctrine. 45 This section applies Posen's framework to examine the effects of AI technology on the causes of inadvertent escalation. In the light of recent technological change, it revisits Posen's analytical framework and the psychological underpinnings of security dilemma theorising (misperception, cognitive bias, and heuristics) to consider how and why the novel characteristics of AI technology and the emerging digital information ecosystem may destabilise 'crisis stability' and increase inadvertent escalation risk. 46 'Crisis stability' refers to the presumption that control can be maintained during a crisis or conflict to ensure that nuclear weapons are not used – that is, a situation where neither side has an incentive to strike first or pre-emptively. Cold War-era debates on the concept centred on how specific capabilities, postures, and military doctrine could escalate (either intentionally or inadvertently) to nuclear crisis or war.

The security dilemma concept

According to 'security dilemma' 47 theorising, the relative ease of carrying out and defending against military attacks (the 'offence-defence balance'), and the ambiguity about whether a weapon is offensive or defensive ('offence-defence distinguishability'), can cause crisis instability and deterrence failure because these characteristics create fear and uncertainty about an adversary's intentions – that is, whether actors harbour benign (non-aggressive/defensive) or malign (aggressive/offensive) intent towards the other side. 48 In his seminal paper on the topic, Robert Jervis defines the security dilemma as the 'unintended and undesired consequences of actions meant to be defensive … many of the means by which a state tries to increase its security decrease the security of others' – that is, one state's gain in security can inadvertently undermine the security of others. 49 Actors tend to perceive the accumulation of offensive capabilities (and attendant offensive strategies) by others as threatening, assume the worst, and respond with countermeasures – for example, arms racing, crisis mobilisation, raising the alert status, and pre-emptive war. 50 According to Jervis, it is 'the fear of being exploited' that 'most strongly drives the security dilemma'. 51 As we have noted, the fear (both rational and irrational) of conventional skirmishes crossing the nuclear Rubicon can be found in nuclear policymakers' statements as far back as the 1950s. 52 Therefore, security dilemma logic can be used to consider both peacetime spirals of political mistrust and shifts in the military balance, crisis stability dynamics, and escalation mechanisms once military conflict begins.
The security dilemma is an inherently inadvertent phenomenon; that is, weapons indistinguishability (offence vs defence) allows states to inadvertently (for rational political or military reasons) threaten others, which can spark spirals of mutual distrust, strategic competition, and military action. 53 There are several ways in which the security dilemma can act as a catalyst for inadvertent escalation that are likely to be compounded in the digital age. 54 First, while nuclear-armed states have a shared interest in avoiding nuclear war, they also place a high value on their nuclear forces – as both vital security assets and symbols of national prestige and status. 55 As a corollary, conventional weapons – devised by a nuclear power for defensive purposes or counterforce missions – may nonetheless be viewed by an adversary as an offensive threat to its nuclear survivability (discussed below). 56 Second, the defensive response of a state (especially in situations of military asymmetry) may be misperceived as unprovoked malign intent, and not as a reaction to the initiator's behaviour, prompting action and reaction spirals of escalation. Third, and related, the state of heightened tension and the compressed decision-making pressures during a conventional conflict would radically increase the speed at which action and reaction spirals of escalation unfold. In the digital age, these dynamics would be further compounded, reducing the options and time for de-escalation and increasing the risks of horizontal (the scope of war) and vertical (the intensity of war) inadvertent escalation. 57 Security dilemma theorising also ties in with the concept of the 'capability/vulnerability paradox' in International Relations: 58 one state's pursuit of a new resource to compete against other states introduces a vulnerability that is perceived as threatening.
This paradox suggests that when a state's military capabilities are dependent on a particular resource (that is, one that may be exploited or dominated by an adversary), both sides have incentives for pre-emptive first strikes – the resource vulnerability, and not the new capability per se, generates these destabilising outcomes. 60 Much like the security dilemma, therefore, the potential impact of the 'capability/vulnerability paradox' has increased in the context of the technological advancements associated with the 'information revolution' in military affairs (that is, the centralisation of information and dependence on digital information to conduct modern warfare), which now includes artificial intelligence. 61 As Pentagon insider Richard Danzig notes, 'digital technologies … are a security paradox: even as they grant unprecedented powers, they also make users less secure'. 62 In this sense, as a cause of inadvertent escalation, the security dilemma connects to the broader strategic and security literature on misperception, emotions, and cognition; various studies evince a strong qualitative connection between human psychology and war. 63 How might the confusion and uncertainty of war increase inadvertent risk?
Clausewitzian 'fog of war'

Inadvertent escalation risk can also be caused by the confusion and uncertainty (or 'fog of war') associated with gathering, analysing, and disseminating relevant information about a crisis or conflict – which has important implications for the management, control, and termination of war. 64 Traditional nuclear deterrence, crisis stability, and strategic stability theorising work off the questionable premise that actors are rational and presume that they are therefore 'able to understand their environment and coordinate policy instruments in ways that cut through the "fog of war"'. 65 In reality, through misperception, miscalculation, accidents, and failures of communication, 'events can readily escape control', and although these escalation dynamics are often unintended, potentially catastrophic events can result. 66 With the advent of the speed, uncertainty, complexity, and cognitive strains (see below) associated with conducting military operations on the digitised battlefield, the prospect that decision-making by 'dead reckoning' – dependent on the experience and sound judgement of defence planners when information is scarce and automation bias proclivities loom large – can prevent command and control failures caused by the 'fog' seems fanciful.
The confusion and uncertainty associated with the 'fog of war' can increase inadvertent risk in three ways: (1) it can complicate the task of managing and controlling military campaigns at a tactical level (or situational battlefield awareness); (2) it can further compound the problem of offence-defence distinguishability; and (3) it can increase the fear of a surprise or pre-emptive attack (or a 'bolt from the blue'). 68 Taken together, these mechanisms can result in unintentional (and possibly irrevocable) outcomes and thus obfuscate the meaning and the intended goals of an adversary's military actions. 69 In short, the dense 'fog of war' increases the risk of inadvertent escalation because 'misperceptions, misunderstandings, poor communications, and unauthorized or unrestrained offensive operations could reduce the ability of civilian authorities to influence the course of war'. 70 While Cold War decision-makers also faced extreme domestic and foreign pressures to decipher an adversary's intentions correctly, these pressures are being amplified in the current digitised information ecosystem – reducing the timeframe for decision-makers during a crisis and increasing the cognitive and heuristic burdens caused by information overload and complexity. 71 Cognitive pressures caused by the sheer volume of information flow (from both classified and open sources) are being compounded by the degraded reliability and politicisation (or 'weaponisation') of information.
These pressures can, in turn, make decision-makers more susceptible to cognitive bias, misperceptions, and heuristics (or mental shortcuts) when approaching complex problem-solving – either directly or indirectly informing decision-making. 72 While disinformation and psychological operations involving deception and subversion are not a new phenomenon, 73 the introduction of new AI-enhanced tools in the emerging digital information ecosystem equips a broader range of actors (state and non-state) with asymmetric techniques to manipulate, confuse, and deceive. 74 Disinformation operations might erode the credibility of, and undermine public confidence in, a state's retaliatory capabilities by targeting specific systems (that is, command and control) or personnel (primarily via social media) who perform critical functions in maintaining these capabilities. 75 For example, cyberweapons – notably 'left of launch' operations – have allegedly been used by the United States to undermine Iranian and North Korean confidence in their nuclear forces and technological preparedness. 76 The potential utility of social media to amplify the effects of a disinformation campaign was demonstrated during the Ukrainian crisis in 2016, when military personnel's cell phones were targeted by Russian information operations. 77 However, the use of these novel techniques during a nuclear crisis, and how they might affect the 'fog of war', is less well understood and empirically untested.
During a nuclear crisis, a state might attempt, for instance, to influence and shape the domestic debate of an adversary (for example, shift preferences, exacerbate domestic-political polarisation, or coerce/co-opt groups and individuals on social media) in order to improve its bargaining hand by delegitimising (or legitimising) the use of nuclear weapons during an escalating situation, or to bring pressure to bear on an adversary's leadership to sue for peace or de-escalate a situation – a tactic which may, of course, dangerously backfire. 79 Moreover, a third-party (state or non-state) actor could, to achieve its nefarious goals, employ active information techniques (for example, spreading false reports of a nuclear detonation, troop movements, or missiles leaving their garrison) during a crisis between nuclear rivals (for example, India-Pakistan, US-China, Russia-NATO) to incite crisis instability. 80 Public pressures from these crisis dynamics, evolving at a pace that may outstrip events on the ground, might impel leaders (especially thin-skinned and inexperienced ones), operating under the shadow of a deluge of 24-hour social media feedback and public scrutiny, to take actions that they might not otherwise have taken.
81 In sum, the emerging AI-enhanced digitised information environment does not fundamentally alter the premises upon which actors (de-)escalate a situation: imperfect information, uncertainty, and the risks associated with the 'fog of war'. It does, however, introduce novel tools of misinformation and disinformation, and an abundance of information that radically alters the cognitive pressures placed on decision-makers during crisis and conflict. Moreover, decision-makers' exposure to an abundance of disparate and often unverified information will inevitably influence actors' (collective or individual) policy preferences and perceptions, complicating the security dilemma challenge of understanding an adversary's capabilities, intentions, doctrine, and strategic thinking, with potentially profound repercussions for escalation dynamics. How might these pressures affect states in regional conflicts with different military doctrines, objectives, and attitudes to risk?

Offensive military doctrine and strategy
Because adversaries (nuclear-armed and non-nuclear-armed) lack a shared understanding of where the new tactical possibilities offered by these capabilities sit on the Cold War-era escalation ladder, the pursuit of new 'strategic non-nuclear weapons' (for example, cyber weapons, drones, missile defence, precision munitions, counterspace weapons) increases the risk of misperception. 82 The fusion of AI technology into conventional weapon systems (that is, to enhance autonomous weapons, remote sensing for reconnaissance, missile guidance, and situational awareness) is creating new possibilities for a range of destabilising counterforce options targeting states' nuclear-weapon delivery and support systems; for example, cyber NC3 'kill switch' attacks or tracking adversaries' nuclear-armed submarines and mobile missile launchers. 83 Russia, the United States, and China are currently pursuing a range of dual-capable (conventional and nuclear-capable) delivery systems (for example, hypersonic glide vehicles, stealth bombers, and a range of precision munitions) and advanced conventional weapons (drones, space-based, and cyber weapons) 84 that are capable of achieving strategic effects, that is, without the need to use nuclear weapons. 85 These states also rely on command and control systems (that is, early warning, situational awareness, and surveillance) to manage conventional and nuclear missions. 86 Chinese analysts, for example, while concerned about the vulnerabilities of their command and control systems to cyberattacks, are optimistic that AI augmentation (for example, ISR, intelligent munitions, and unmanned aerial vehicles) will help track and target an adversary's forces and lower the (economic and political) costs of signalling and deploying military forces.
87 Advances in AI technology, in conjunction with the technologies it can enable (for example, remote sensing, hypersonic technology, and robotics and autonomy), increase the speed, precision, lethality, and survivability of strategic non-nuclear weapons, exacerbating the destabilising effects of these capabilities when used by nuclear-armed rivals and opening new pathways for both horizontal and vertical inadvertent escalation. 88 Moreover, these technological advances have been accompanied by destabilising doctrinal shifts by certain regional nuclear powers (Russia, North Korea, and possibly China), which indicate a willingness to countenance the limited use of nuclear weapons to deter an attack (or 'escalate to de-escalate') in situations where they face a superior conventional adversary and the risk of large-scale conventional aggression. 89 Furthermore, volatility in nuclear-armed states' civil-military relations can create internal pressures to pursue a more aggressive nuclear force posture or doctrine. 90 The assumption that new (and latent) regional nuclear states will act in ways that reflect their interests as rational actors to avoid nuclear war, thereby enhancing deterrence (per 'rational deterrence theory') 91 and crisis stability, understates the role of influential military organisations (that is, offensive-doctrinal bias, parochialism, and behavioural rigidities) in shaping nuclear doctrine in ways that can lead to deterrence failure and nuclear escalation, despite national security interests to the contrary. 92 For instance, in a nuclear-armed state where military officers influence nuclear strategy, offensive doctrines may emerge that reflect volatile civil-military relations rather than strategic realities.
93 In short, technological advancements to support states' nuclear deterrence capabilities will develop within existing military organisations imbued with their own norms, cultures, structures, and, invariably, mutually exclusive strategic interests. (States that have volatile civil-military relations are also more vulnerable to accidents involving nuclear operations. Ibid., pp. 98-9.) China's 'military-civil fusion' concept (a dual-use integration strategy), designed to co-opt or coerce Chinese commercial entities into supporting the technological development of the People's Liberation Army (PLA) to meet the needs of 'intelligentized warfare in the future', is an important test case of a civilian-led initiative designed to drive military innovation in the pursuit of broader geostrategic objectives. 94 The impact of China's 'military-civil fusion' on civil-military relations remains unclear, however. 95 US counterforce capabilities, intended to disarm an enemy without resort to nuclear weapons, used in conjunction with air and missile defences to mop up any residual capabilities after an initial attack, will generate crisis instability and 'use it or lose it' pressures. 96 Chinese analysts, for example, have expressed concern that US advances in AI could overwhelm Chinese air defences, reducing the time available to commanders to respond to an imminent attack, for example, from the US autonomous AI-enabled Long Range Anti-Ship Missile (AGM-158C) designed to strike 'high-priority targets'. 97 Furthermore, increased optimism in states' ability to use AI enhancements to find, track, and destroy others' nuclear forces (notably when military-capability imbalances exist) will be an inherently destabilising phenomenon. 98 What one side views as conventional operations might be viewed by the other side as a precursor to a disabling counterforce attack (for example, targeting dual-use command and control centres and air defences), thus increasing inadvertent escalation risk. 99 Because of the asymmetry of interests at stake in a regional crisis involving the United States, the stakes will likely favour the defending nuclear-armed power. 100 According to 'prospect theory', a regional power would perceive the relative significance of a potential loss more highly than a gain.
101 That is, when prospect theory is applied to deterrence dynamics, leaders are more inclined to take risks (that is, to be risk-acceptant) to protect their positions, status, and reputations than to enhance them. 102 Thus, having suffered a loss, leaders are generally predisposed to engage in excessive risk-taking behaviour to recover lost territory or other positional or reputational damage. 103 (94 Suppose the Chinese government wants a technology that a particular commercial entity controls; extra-legal influence or coercion can compel the company to turn it over. In that case, forced technology transfers are not thought to occur routinely.) If, for instance, Chinese leaders were faced with the prospect of an imminent attack on Taiwan, or Pyongyang viewed its regime survival as at stake, they would likely countenance greater risks to avoid this potential (existential) loss. 104 Furthermore, crisis instability involving these capabilities could also result from irrational behaviour derived from misperception, cognitive biases, or other emotional impulses, making nuclear escalation more likely. 105 For example, Chinese analysts tend to overestimate US military AI capabilities relative to open-source reports, often citing outdated or inaccurate projections of US AI 'warfighting' budgets, development, and force posture. 106 The framing of Chinese discussion of US military AI projects is analogous to Soviet concerns about the missile gap with the US during the Cold War, 107 risking compounding Beijing's fear that AI technology could be strategically destabilising. 108 In a world with imperfect (and asymmetric) information about the balance of power and resolve, and with incentives to misrepresent and to manipulate the perceptions (exploiting psychological dispositions and vulnerabilities) and emotions (strategic framing and fear appeals) of actors in the information ecosystem, bargaining failure and war are more likely.
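The loss-aversion logic above can be made concrete with the canonical prospect-theory value function. The following sketch is illustrative only and is not drawn from the article itself; the parameter values are Tversky and Kahneman's (1992) median estimates:

```python
# Illustrative sketch of the prospect-theory value function, using
# Tversky and Kahneman's (1992) median parameter estimates:
# alpha = beta = 0.88 (diminishing sensitivity), lam = 2.25 (loss aversion).
def subjective_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain (x >= 0) or loss (x < 0)."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

gain = subjective_value(100)   # subjective value of gaining 100 units
loss = subjective_value(-100)  # subjective value of losing 100 units

# With alpha == beta, the loss looms exactly lam (2.25) times larger than
# the equivalent gain, which is why leaders facing a loss tend toward
# risk-acceptant behaviour to recover their position.
print(round(gain, 2), round(loss, 2))
```

The asymmetry between the two printed values captures why, in this framing, a regional power would weigh a potential loss (territory, status, regime survival) more heavily than an equivalent gain.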
109 Given the confluence of secrecy, complexity, and erroneous or ambiguous intelligence data (especially from open-source intelligence and social media outlets), 110 AI augmentation will likely exacerbate compressed decision-making and the inherently asymmetric nature of information in cyberspace. 111 For example, using AI-enhanced cyber capabilities to degrade or destroy a nuclear state's command and control systems, whether as part of a deliberate coercive counterforce attack or in error as part of a limited conventional strike, may generate pre-emptive 'use it or lose it' situations. 112 (103 For a recent study finding that elite actors' decisions to go nuclear are heavily influenced by the effects on their personal status and reputation, see Reid B. C. Pauly, 'Would U.S. leaders push the button? Wargames and the sources of nuclear restraint', International Security, 43:2 (November 2018), pp. 151-92.) In a US-China conflict scenario, for instance, a US penchant for counterforce operations targeting adversaries' command and control, the commingled nature of China's (nuclear and conventional) missile forces, US and Chinese preferences for the pre-emptive use of cyberweapons, and domestic-political pressures on both sides to retaliate for costly losses (either physical/kinetic or non-physical/political) increase the dangers of inadvertent escalation. 113 These risks should give defence planners pause when considering the use of advanced conventional capabilities to project military power in conflicts with regional nuclear powers. In short, conventional doctrines and operational concepts could exacerbate old pathways (for example, third-party interference, civil-military overconfidence, regime type, accidental or unauthorised use, or an overzealous commander with pre-delegation authority) and create new ones (for example, AI-enhanced ISR and precision missile targeting and guidance, drone swarms, AI cyberattacks, and mis/disinformation subversion) to uncontrollable inadvertent escalation.
Missile defences and advanced conventional weapons are unlikely to prevent these escalatory mechanisms once battlefield perceptions shift and the nuclear threshold is crossed. 114 The extent to which a state may succeed in limiting the damage from a nuclear exchange by using technologically enhanced (including AI) counterforce operations remains an issue of considerable debate. 115 From the perspective of inadvertent escalation, however, the efficacy of damage-limitation counterforce tactics is less significant than whether an adversary views them as such. Chinese and Russian fears that the United States is developing and deploying conventional counterforce capabilities (especially cyberattacks on NC3 systems) to blunt their nuclear deterrents risk generating 'crisis instability' caused by 'use it or lose it' pressures, that is, pressures to use nuclear weapons before losing the capability to do so. 116 Nuclear powers maintain different attitudes and perceptions on the escalation risk posed by cyberattacks and information warfare more generally, however. Chinese analysts, in particular, have expressed an acute awareness of the potential vulnerabilities of their NC3 systems to cyberattacks. 117 The United States has begun to take this threat more seriously, whereas Russian strategists, despite bolstering their cyber defences, appear more sanguine and view information warfare as a continuation of peacetime politics by other means. 118 Consequently, the risk of inadvertent escalation caused by misperception and miscalculation will likely increase. 119 (112 Posen, Inadvertent Escalation, pp. 13-14.) 113 In what ways might the new tools and techniques emerging in the digitised information ecosystem exacerbate these dynamics?
The digitised information ecosystem, human psychology, and inadvertent risk

Misperceptions, cognitive bias, and the human psychological features of security dilemma theorising can also elucidate the escalatory dynamics that can follow from inflammatory, emotionally charged, and other offensive public rhetoric (for example, fake news, disinformation, rumours, and propaganda) used by adversaries during a crisis, or from sabre-rattling behaviour. 120 During, in anticipation of, or to incite a crisis or conflict, a state or non-state actor (for example, clandestine digital 'sleeper cells') could employ subconventional (or 'grey zone') information warfare campaigns to amplify its impact by sowing division, eroding public confidence, and delaying an effective official response. 121 The public confusion and disorder that followed a mistaken cell phone alert warning residents of Hawaii of an imminent ballistic missile threat in 2018 serve as a worrying sign of the vulnerability of US civil defences to state or non-state actors seeking asymmetric advantages vis-à-vis a superior adversary, that is, compensating for limited nuclear capabilities. 122 North Korea, for example, might conceivably replicate incidents like the 2018 Hawaii false alarm in a disinformation campaign (that is, issuing false evacuation orders, issuing false nuclear alerts, and subverting real ones via social media) to cause mass confusion. 123 During a crisis in the South China Sea or South Asia, for example, when tensions are running high, state or non-state disinformation campaigns could have an outsized impact on crisis stability (which depends on the interpretation and processing of reliable intelligence), with potentially severe escalatory consequences. This impact would be compounded where populist decision-makers rely heavily on social media for information-gathering and open-source intelligence and are thus more susceptible to social media manipulation.
124 In extremis, a populist leader may come to view social media as an accurate barometer of public sentiment, eschewing official (classified and non-classified) evidence-based intelligence sources, regardless of the origins of this virtual voice, that is, whether from genuine users or from fake accounts that are part of a malevolent disinformation campaign. Consequently, the agenda-setting narrative of decision-makers during a crisis would instead be informed by a fragmented and politicised social media information ecosystem, amplifying rumours, conspiracy theories, and radical polarisation, which in turn reduces the possibility of achieving a public consensus to inform and legitimise decisions during a crisis. Such dynamics may also expose decision-makers to increased 'rhetorical entrapment' pressure, whereby alternative policy options (viable or otherwise) may be overlooked or dismissed out of hand. 125 Furthermore, increased levels of public scrutiny, especially coupled with disinformation and public panic, could further increase political pressures on leaders whose political survival depends on electoral success. 126 Under crisis conditions, these dynamics may compromise diplomatic de-escalation efforts and complicate other issues that can influence crisis stability, including maintaining credible deterrence and public confidence in a state's retaliatory capability, and effectively signalling resolve to adversaries and assurance to allies. (120 See Robert Jervis, Perception and Misperception in International Politics (Princeton, NJ: Princeton University Press, 1976). 121 Hersman, 'Wormhole escalation', pp. 91-109. 122 Tim Starks, 'Hawaii missile alert highlights hacking threat to emergency services', Politico (16 January 2018), available at: {https://www.politico.com/newsletters/morning-cybersecurity/2018/01/16/hawaii-missile-alert-highlights-hacking-threat-to-emergency-systems-074411}. 123)
127 State or non-state disinformation campaigns might also be deployed in conjunction with other AI-augmented non-kinetic/political actions (for example, cyberattacks, deepfake technology, or disinformation campaigns via social media amplified by automated bots) or kinetic/military actions (for example, drone swarms, missile defence, anti-satellite weapons, or hypersonic weapons) to distract decision-makers, thus reducing their response time during a crisis and conferring a tactical or operational advantage on an adversary. 128 For example, in the aftermath of a terrorist attack in India's Jammu and Kashmir in 2019, a disinformation campaign (for example, fake news and false and doctored images) that spread via social media amid a heated national election 129 inflamed emotions and domestic-political escalatory rhetoric, which in turn prompted calls for retaliation against Pakistan and brought two nuclear-armed adversaries close to conflict. 130 This crisis provides a sobering glimpse of how information and influence campaigns between two nuclear-armed adversaries can affect crisis stability and the concomitant risks of inadvertent escalation. In short, novel means of costly signalling and of testing the limits of an adversary's resolve (which did not previously exist), intended to enhance security, instead increase inadvertent escalation risks and leave both sides less secure.
The effect of escalation-imbued rhetoric in the information ecosystem can be a double-edged sword for inadvertent escalation risk. On the one hand, public rhetorical escalation can mobilise domestic support and signal deterrence and resolve to an adversary, making war less likely. On the other hand, sowing public fear and distrust (for example, in the legitimacy and reliability of NC3 systems) and threatening a leader's reputation and image (for example, the credibility of strategic decision-makers and the robustness of nuclear launch protocols) domestically can prove costly and may inadvertently make enemies of unresolved actors. For example, following the Hague's Permanent Court of Arbitration ruling against China over the territorial disputes in the South China Sea in 2016, the Chinese government had to resort to social media censorship to stem a flood of nationalist sentiment calling for war with the Philippines, a US ally. 131 Furthermore, domestic public disorder and confusion, caused, for example, by a disinformation campaign or cyberattack, can in itself act as an escalatory force, putting decision-makers under pressure to respond forcefully to foreign or domestic threats to protect a state's legitimacy, self-image, and credibility. 132 These rhetorical escalation dynamics can simultaneously reduce the possibility of face-saving de-escalation efforts by either side, analogous to Thomas Schelling's 'tying-hands' mechanism. 133 127 Rose McDermott, Anthony Lopez, and Peter Hatemi, '"Blunt not the heart, enrage it": The psychology of revenge and deterrence', Texas National Security Review, 1:1 (2017), pp. 68-89. 128 Today, given the incipient nature of these technologies, there are active debates about the impact of social media on the dissemination and diffusion of (true and false) information.
For example, and contrary to conventional wisdom, while robots tend to accelerate the spread of both true and false news, false news (especially political news) spreads further than the truth because humans, not robots, are more predisposed to spread it. Soroush Vosoughi, Deb Roy, and Sinan Aral, 'A large-scale analysis of tweets reveals that false rumors spread further and faster than the truth', Science, 359:6380 (2018), pp. 1146-51. 129 Most notably, disinformation spread via Facebook's WhatsApp falsely claimed that a leader of the Indian National Congress Party had offered a bribe to the suicide bomber's family. Despite Facebook's efforts to contain the nefarious campaign, the misinformation reached more than 2.8 million Facebook users. Neha Thirani Bagri, 'Back story: When India and Pakistan clashed, fake news won', Los Angeles Times (15 March 2019), available at: {https://www.latimes.com/world/la-fg-india-pakistan-fake-news-20190315-story.html}. 130 Social media disinformation campaigns can affect information flow in a crisis in two ways: they undermine the transmission of information from decision-makers to the public and from the public to decision-makers. In combination, these dynamics can impair the timeliness, reliability, and accuracy of publicly disseminated information. Trinkunas, Lin, and Loehrke, Three Tweets to Midnight. During heightened tensions between the United States and North Korea in 2017, for instance, the Trump administration's heated war of words with Kim Jong Un, whether a madman's bluff or in earnest (or 'rattling the pots and pans'), raised the costs of Kim Jong Un backing down (that is, with regime survival at stake), thus increasing inadvertent escalation risk and simultaneously complicating de-escalation.
134 Because of a state's fear that its nuclear (and conventional) forces are vulnerable to a decapitating first strike, rhetorical escalation between a conventionally inferior state and a superior one is especially dangerous. 135
Ultimately, states' willingness to engage in nuclear brinkmanship will depend upon information (and mis/disinformation), cognitive bias, and the perception of, and the value attached to, what is at stake. Thus, if one side considers the potential consequences of not going to war intolerable (that is, regime survival, 'tying-hands', or 'use it or lose it' pressures), then off-ramps, firebreaks, or other de-escalation measures will be unable to prevent crisis instability from intensifying. 136 Finally, the extent to which public pressure emanating from the contemporary information environment affects whether nuclear war remains 'special' or 'taboo' will be critical for reducing the risk of inadvertent escalation by achieving crisis stability during a conventional war between nuclear-armed states. Future research would be beneficial on (a) how the digitised information ecosystem affects decision-making in different political regimes; and (b) the potential effects of asymmetry and learning in the distribution of countries with advanced AI capabilities and the dynamics associated with their adoption. Will nuclear-armed states with advanced AI-enabled capabilities treat less advanced nuclear peers that lack these capabilities differently? And how might divergences in states' synthesis and adoption of military AI contribute to misperception, miscalculation, and accidents?

Policy implications
How can decision-makers mitigate the inadvertent escalation risks associated with AI and nuclear systems? Possible ways forward include, inter alia, arms control and verification, changes to norms and behaviour, unilateral measures and restraint, and bilateral and multilateral stability dialogues. AI technology is already raising a multitude of questions about warfare and shifts in the balance of power, which are challenging traditional arms control thinking. 137 Traditional arms control and non-proliferation frameworks of nuclear governance are not necessarily obsolete, however. 138 Instead, we will need to depart from conventional siloed, rigid, and stove-piped approaches and search for innovative frameworks and novel approaches to meet the challenges of rapidly evolving dual-use technology, the linkages between conventional and nuclear weapons, and the informational challenges of the new nuclear age. An asymmetric arms control framework that emphasises dynamism, allowing for mutual adjustment in force postures in ways that differ from the traditional 'like-for-like' approach to arms control, would be a sensible starting point in designing such agreements. 139 (133 The 'tying-hands mechanism' refers to the idea that states seek to increase the credibility of their threats and resolve by taking costly actions (that is, 'going public' with their threats and demands) that increase the costs of backing down were the other side to counter-escalate, but which would otherwise incur few costs. See Schelling, Arms and Influence.)

Recent discussion about AI technology (especially lethal autonomous systems) and arms control has focused on how military AI might be managed, restricted ('keeping humans in the loop'), or prohibited for the targeting and use of nuclear weapons. 140 Perhaps counterintuitively, AI could also offer innovative solutions for developing new and revising legacy arms control frameworks, and could contribute to national technical means (NTM) of arms control verification, reducing the need for 'boots on the ground' inspectors in sensitive facilities. 141 For instance, AI object identification applications that augment satellite imagery of missile production facilities or test ranges, and pattern recognition tools (that detect anomalies in vast amounts of data), 142 might be used to support arms verification efforts, identify cheating under an arms control agreement, assess the nature of suspicious military movements, and, in turn, enhance the credibility of future strategic arms control agreements. 143 Authentic, verified, and reliable open-source information should also be leveraged to support these gathering and analysis efforts.
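As a deliberately simple, purely illustrative sketch of the kind of pattern recognition tool described above, the following toy detector flags anomalous activity levels (for example, vehicle counts extracted from satellite imagery of a monitored facility) against a historical baseline. All data, names, and thresholds here are hypothetical; operational verification systems would be far more sophisticated:

```python
# Toy anomaly detector of the kind that might support arms control
# verification: flag observations that deviate sharply from a historical
# baseline of activity at a monitored site. Illustrative only; all data
# and thresholds are hypothetical.
from statistics import mean, stdev

def flag_anomalies(baseline, observed, z_threshold=3.0):
    """Return indices of observations whose z-score against the
    baseline's mean and standard deviation exceeds the threshold."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [i for i, x in enumerate(observed)
            if sigma > 0 and abs(x - mu) / sigma > z_threshold]

# Hypothetical daily vehicle counts at a monitored missile facility.
baseline = [12, 14, 11, 13, 12, 15, 13, 12, 14, 13]
observed = [13, 12, 41, 14]  # the third day shows a sudden surge

print(flag_anomalies(baseline, observed))  # → [2]: only the surge is flagged
```

The point of the sketch is that flagged deviations would prompt human analysts to investigate, for example, suspicious military movements or possible cheating under an agreement, rather than trigger any automated response.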
The use of AI augmentation to enhance states' early warning and detection systems and improve target identification might prevent false positives (or nuclear 'close calls'), 144 reduce bias, and improve understanding of an adversary's actions (or reduce the 'fog of war'), thus lowering the risk of inadvertent escalation during a crisis. 145 Incorporating AI into early warning systems may be particularly stabilising for countries that do not have the advanced satellites, sensors, and forward-deployed radar systems that the US and Russia have developed to ensure missile launches are detected and assessed for threat potential. 146 AI-augmented early warning and detection systems could, for instance, offer Beijing improved transparency and confidence about US military operations in the Indo-Pacific (that is, to discern and discriminate between nuclear and conventional weapon systems in an incoming attack), thus reducing inadvertent escalation dynamics caused by miscalculation, false positives, or surprise. 147 For example, AI could be used to differentiate, rapidly and accurately, between the launch of a demonstration rocket and an ICBM (which have similar radar signatures), or to discern a conventional from a nuclear missile launch. Moreover, AI-supported big data analytics could be used to collate and process electronic data (that is, signals, imagery, and open-source information) to identify patterns of behaviour and launch profiles unique to specific types of capabilities. AI technology could also improve the safety of nuclear systems: it could increase the security and robustness of command-and-control cyber defences by identifying undetected vulnerabilities or other potentially undiscovered weaknesses, and could similarly be used to identify vulnerabilities in conventional military systems. 148 Success in efforts such as these might also help bolster nuclear deterrence and mitigate inadvertent (and accidental) escalation risk.
AI could also support defence planners in designing and managing wargames and other virtual training exercises to refine operational concepts, test various conflict scenarios, and identify areas and technologies for potential development, 149 enabling participants to better prepare for adversaries in unpredictable, fast-moving environments in which counterintuitive human-machine and machine-machine interactions will inevitably take place. 150 Finally, bilateral and multilateral initiatives such as confidence-building measures should be expanded to include the novel non-kinetic escalatory risks associated with complexity in AI and the digital domain (for example, dis/misinformation, deepfakes, information sabotage, and social media weaponisation) during conventional crises and conflicts involving nuclear-armed states. 151 AI technology is not currently integrated into states' nuclear targeting, command and control, or launch systems; thus, a narrow window exists for nuclear powers (the P5 members as well as India, Pakistan, Israel, and NATO states) to agree on new principles, practices, and norms (for example, banning attacks on nuclear-armed states' NC3 systems) and enshrine these into international law, for instance, within the framework of the United Nations Convention on Certain Conventional Weapons, tailored to the specific features of the technology and coordinated by the UN Conference on Disarmament. 152 Specific measures might include prohibiting or imposing limits on the fusion of AI technology into nuclear command and control systems, autonomous nuclear-armed missiles, and nuclear weapons launch decisions. 153

Conclusion
To what extent might AI-enabled capabilities increase inadvertent escalation risk? In a global security environment characterised by great power strategic competition and regional strategic asymmetry (of capabilities, domains, and stakes), new rungs, firebreaks, and thresholds on the escalation ladder are already challenging conventional assumptions about deterrence, strategic stability, and escalation. This article underscores the need for greater clarity and discussion on the specific characteristics of AI technology that may create new (or disrupt old) rungs on the metaphorical escalation ladder and, in turn, increase the risk of crises between nuclear-armed (and especially regional) states inadvertently transitioning from conventional to nuclear confrontation. The article builds on and adapts the foundational work on inadvertent escalation conducted at the end of the Cold War (on the cusp of the 'Second Nuclear Age'). Specifically, it examines the psychological underpinnings of escalation theorising to elucidate whether and how characteristics of AI technology, contextualised within the broader digital information ecosystem, might undermine crisis stability and increase inadvertent escalation risk.
The article highlights three critical features of inadvertent escalation risk in the emerging AI-nuclear strategic nexus. First, while nuclear-armed states have a shared interest in avoiding nuclear war (as expressed by 'rational deterrence theory'), they also place a high value on their nuclear forces, which advanced conventional weapons enhanced by AI technology inadvertently threaten, especially in asymmetric military situations. The synthesis of AI technology into conventional weapon systems to enhance autonomy, remote sensing, missile guidance, and situational awareness creates a range of novel and destabilising conventional counterforce possibilities (for example, cyberattacks on NC3 and locating survivable nuclear retaliatory capabilities). In a regional asymmetric crisis or conflict, AI-powered tools to find, track, and destroy a state's nuclear forces may be viewed by the other side as a precursor to a disabling counterforce attack, thus increasing incentives to escalate a conventional situation inadvertently. Further, these technological developments have been accompanied by destabilising doctrinal shifts by several regional nuclear powers (Russia, North Korea, and possibly China), compounding the problem of commingled dual-use conventional and nuclear weapons missions at the strategic level.
Second, escalatory rhetoric and other aggressive behaviour, amplified by the digital information ecosystem, might be misperceived as unprovoked malign intent, rather than as a response to the initiator's behaviour, leading to action-reaction spirals of potentially irrevocable inadvertent escalation. The cognitive burdens caused by information overload and complexity will likely make decision-makers more susceptible to cognitive biases, misperceptions, and heuristic shortcuts when approaching complex problems. In addition, new AI-enhanced tools are enabling a broader range of actors (state and non-state) armed with asymmetric techniques (for example, dis/misinformation and cyberattacks) to strengthen their bargaining hand by delegitimising (or legitimising) the use of nuclear weapons during an escalating situation, or by suing for peace or de-escalating a situation. The study also demonstrates that the impact of escalatory rhetoric in the information ecosystem could be a double-edged sword for inadvertent escalation risk at the political decision-making level. On the one hand, public rhetorical escalation can mobilise domestic support and signal deterrence and resolve to an adversary, thus making war less likely. On the other hand, sowing public fear and distrust, and threatening a leader's reputation and image domestically, can inadvertently make enemies of otherwise unresolved actors.
Third, and relatedly, the heightened tension, uncertainty, complexity, and compressed decision-making (or 'fog of war') of modern digitised warfare will likely be dramatically intensified by the infusion of AI technology, restricting the options and time available for de-escalation, compounding the problem of offence-defence distinguishability, and increasing the risks of both horizontal and vertical inadvertent escalation. Moreover, growing dependencies on digital technologies (especially AI) and information to conduct modern warfare, and the concomitant vulnerabilities, create a new security paradox. This 'paradox' could create resource vulnerabilities that generate first-strike and pre-emption incentives, predicated upon 'use it or lose it' pressures, whether real or illusory.
In today's nuclear multipolar world, marked by great power techno-military competition (US-China and US-Russia) and regional asymmetric dynamics, there is a political imperative to address the challenges of inadvertent escalation by engaging in broader strategic stability talks about the development of new and innovative normative frameworks. 154 Multipolarity, a function of political relations, exacerbates techno-military competition, which in turn has important implications for strategic stability, deterrence, the security dilemma, and the escalation dynamics described in this article. Against the backdrop of increasing populism, amplified and manipulated in the digital information ecosystem, long-term regional and global stability and nuclear security efforts will continue to be jettisoned in favour of short-termism, political fragmentation, and asymmetric and great power competition, in pursuit of 'first-mover advantages' in areas such as AI, hypersonic weapons, and information warfare. 155 When the problem arises from technological creativity, and compliance and verification are concerned with abstaining from future breakthroughs and potential uses of AI, technological solutions are unlikely to be an adequate response; that is, the process is not reducible to structural processes and assumptions about 'rational' actors, and agency matters. 156 Therefore, any solution to the emerging AI-nuclear strategic challenge must be as much a political as a technological one. Solutions will have to help adversaries 'escape an irrational situation where it is precisely rational [decision-making] behavior that may be most dangerous', reducing perceived vulnerability in the short term at the expense of future inadvertent risk. 157