1. Introduction
Technology companies have emerged as powerful actors in the calculus of war and peace, as artificial intelligence (AI) systems are increasingly influencing state-level decision making on the resort to force (Erskine & Miller, Reference Erskine and Miller2024). The evolving literature on society-technology interactions has introduced concepts such as technology ecosystems, platforms, and stacks, which can help us to better understand the impact of AI on resort-to-force decision making. This article synthesises these concepts under the umbrella of “architectures of AI,” focusing on the growing influence and power of technology companies – now largely synonymous with AI firms – in military decision making and state deliberations over the use of force. These firms form an “infrastructural core” (van Dijck, Poell & de Waal, Reference van Dijck, Poell and de Waal2018), which controls global information flows and services, thereby diminishing the capacity of nation states to make independent decisions on matters of war and peace.
As Erskine (Reference Erskine2024, p. 175) argues, the pervasive nature of AI suggests that “AI-driven systems will … increasingly influence the … consequential step of determining if and when a state engages in organised violence. In short, AI will infiltrate the decision to wage war”. To date, power has been accumulated across each of the “architectures of AI.” This article considers possibilities for how companies may wield that power in society and in relation to resort-to-force decision making. It shows how technology companies are not merely service providers but have become de facto national security actors, capable of impacting state sovereignty through direct coercion, indirect influence, or the strategic offering of incentives. Specifically, it sets out how US technology companies have an increasing capacity to coerce, influence or incentivise the decisions of smaller nation states, including in resort-to-force decision making.
This article contends that the accumulation and exercise of power within the architectures of AI are fundamentally altering the political calculus and practical realities of going to war. Specifically, it examines three interrelated dimensions: (i) the concentration of power in technology and digital infrastructure, (ii) the diffusion of national security decision making, and (iii) the role of AI in shaping public opinion. Taken together, these factors degrade the autonomy of nation states, particularly smaller or technologically reliant ones, in making sovereign resort-to-force decisions. While the analysis is grounded in an Australian perspective, its implications are relevant to liberal democracies globally.
2. The “architectures of AI”: looking across the stack
This section outlines the colloquially used term “tech (technology) stack” as well as the infrastructure that underlies and is essential for AI systems. I refer to these collectively as the “architectures of AI.” Considering these components as architectures across the technology stack enables examination of actors (including companies and individuals) who are involved in AI. It also allows exploration of how changes to underlying infrastructures, architectures and applications of AI shape resort-to-force and national security decisions.
Academics and pioneers in the field define AI in different ways (Collins, Dennehy, Conboy & Mikalef, Reference Collins, Dennehy, Conboy and Mikalef2021). Different definitions have also been given to guide research and the development of AI policy and regulation. Moreover, AI definitions evolve over time. Early definitions tended to include comparisons to human intelligence, whereas more recent definitions tend to focus more specifically on AI’s autonomy and goal-directed behaviour, as well as the acknowledgement of specific capabilities, including perception, reasoning, learning and problem-solving (Whelan, Hammond-Errey & Villeneuve-Dubuc, Reference Whelan, Hammond-Errey and Villeneuve-Dubuc2024). This article invokes the Australian Department of Industry, Science and Resources (DISR) definition of AI as “an engineered system that generates predictive outputs such as content, forecasts, recommendations, or decisions for a given set of human-defined objectives or parameters without explicit programming” (DISR, 2024). The DISR definition of AI is also based on the respective International Organization for Standardization (ISO) definition (ISO/IEC 22989:2022).Footnote 1 Technical definitions of AI are important because they are clearer and more precise than general and popular references to a variety of technologies.
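The definitional point can be made concrete with a minimal sketch. The toy data and choice of library below are my own illustrative assumptions, not part of the DISR or ISO definitions; the sketch simply shows a decision rule that is induced from examples under a human-defined objective rather than explicitly programmed:

```python
# A minimal sketch of "predictive outputs ... without explicit programming":
# the classifier is given only example data and a human-defined objective
# (binary classification); its decision rule is learned, not hand-coded.
# The data points are toy, hypothetical values for illustration only.
from sklearn.linear_model import LogisticRegression

X = [[0.1, 0.2], [0.4, 0.1], [0.8, 0.9], [0.9, 0.7]]  # input features
y = [0, 0, 1, 1]                                      # human-supplied labels

model = LogisticRegression().fit(X, y)   # rule induced from the data
print(model.predict([[0.85, 0.8]]))      # predictive output -> [1]
```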
We can also think about AI, machine learning, and algorithms in the context of a “tech stack.” The term “technology stack” or “tech stack” (also sometimes an “AI stack”) is often used to describe the layers or levels of technologies, tools and software that companies use to build and maintain their digital products and services (Tsaih, Chang, Hsu & Yen, Reference Tsaih, Chang, Hsu and Yen2023). It is referred to as a stack to help visualise the technologies being stacked on top of each other, from frontend to backend, to build an application. A stack can have many possible layers depending on the service or product. The AI tech-stack model is a conceptual frameworkFootnote 2 – it does not map onto specific systems (Tsaih et al., Reference Tsaih, Chang, Hsu and Yen2023) – but it does provide a lens through which to look at different layers of AI.
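One way to picture the stack metaphor is as an ordered set of layers. The sketch below is purely conceptual: the layer names loosely follow the infrastructure components this article groups under the “architectures of AI” and do not map onto any specific product or system:

```python
# Illustrative only: a conceptual "AI stack" from bottom (infrastructure)
# to top (application). Layer names loosely follow this article's
# "architectures of AI"; real systems vary and do not map one-to-one.
AI_STACK = [
    ("energy", "power generation and access for data centres"),
    ("compute", "GPU clusters, cloud and high-performance computing"),
    ("connectivity", "undersea cables, satellites, telecommunications"),
    ("data", "training corpora, user data, data brokers"),
    ("models", "machine learning and foundation models"),
    ("applications", "consumer- and government-facing AI services"),
]

for layer, description in AI_STACK:   # backend-to-frontend view
    print(f"{layer:>13}: {description}")
```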
This article employs the term “technology companies” to refer to the entities driving contemporary AI development. In the current paradigm, AI and Big Tech are synonymous (Kak et al., Reference Kak, West and Whittaker2023). Almost exclusively, AI development is dependent on these firms, including for the compute capacity, data holdings and market reach to scale and sell AI products (Kak et al., Reference Kak, West and Whittaker2023). The distinction between “AI companies” and technology companies has thus largely dissolved, as the latter dominate the development and commercialisation of AI. Even ostensibly independent organisations, such as OpenAI, operate in close partnership with these firms (for example, with Microsoft). For these reasons, this article uses the broader term “technology companies” rather than “AI companies.”
This article considers the underlying infrastructure – or architecture – essential for AI: data, energy generation and access, compute capacity, connectivity and workforce, referred to collectively as the “architectures of AI.” The design of technology shows us where the power lies (Donovan, Reference Donovan2019), and the lens of architecture highlights how technology companies are entangled with politics and how they influence resort-to-force decision making. Understanding the architectures of AI allows us to observe accumulated power, consider the broader impact that technology companies have on policy and decision making, and identify how technology companies and government are enmeshed. Indeed, Rogers and Bienvenue (Reference Rogers and Bienvenue2021) argue that the layers of the technology stack act as gateways to accommodate influential gatekeepers who exert power over the flow of information across the stack.
3. Resort-to-force decision making and the architectures of AI
This article sets out how the architectures of AI have the capacity to influence nation-state deliberations on the resort to force. Thus, it is necessary to consider how such deliberations occur. This section provides a brief overview of the literature on technology, geopolitics, and resort-to-force decision-making. How such deliberations are actually made remains a complex and, in many respects, under-documented area of research. There is a substantial body of scholarship on the doctrinal and normative frameworks governing the use of force, such as legal and theoretical analyses in international law journals.Footnote 3 However, empirical studies that systematically examine the actual decision-making processes are comparatively scarce.
In international relations and law, much of the existing analysis focuses on the interplay between legal standards, risk assessment and other considerations, rather than on the empirical realities of how decisions to resort to force are made in practice. “States must make a variety of calculations when confronted with a decision about whether to use force against or inside another state. In use of force decisions, the divide between policy and legal doctrine is often disputed” (Deeks, Lubell & Murray, Reference Deeks, Lubell and Murray2019, p. 3). International law is itself indifferent towards domestic political or constitutional prerequisites to using force (Hendell, Reference Hendell2023).Footnote 4 The literature on resort-to-force deliberations, whether about inputs or actual decision-making processes, is surprisingly slim outside the US. Existing works, which focus largely on historical case studies (primarily in the US), nevertheless provide helpful insights. For example, Whitlark (Reference Whitlark2021), writing in the context of nuclear force, concludes that executive perspective and individual personality – not institutional structure – are paramount to such resort-to-force deliberations.
Increasing attention has been afforded to how private corporations play a powerful role in geopolitical decision making. For instance, Sommer, Matania and Hassid (Reference Sommer, Matania and Hassid2023) write, “the digital frontier of nations changed, and with it the concept of national security.” They argue that there can no longer be an exclusive focus on territory and people. The concept of national security (like sovereignty and jurisdiction) is now heavily intertwined with tech companies and AI success.Footnote 5 To escalate an issue to a matter of national security is inherently a political choice. National security is also a matter of prudential value: a condition which must be maintained against others’ potential to degrade it (Gyngell & Wesley, Reference Gyngell and Wesley2007).
Neilsen and Pontbriand (2025) consider how cyberattacks against privately owned and operated civilian critical infrastructure challenge the notion (so fundamental to liberal democracies) that war is premised on a strict delineation between military and civil domains. Their focus is on how NATO members understand their role in protecting civilian critical infrastructure. Their article provides an excellent foundation for understanding the public–private relationship in individual infrastructure and cybersecurity. However, it is less helpful for understanding power asymmetries and the impact of companies or individuals attempting to exert leverage over the nation state and its governing functions.
Less has been said about how AI powerbroking cultivates a climate in which private corporations have the capacity to influence, coerce or incentivise resort-to-force decision making by nation states. Kelton et al. (Reference Kelton, Sullivan, Rogers, Bienvenue and Troath2022) suggest that US digital platforms, as exclusive service providers, seem to acquire some of the extractive and transformative power traditionally ascribed to the sovereign state and, moreover, sit beyond the sovereign state’s capacity to regulate and control. Rogers and Bienvenue (Reference Rogers and Bienvenue2021, p. 96) argue that the layers of the technology stack act as gateways accommodating influential gatekeepers who exert power across it. In previous work, I have shown that the big data landscape, which includes AI, “will continue to alter national security by changing who has information and power to change and influence aspects of society” (Hammond-Errey, Reference Hammond-Errey2024, p. 182).
Schaake (Reference Schaake2024b, np) writes, “the involvement of Big Tech companies in active military conflicts raises tough questions about the concept that underpins the foundations of international relations and international law: state sovereignty.” Schaake (Reference Schaake2024b) notes that companies including Google, Microsoft and SpaceX have few, if any, legal mandates according to international law as they are private actors. “Companies are playing an ever more critical role in this strange cyber dividing line between war and peace” (Schaake, Reference Schaake2024b). This role is often considered in the context of geopolitics, state legislation and international affairs, as set out below:
Yet companies… exude sovereign power in new ways. They have monopolies on key insights and data analytics and make decisions about affairs that were once the exclusive domain of states, while these companies are not subject to comparable checks and balances. Moreover, companies that operate at a global scale often chafe against geographic borders. Even when governments want to exert control over such companies … they face a variety of constraints (Schaake, Reference Schaake2024b).
This article moves beyond observing that technology companies are involved in geopolitics and instead highlights how this intersects with AI and resort-to-force decision making. There is a growing body of literature that looks at the use of technologies and AI in intelligence, which of course influences resort-to-force decision making. Intelligence – “knowledge vital for national survival” (Kent, Reference Kent1966, p. vii) – forms an important input into resort-to-force deliberations. Big data and AI are transforming intelligence production, which in turn plays a significant role in framing and deliberating on national security matters (Hammond-Errey, Reference Hammond-Errey2024). Hershkovitz (Reference Hershkovitz2022) and Zegart (Reference Zegart2022) outline the transformative nature of AI for the practice of intelligence and its input into resort-to-force decision making.
Krasner (Reference Krasner1999) makes the argument that the key attributes of sovereignty are never perfectly present. “There has never been some ideal time during which all, or even most, political entities conformed with all of the characteristics that have been associated with sovereignty – territory, control, recognition, and autonomy” (Krasner, Reference Krasner1999, p. 235). Alternative principles, such as human rights, fiscal responsibility and international security have been used to challenge autonomy. Krasner (Reference Krasner1999, p. 236) notes how “in the absence of any well-established hierarchical structure of authority, coercion and imposition are always options that the strong can deploy against the weak.”
Indeed, the term “technology sovereignty” has emerged as a consequence of the power of technology companies. Edler, Blind, Kroll and Schubert (Reference Edler, Blind, Kroll and Schubert2023, p. 1) argue that “technology sovereignty should be conceived as a state-level agency within the international system, i.e., as sovereignty of governmental action, rather than (territorial) sovereignty over something.” In the context of the architectures of AI, this sovereignty of governmental action extends to deliberation processes on the resort to force. This includes deliberation on the role of (and access to) largely civilian digital infrastructure in conflict, especially if owned and/or operated by a foreign state. It also includes deliberation on ensuring access to civilian and military technology as well as underlying technological infrastructure such as connectivity, energy generation and access, compute capacity, and components such as advanced semiconductors.
This article argues that technology companies are national security actors and could use AI to influence resort-to-force decision making. Contemporary power is exercised in and through the architectures of AI. Companies that have monopolised the big data landscape of data abundance, digital connectivity and ubiquitous technology (Hammond-Errey, Reference Hammond-Errey2024) have centralised economic power and attained near-sovereign state-like status (Schaake, Reference Schaake2024a, Reference Schaake2024b; Khanal et al., Reference Khanal, Zhang and Taeihagh2024). Coupled with the inextricable reliance that governments have on these companies and technologies, the unprecedented scope and concentration of power in technology companies means that most nation states have, or will have, less autonomy in resort-to-force decisions. This could occur directly through coercion, indirectly as influence, or through incentivisation at any point, such as during the development of capabilities, pressure during conflict, through revocation of services, or even supporting potential adversaries. These will be explored throughout this article.
4. Architectures of AI: implications of concentration of power, diffused decision making and public opinion
This section considers the concentration of power across the architectures of AI: data, energy generation and access, compute capacity, connectivity and workforce. It then explores influence in resort-to-force deliberations across three areas of focus: the concentration of power, the diffusion of national security decision making, and the role of AI and the information environment in shaping public support. The section reveals that because of the scope and concentration of power in technology companies, most nation states have, or will have, less autonomy in resort-to-force decisions, either directly through coercion, indirectly as influence or through incentives.
Digital infrastructure is the backbone of our societies. The technology ecosystem is dominated by a small number of companies across all levels of the architectures of AI. This concentrates information flows, critical data sets, computing power, and technical capabilities (Andrejevic, Reference Andrejevic2013; Cohen, Reference Cohen2017; Edward, Reference Edward2020; Moore, Reference Moore2016; van Dijck et al., Reference van Dijck, Poell and de Waal2018) that are essential for functioning democracies (Richmond, Reference Richmond2019; Watts, Reference Watts2020). The dominance of these companies has handed them scale and influence akin to nation states (Lehdonvirta, Reference Lehdonvirta2022, p. 3), challenging the primacy of governments (Eichensehr, Reference Eichensehr2019). This, in turn, is transforming the relationships that companies have with nation states, challenging conceptions of national security and ultimately (knowingly or unknowingly) influencing state decisions on the resort to force. This section first outlines the accumulation of power across the architectures of AI. It then considers diffused decision making. It subsequently explores the implications of this power concentration on resort-to-force decision making, either directly through coercion, indirectly as influence or through incentivisation. Finally, it outlines how AI has the capacity to shape public opinion, impacting, if not diminishing, state capacity to engage with its citizens.
4.1. Data
Data is the foundation of AI. The technology ecosystem has been dominated by a small number of companies, concentrating information flows, critical data sets and technical capabilities for analysis (Andrejevic, Reference Andrejevic2013; Cohen, Reference Cohen2017; Edward, Reference Edward2020; Moore, Reference Moore2016; van Dijck et al., Reference van Dijck, Poell and de Waal2018). The “epicenter of the information ecosystem that dominates North American and European online space is owned and operated by five high-tech companies, Alphabet-Google, Facebook, Apple, Amazon, and Microsoft” (van Dijck et al., Reference van Dijck, Poell and de Waal2018, p. 6). These companies have monopolised aspects of the big data landscape of data abundance, digital connectivity and ubiquitous technology (Hammond-Errey, Reference Hammond-Errey2024).
The constellation of technologies that constitute AI (Bell, Reference Bell2018) requires massive amounts of data. The companies that dominate the consumer-driven big data economy and own most of the data in the West include Alphabet (Google), Facebook, Amazon (Moore & Tambini, Reference Moore and Tambini2021; Neef, Reference Neef2014; Sadowski, Reference Sadowski2020; Zuboff, Reference Zuboff2019), Microsoft (Moore & Tambini, Reference Moore and Tambini2021; Neef, Reference Neef2014; Zuboff, Reference Zuboff2019), Apple, Alibaba (Verdegem, Reference Verdegem2022), Twitter/X (Neef, Reference Neef2014), Uber (Neef, Reference Neef2014; Sadowski, Reference Sadowski2020), TikTok and Tencent (Verdegem, Reference Verdegem2022). The vast majority of digital data is sold and resold for profit (Kitchin, Reference Kitchin2014; Sadowski, Reference Sadowski2020; Zuboff, Reference Zuboff2019). Data and the ability to analyse it give substantial market power only to the largest online platforms (Santesteban & Longpre, Reference Santesteban and Longpre2020).
4.2. Compute capacity
Ownership of data storage and computational capacity is highly concentrated. It is estimated that Amazon, Microsoft, and Google together hold 70 percent of the global cloud infrastructure market, which includes the market for graphics processing unit (GPU) compute used in AI research (Lehdonvirta, Reference Lehdonvirta2023). “There are only a few companies that own exponential computing power, can attract AI talent, and have access to data to develop and train advanced machine/deep learning models” (Verdegem, 2021). Moreover, the computational capacity that these platforms rely on is geographically concentrated (Ghahramani, 2023). The US and China have the most public GPU clusters in the world. China leads in the number of GPU-enabled regions overall; however, the most advanced GPUs are highly concentrated in the US. The US has eight “regions” where H100 GPUs – the kind that are the subject of US government sanctions on China – are available to hire (Perrigo, Reference Perrigo2024). In 2021, the US held a dominant position with 33 percent of global data centres, and the broader Organization for Economic Co-operation and Development (OECD) housed 77 percent of data centres (Daigle, Reference Daigle2021).
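One standard way to quantify such concentration is the Herfindahl-Hirschman Index (HHI), the sum of squared market shares. The sketch below is illustrative only: the three-firm split is hypothetical, chosen merely to be consistent with the roughly 70 percent combined cloud share cited above, and real figures would require current market data:

```python
# Herfindahl-Hirschman Index (HHI) sketch: the sum of squared market
# shares (in percent). The per-firm split below is hypothetical,
# constrained only to sum to the ~70 percent combined share cited above.
shares = {"Firm A": 33.0, "Firm B": 22.0, "Firm C": 15.0}

hhi_top3 = sum(s ** 2 for s in shares.values())  # lower bound on market HHI
print(f"HHI from top three firms alone: {hhi_top3:.0f}")  # ~1798

# Recent US merger guidelines treat an HHI above 1800 as highly
# concentrated; this partial figure approaches that threshold before
# counting any other market participant.
```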
A similar story can be told with high-performance computers. As of June 2024, the top three – and five of the top 10 – most powerful non-distributed computer systems in the world are based in the US (Top500, 2024). Additionally, across the top 500, the US accounts for 34.2 percent of the system total and 53.7 percent of the performance total (Top500, 2024). Australia reportedly had 0.88 percent of the world’s computing capacity as of November 2022, and the United Kingdom 1.3 percent – while the top five countries (the United States, Japan, China, Finland and Italy) had 79.1 percent (Top500, 2022). Lehdonvirta, Wu and Hawkins (Reference Lehdonvirta, Wu and Hawkins2024, np) observe a global compute divide, in which “the geography of AI compute seems to be reproducing familiar patterns of global inequality.” The future of compute capacity is likely to continue to be geographically concentrated (Ghahramani, 2023).
4.3. Connectivity and infrastructure
The infrastructure of AI is predominantly owned by commercial entities, meaning that the data – and the ability to derive insights from it – largely resides in the private sector. Much of it is commercially available for purchase, and the analytical capabilities of big data have largely been built by – and reside in – industry (Crain, Reference Crain2016; Kitchin, Reference Kitchin2014). Despite offering a variety of services (Lotz, Reference Lotz2018), a small handful of commercial entities have most of the world’s data, information flows (Moore & Tambini, Reference Moore and Tambini2021; Neef, Reference Neef2014; Omand & Phythian, Reference Omand and Phythian2018, p. 145; Zuboff, Reference Zuboff2019), and computing capacity (Lehdonvirta, Reference Lehdonvirta2023; Verdegem, Reference Verdegem2022).
Alphabet-Google, Facebook, Apple, Amazon, and Microsoft are therefore able to control the central nodes of global information services (van Dijck et al., Reference van Dijck, Poell and de Waal2018). These companies do so in a way that was previously limited to telecommunications companies – assets that were historically government owned (Howell & Potgieter, Reference Howell and Potgieter2020). Most internet users – nation state governments included – are dependent on these companies for their infrastructural information services (Cohen, Reference Cohen2017; Moore, Reference Moore2016; van Dijck et al., Reference van Dijck, Poell and de Waal2018), including for computing (Lehdonvirta, Reference Lehdonvirta2023). This can be seen in the context of AI development specifically:
In the context of the current paradigm of building larger- and larger-scale AI systems, there is no AI without Big Tech. With vanishingly few exceptions, every startup, new entrant, and even AI research lab is dependent on these firms. All rely on the computing infrastructure of Microsoft, Amazon, and Google to train their systems, and on those same firms’ vast consumer market reach to deploy and sell their AI products (Kak et al., Reference Kak, West and Whittaker2023).
Telecommunications infrastructure and undersea cables, which carry up to 99 percent of global internet traffic (Kelton et al., Reference Kelton, Sullivan, Rogers, Bienvenue and Troath2022), are critical to AI. Over the past decade, there has been a shift towards subsea cables built by large individual technology companies. Meta recently announced Project Waterworth, which plans to connect the US, India, South Africa, Brazil, Australia and other regions via a 50,000 km (31,000-mile) cable system. In addition, Big Tech companies have ownership of most cloud infrastructure and are involved in projects from low Earth orbit satellites to laser data transmission (Kelton et al., Reference Kelton, Sullivan, Rogers, Bienvenue and Troath2022). This challenges digital sovereignty in three ways: by impacting a country’s ability to control its technological infrastructure, its data (data security) and its ability to provide internet-reliant services (Ganz, Camellini & Hine, Reference Ganz, Camellini and Hine2024).
4.4. Energy generation and access
Since 2024, the role of tech companies in global energy generation and access has been elevated to a national security issue. While there is limited scholarship to date, policy and public discussion identify energy generation and access as key for AI. To state the obvious, “there is no AI without energy” (IEA [International Energy Agency], 2025). Leaders at COP29 debated the challenge of curbing AI emissions, while domestically in the US there has been an increasing focus on the intersection between AI and energy generation and access. Examples include a September 2024 White House roundtable considering the role of energy in AI infrastructureFootnote 6 and an October 2024 Memorandum on AI.Footnote 7 On 23 May 2025, the White House released an Executive Order that, among other things, designated AI data centres as “critical defence facilities” and the nuclear reactors powering them as “defence critical electric infrastructure.” It also mandated the rapid deployment of advanced nuclear technology to power AI infrastructure.Footnote 8 In July 2025, America’s AI Action Plan was released, which directly linked AI infrastructure and the energy to power it, noting “AI is the first digital service in modern life that challenges America to build vastly greater energy generation than we have today” (White House, 2025, p. 14).Footnote 9
4.5. Workforce
The technology workforce for AI and its underlying architectures is an area of emerging scholarly interest. As noted above, only a few companies can attract AI talent alongside owning computing power and data (Verdegem, 2021). The impact of the AI workforce component, or “tech talent” as it is sometimes called, is increasingly considered within policymaking circles. For example, as part of the 2023 Quad Tech Network meeting, Koslosky (Reference Koslosky2023) explored the domestic and global shortages of the science, technology, engineering, and mathematics (STEM) workforce necessary for AI. While maintaining a STEM workforce is critical, it is an often-overlooked component of strengthening national critical technologies capabilities (Hammond-Errey, Reference Hammond-Errey2023). Moreover, as Assaad and Hammond-Errey (forthcoming 2025) note, “there is limited diversity in employer options for those with the skill sets to work on AI. This means knowledge around AI systems becomes siloed and sparse.” Artificial intelligence researchers have revealed the power, influence and dominance of technology companies in AI research, with limited public-interest alternatives (Ahmed, Wahed & Thompson, Reference Ahmed, Wahed and Thompson2023).
4.6. Economic power
Economic power is a result of the concentration of power across the architectures of AI listed above. Economic power will be covered only briefly here due to the scope of this article; however, it is a topic of increasing scholarly and policy interest. The top five American technology companies (Alphabet-Google, Apple, Meta, Amazon, and Microsoft) have also amassed unprecedented economic power (Lee, Reference Lee2021; Moore, Reference Moore2016; Fernandez et al., Reference Fernandez, Klinge, Hendrikse and Adriaans2021; Santesteban & Longpre, Reference Santesteban and Longpre2020). They had cumulative revenues of US$1.1 trillion in 2022, although their market capitalisation has dropped from a high of US$9.5 trillion in 2021 to US$7.7 trillion in April 2023 (Lee, Reference Lee2021; Wall Street Journal, 2023a–e). Their combined market capitalisation in 2021 was more than six times the size of Australia’s gross domestic product (GDP; US$1.55 trillion), while their revenues were almost twice the total revenue of the Australian governments (US$586 billion) in the same year (Australian Bureau of Statistics, 2023; World Bank, 2023). The market for military AI is lucrative and growing and, in the US alone, is projected to increase to US$38.8 billion by 2028 (Schwarz, Reference Schwarz2024).
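These magnitudes can be reproduced directly from the figures quoted above; the short sketch below is a worked arithmetic check using only the numbers already cited in this paragraph, not new data:

```python
# Back-of-envelope check of the comparisons above, using only the figures
# cited in the text (US dollars, 2021/2022 values as given).
big5_market_cap_2021 = 9.5e12   # combined market capitalisation, 2021
australia_gdp_2021   = 1.55e12  # Australian GDP, 2021
big5_revenue         = 1.1e12   # combined revenues, as cited
aus_gov_revenue      = 586e9    # total Australian government revenue

print(f"market cap / GDP: {big5_market_cap_2021 / australia_gdp_2021:.1f}x")
# -> 6.1x, i.e. "more than six times"
print(f"revenue / government revenue: {big5_revenue / aus_gov_revenue:.1f}x")
# -> 1.9x, i.e. "almost twice"
```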
Stucke and Grunes (Reference Stucke and Grunes2016) set out a range of factors needed to assess digital platform economic power, which include considerations relating to data, algorithms, network economies, economies of scale and scope, coordination, limiting competition and data transferability. The value of goods that passed through Amazon in 2022 (US$514 billion) exceeded the GDP of many countries – its 2021 revenue figure would place it in the top 30 countries for GDP (Amazon, 2023, p. 23; World Bank, 2023). Amazon’s merchant fees in 2022 brought in more revenue (US$117 billion) than most states do through taxation (Lehdonvirta, Reference Lehdonvirta2022, p. 3). In some cases, these companies have taken on key functions akin to the judicial systems of nation-states – eBay rules on more financial disputes (60 million) per year than any courts outside the United States hear on an annual basis (Lehdonvirta, Reference Lehdonvirta2022, p. 3).
4.7. Technology companies have accumulated power and governments are critically dependent
Technology companies are central to functioning democratic societies. In addition to the concentration of power outlined above, nation-state governments are critically dependent on Big Tech for the provision of government services. It is in the domain of US state and Big Tech relations that the consequences of the power concentration are already being felt. “Big Tech has become omnipresent and omnipotent in the policy process” (Khanal et al., Reference Khanal, Zhang and Taeihagh2024, p. 12). Their influence and resources render them semi-autonomous and semi-sovereign entities. States are also beginning to treat Big Tech like sovereign actors (Khanal et al., Reference Khanal, Zhang and Taeihagh2024). The immense power of technology companies can be seen in the geopolitical implications of their choices.
A wide range of companies are at the forefront of many national security threats, from data security and cyber security to telecommunications and critical infrastructure. Many of Australia’s government agencies are heavily reliant on digital and technology services from the dominant players, ranging from the provision of data cloud services, email and office systems to AI application trials. In July 2024, the Australian Government and Amazon announced an AU$2 billion partnership to build a data cloud to store classified Australian military and intelligence information. This will see secure data centres built in secret locations across the country to support the purpose-built Top-Secret Cloud, which will be run by a local subsidiary of Amazon Web Services (Greene, Reference Greene2024).
Decision making about national security continues in government. However, it also increasingly occurs in the private sector and especially within technology companies. Many of these decisions are consequential. As the Australian Privacy Commissioner Carly Kind said, “the major platforms are shaping society, they are not intermediaries” (Kind, Reference Kind2024). This creates new dynamics regarding commercial decisions within companies as well as between government agencies, policymakers, and private-sector companies. These dynamics – and the competing tensions and complexities – are playing out in contemporary global conflicts.
This concentration of power presents new challenges, strategic and operational, to state resort-to-force decision making. This article shows that the concentration of power across the architectures of AI is indisputable and unlike any of the challenges to state sovereignty envisioned by Krasner (Reference Krasner1999). The relationship between the US nation state and US-based technology companies is changing. Key technology actors are actively shaping the second Trump administration’s technology posture. Military and intelligence apparatuses cannot function without key technology companies. The latter control tools – among them, cloud systems and AI algorithms aimed at image and sound recognition, behaviour prediction and military targeting – that are essential for surveilling adversaries (and “allies”) and, if needed, for anticipating their moves on the battlefield (Coveri, Cozza & Guarascio, Reference Coveri, Cozza and Guarascio2025; Gawer, Reference Gawer2022).
Increasingly, this accumulated power is intersecting with political and governance processes in the US. For example, Meta, Amazon, Google, Microsoft and Uber – as well as the CEOs of OpenAI and Apple – all made unprecedented donations of US$1 million each to President-elect Donald Trump’s inaugural fund (Harwell, Reference Harwell2025). These same companies advocated for the Executive Order Removing Barriers to American Leadership in Artificial Intelligence signed by President Trump on 23 January 2025. The order sees the dismantling of AI oversight and prioritises commercial dominance over collaborative global governance (Shafiabady & O’Neil, Reference Shafiabady and O’Neil2025). Four senior technology executives from Meta, OpenAI and Palantir were also sworn in as senior US Army officers (Nieberg, Reference Nieberg2025). Another example of the role of Big Tech is Elon Musk’s advisory position in the Trump administration and his role as a co-chair of a “government efficiency” panel, which resulted in an unprecedented concentration of power and conflicts of interest (Stone, Reference Stone2024). X and Meta “are using this power in new ways that declare war on facts, denigrate groups of people,Footnote 10 enhance vulnerabilities for state-sponsored interference,Footnote 11 influence foreign politics and decrease inclusion and participation” (Hammond-Errey, Reference Hammond-Errey2025). Musk has also used X to endorse and promote far-right political candidates – subsequently designated as extremists – in the United Kingdom and Germany (Kelly, Reference Kelly2025).
Contemporary power is accumulated in and through the architectures of AI. The architectures of AI are predominantly owned by technology companies, meaning that data and the ability to derive insights from it reside largely in the private sector or are available for purchase (Crain, Reference Crain2016; Kitchin, Reference Kitchin2014). This concentration of power pervades energy generation and access, as well as connectivity and workforce. It also combines with the existing concentration of economic power as well as policy and decision-making influence. Thus, in the not-so-distant future, it may well be impossible for nation state governments to operate their national infrastructure without service provision from AI technology companies. This leaves states open to companies influencing resort-to-force decision making, either directly through coercion or indirectly through forms of influence on policy processes and incentivisation.
5. The diffusion of national security decision making
The architectures of AI are hastening an existing trend: decision making about national security and decisions that impact national security continue within government but are also increasingly occurring outside government (Hammond-Errey, Reference Hammond-Errey2024). This section shows how the influence of decision makers in AI technology companies has diffused national decision making and created new national security actors. This, in turn, has impacted national security and decision-making processes, including on the resort to force. The architectures of AI are predominantly owned by commercial entities: the data, and the ability to derive insights from it, largely reside in the private sector or are available for purchase (Crain, Reference Crain2016; Kitchin, Reference Kitchin2014). Technology companies are increasingly influencing government decision making, including on matters of war and security, and are heavily involved in public policy and shaping policy processes (Khanal et al., Reference Khanal, Zhang and Taeihagh2024). As noted above, their influence and resources render them semi-autonomous and semi-sovereign entities, and states are beginning to treat Big Tech like sovereign actors (Khanal et al., Reference Khanal, Zhang and Taeihagh2024). These are all signs of nation states recognising the power technology companies have accumulated and foreshadow the potential they have to exercise it by influence, coercion, or incentivisation.
Cyberspace raises questions about who counts as a national security decision maker and how intelligence communities should interact with them (Zegart, Reference Zegart2022, p. 274). Further, companies create, use and control vast troves of personal data (Birch & Bronson, Reference Birch and Bronson2022), data storage and computational capabilities (Lehdonvirta, Reference Lehdonvirta2023), and information flows (Santesteban & Longpre, Reference Santesteban and Longpre2020). They have billions of users (Kemp, Reference Kemp2023) from whom they collect data and over whom they also have varying degrees of influence (Davidson, Reference Davidson2017; Griffin, Reference Griffin2019; Zuboff, Reference Zuboff2019). Social reliance on digital infrastructure and providers of digital services means that many companies are a potential attack surface for national security threats in addition to being, increasingly, national security decision makers themselves (Hammond-Errey, Reference Hammond-Errey2024). Now, heads of companies – often in foreign countries – are national security actors. This adds a new dimension to decision making about war. For countries outside the US, this requires regulating technology and digital infrastructure while addressing citizen concerns about online harms and simultaneously considering government reliance on US technology companies. Technology companies have become important actors in modern conflicts, such as supporting Ukraine since the Russian invasion in February 2022 (Bresnick, Luong & Curlee, Reference Bresnick, Luong and Curlee2024).
The geopolitical implications of technology companies’ choices underscore their immense power. As noted above, consequential national security decision making increasingly occurs within technology companies as well as in government. Coveri, Cozza and Guarascio (Reference Coveri, Cozza and Guarascio2025, p. 2) argue that the interdependence between governments and technology companies “challenges the traditional distinction between the state and the market, blurring their boundaries and, most importantly, questioning the willingness (and ability) of the former to control (and discipline) the latter in the collective interest.” Furthermore, according to military scholars, key tech figures and companies are “hyping AI’s role in war” (Lushenko & Carter, Reference Lushenko and Carter2024). This creates new dynamics between commercial and national security decisions within companies as well as between government agencies, policymakers and private-sector companies. These dynamics, and the competing tensions and complexities, are playing out in contemporary global conflicts, even as the global order is changing.
Increasingly, technology companies are making specific choices about which services to provide to individuals and organisations at war. If a service on which an individual or organisation relies is provided, removed or threatened with removal, this clearly operates as a form of coercion or influence. As one research report sets out, over the course of Russia’s invasion of Ukraine, at least 18 privately held and publicly traded US technology companies offered services, often pro bono, in support of Kyiv’s war effort (Bresnick et al., Reference Bresnick, Luong and Curlee2024). “These companies have provided support for communications and intelligence, reconnaissance, and surveillance (ISR) functions, delivered information that helped inform target selection, as well as protected Ukraine’s critical infrastructure from cyberattack” (Bresnick et al., Reference Bresnick, Luong and Curlee2024, p. 6). In July 2024, the BBC reported that Microsoft shut down email and Skype accounts of Palestinians living outside Palestine who tried to contact their families in Gaza (Shalaby & Tidy, Reference Shalaby and Tidy2024). In May 2025, after the US placed sanctions on the International Criminal Court’s chief prosecutor, Karim Khan, Microsoft blocked his official email account (Quell, Reference Quell2025). This has caused significant concern for policymakers around the world, including in Europe and Australia, due to the reliance on technology providers and their ability to influence or coerce nation state access to technology and limit decision-making scope.
Throughout Russia’s invasion of Ukraine, Starlink provided online connections for civilian and military coordination (Lerman & Zakrzewski, Reference Lerman and Zakrzewski2022). The system – initially co-funded by Western governments (predominantly for the service’s terminals) and SpaceX (for the connection) (Lerman & Zakrzewski, Reference Lerman and Zakrzewski2022; Metz, Reference Metz2022) – was delivered at the start of the war and continued throughout the conflict, although not without disruptions at key moments and significant concern about continuity (Giles, Reference Giles2023). The high-profile and public reliance of the Ukrainian military and civilian infrastructure on private company services and essential digital infrastructure has exposed new vulnerabilities. In a more traditional arrangement, SpaceX subsequently signed an ongoing contract with the Pentagon to supply Starlink in Ukraine (Capaccio, Reference Capaccio2023).
Microsoft, Amazon and Google have also provided a range of services to Ukraine, including cybersecurity, the migration of critical government data to the cloud and keeping Ukraine connected during the Russian invasion (Bergengruen, Reference Bergengruen2024). Ukraine’s use of tools provided by companies like Palantir and Clearview also raises complicated questions about when and how invasive technology should be used in wartime, as well as how far privacy rights should extend (Bergengruen, Reference Bergengruen2024). Horowitz (Reference Horowitz2023, Reference Horowitz2024) provides an excellent outline of some of the legal considerations related to technology companies providing digital services in situations of armed conflict, particularly the relationship between critical infrastructure companies and cyber offensive and defensive action, all of which blend into the traditional role of the state.
The prominent public role of Elon Musk and Starlink technology provides insight into the significant ability of individuals and companies to impact and make national security decisions outside of the traditional national security apparatus of government. As Giles notes, “companies are providing capabilities that are vital to Ukraine’s national survival because they choose to, not because they are beholden to any of the states involved in the conflict” (Giles, Reference Giles2023). This is clearly a concern to the US establishment: a recent Georgetown Center for Security and Emerging Technology (CSET) report outlines US technology companies’ financial and operational entanglements in China and argues that such entanglements complicate their decision making in a potential Taiwan contingency (Bresnick et al., Reference Bresnick, Luong and Curlee2024).
The technology landscape is evolving much more quickly than academic literature can capture. Concentrated power across the architectures of AI is already playing out in different examples in conflict and policymaking globally, including Ukraine and Russia as well as Israel and Gaza. It is also impacting international institutions that adjudicate on global norms related to war, such as the International Criminal Court. Leading AI technology companies and their leaders are new actors in governing and decision making, including now in matters of war. They are unelected, largely unaccountable, and wield immense power to covertly influence, coerce or even compel states to act in accordance with their wishes. They are extremely powerful in each of the architectures of AI, and governments are deeply dependent on them. Together, the concentration of power across the architectures of AI and diffused national security decision making decrease the scope of state decision making, even in relation to the resort to force.
6. AI’s role in public opinion: curation, influence and interference
The role of AI is considered here in the context of shaping public opinion on war. AI is increasingly used by online news providers to curate and prioritise information and news. The social media platforms used most in Australia (Facebook, Instagram, WhatsApp and YouTube), as well as search services, are owned and operated by the large technology companies (Meta and Google) that have concentrated power across the architectures of AI. The AI algorithms and internal policies and processes behind social media – and in particular content discovery, curation, and moderation – shape public opinion. The applications and platforms we use “form a significant part of our information environment and have the capacity to shape us, including what we see, the choices we are presented with, what we think others believe, and ultimately how we might view the world” (Hammond-Errey, Reference Hammond-Errey2024b).
The role of public support in winning wars and the notion that wars are fought in the mind are central to military doctrine in the West (Libicki, Reference Libicki1995; Herbert & Kerr, Reference Herbert and Kerr2021). In contemporary Western scholarship on the topic of adversary threats, the evolution of the use of public information campaigns in warfare has given rise to notions of hybrid war and threats (Hoffman, Reference Hoffman2007), asymmetric war (Thornton, Reference Thornton2015) and grey-zone warfare (Hoffman, Reference Hoffman2016), as well as several related (and competing) constructions to explain these phenomena (Galeotti, Reference Galeotti2016; Selhorst, Reference Selhorst2016). From a Western perspective, information forms a part of a broader warfare strategy used mostly by nation states. States use information and communication technology in pursuit of a competitive advantage over an opponent. This is usually combined with other military activities and directed by military strategy. In other words, states use the power of information and, increasingly, of public information to influence and achieve strategic results (Hammond-Errey, Reference Hammond-Errey2017). Similarly, Janis Berzins (Reference Berzins2014) argues that the Russian view of modern warfare is based on the idea that the main battlespace is in the mind. Information is the principal tool in this fight, creating a version of reality that suits political and military purposes at all levels of warfare (Berzins, Reference Berzins2014, p. 5).
The interplay between public opinion and resort-to-force decision making in democracies has been the subject of long-standing controversy (Tomz, Weeks & Yarhi-Milo, Reference Tomz, Weeks and Yarhi-Milo2020). Nevertheless, public opinion is inextricably linked with war. “Because battle is a matter of politics, as well as combat, the battlefield is not the only place where information is important. Information helps to shape opinion: let’s go to war; let’s not go to war; let’s escalate and win an existing war; let’s disengage from that war” (Seib, Reference Seib2021, pp. 185–186). Arguably, contemporary wars are more about control of the population and the political decision-making process than about control over territory (Nissen, Reference Nissen2015). Thus, information dominance, both overt (public disinformation) and covert (hidden cyber and infrastructure attacks, public opinion shaping), will almost certainly continue to evolve as an issue of national security in unprecedented ways (Hammond-Errey, Reference Hammond-Errey2016).
The digital landscape we rely on for economic growth, political stability and social interaction has created an ecosystem that is vulnerable to information influence and interference at individual and national levels (Hammond-Errey, Reference Hammond-Errey2022). People live a significant portion of their lives online and increasingly across the architectures of AI. This has the capacity to shape an individual, including what they see, what their options are, what choices they have, what they think others believe, and ultimately how they view the world (Hammond-Errey, Reference Hammond-Errey2024). Fragmented media landscapes and precise targeting lead to an increase in political and social polarisation (Prummer, Reference Prummer2020). An evolving technology landscape, AI, and algorithmic content curation, as well as increased foreign interference efforts, worsen the outlook.
Despite their ubiquity, understanding precisely how information and social media platforms impact public opinion remains difficult (Bradshaw & Howard, Reference Bradshaw and Howard2017), including in resort-to-force decision making. It is, however, clear that the strategies and techniques used by malign actors have an impact, and that their activities violate the norms of democratic practice (Bradshaw & Howard, Reference Bradshaw and Howard2018). “The computational architecture underpinning major social media platforms has so many parts as to defy understanding by any individual inquirer” (Lazar, Reference Lazar, Sobel and Wall2024, np). Large technology companies create, use and control vast troves of personal data (Birch & Bronson, Reference Birch and Bronson2022). They have billions of users (Kemp, Reference Kemp2023) from whom they collect data and over whom they also have varying degrees of influence (Davidson, Reference Davidson2017; Griffin, Reference Griffin2019; Zuboff, Reference Zuboff2019). They deploy complex systems involving layers of human content moderation, platform design and amplification algorithms that integrate standard programming, machine learning (ML) models, user interface design and vast amounts of data, from both on-platform behaviour and tracked behaviour online (Lazar, Reference Lazar2022).
There is an urgent research need to explore how influence and interference work at the cognitive level on contemporary information platforms and how big data and AI can influence behaviour (Hammond-Errey, Reference Hammond-Errey2024; Grahn, Häkkinen, & Taipalus, Reference Grahn, Häkkinen, Taipalus and Lehto2024). Cognitive processes and vulnerabilities such as perception and attention, emotion, decision making, memory, metacognition, trust and cognitive bias can be exploited in the digital ecosystem (Grahn & Pamment, Reference Grahn and Pamment2024). Since the broader concept of influence refers to the shaping of people’s thoughts, beliefs and perceptions of the world, it is important to gain a greater understanding of how mental and psychological processes contribute to individuals’ susceptibility to such influence (Grahn & Taipalus, Reference Grahn and Taipalus2025). This includes, for instance, understanding how people process, store and retrieve information; how cognitive biases affect decision making; and how emotion impacts judgment. By advancing such understanding, cognitive science can contribute to a deeper comprehension of how disinformation influences human behaviour (Grahn & Pamment, Reference Grahn and Pamment2024).
Foreign influence and interference psychologically target individuals to influence or manipulate and alter perceptions of events by distorting the information environment in ways that benefit the sender of the information (Starbird, Arif & Wilson, Reference Starbird, Arif and Wilson2019; Pamment & Isaksson, Reference Pamment and Isaksson2024). Such actions work by reshaping individuals’ cognitions, or the mental frameworks people use to understand, interpret and respond to the world around them (Grahn & Pamment, Reference Grahn and Pamment2024). The technology-fuelled digital landscape “allows … entities to utilise cyberspace to conduct operations that are tactically and strategically similar and also lowers the costs of collaboration between foreign and domestic malign entities” (Dowling, Reference Dowling2021). Conflicts are increasingly waged in digital environments and within the cognitive realm of individuals’ minds (Bērzin̦š, Reference Bērziņš2019; Tashev, Purcell & McLaughlin, Reference Tashev, Purcell and McLaughlin2019).
The use of AI in social media is changing the political equation of public support and accumulating power in the technology companies that can, often opaquely, wield that power in their provision of social media platforms. The focus of AI in social media is narrowed in this context to include the algorithms used for content discovery, curation, and moderation.Footnote 12 However, it is situated as part of the broader “information environment,”Footnote 13 which comprises informational, cognitive and physical dimensions (GAO [US Government Accountability Office], 2022) in an attempt to understand “how human beings use information to influence the direction and outcome of competition and conflict” (Ehlers & Blannin, Reference Ehlers and Blannin2020). Propaganda and information targeting the cognitive capacities of individuals are used by a range of hostile actors for warfare, radicalisation and disinformation (Claverie, Reference Claverie2025). Cognitive processes such as attention, memory and emotional responses are often exploited by influence operations, and understanding these processes offers insight into points of vulnerability (Grahn & Pamment, Reference Grahn and Pamment2024).
The information environment provides a useful framework for understanding information and technology challenges in three prongs: information or content – the information itself; digital landscape or infrastructure – the platforms and systems of creation, distribution and use; and cognitive or human resilience – our own engagement with information and the social context within which it is embedded (Hammond-Errey, Reference Hammond-Errey2022). The information environment “encompasses everything from human influence right through to information warfare, from peacetime to acute, large-scale conflict” (Hammond-Errey, Reference Hammond-Errey2022, np). Many aspects of the information environment impact deliberations on resort to force. The concept of the information environment is vital because it is a reminder that to influence public opinion, content alone is not enough. To be effective, attempts to influence public opinion must have some cognitive impact or influence. Taken in conjunction with the concentration of power in the architectures of AI and the capacity to impact real-world decisions and public opinion at once, this represents unprecedented change.
The capacity of social media and news media companies to influence our information environment is undeniable. In the context of content moderation, Donovan (Reference Donovan2019) writes, "At every level of the tech stack, corporations are placed in positions to make value judgments regarding the legitimacy of content, including who should have access, and when and how." While most of the conversation about AI and mis- and disinformation revolves around content moderation, the impact of such content is limited until it is distributed (Hammond-Errey, Reference Hammond-Errey2024c). Distribution is therefore an essential consideration in any analysis of AI and public opinion, and it explains why the lens of the tech stack is so valuable. Looking at the infrastructure dimension of the information environment highlights how global internet infrastructure has become an "emerging terrain of disinformation" (Bradshaw & Denardis, Reference Bradshaw and Denardis2022). The technical structure of social media platforms leaves users vulnerable to information influence and interference (Hammond-Errey, Reference Hammond-Errey2019), including from foreign actors and algorithmic processes. Social media can be used to erode trust in political institutions; to spread harmful disinformation; and to incite hate, polarisation and anti-democratic sentiment (Khalil, Reference Khalil2024).
Most Australians use social media and increasingly rely on it to access their news.Footnote 14 The market share of services used by adult users (aged 16–64) is increasingly narrow in sectors such as search, where Google accounts for 94.5 percent of the market (Digital 2024, 2024, p. 41), and in social media, where Meta (Facebook, WhatsApp and Instagram) dominates.Footnote 15 These companies' ability to shape audiences' thinking and influence behaviour is therefore high, as are their data collection capabilities. Social media saturation is very high, with almost all Australians using social media services.Footnote 16
Social media platforms use a variety of techniques, mostly focused on increasing engagement, to exert influence over their users. These include algorithmic discovery and recommender systems, which increase the influence and editorial power that applications and platforms have over user experiences, including the content users see. Some platforms, such as TikTok, weight their algorithms more heavily towards interests (and interest categories), whereas Facebook weights towards social connections such as followers and friends (Hammond-Errey, Reference Hammond-Errey2024b). This weighting matters because it significantly shapes user experience, including content selection and information "bubbles"; yet the algorithms are largely opaque to users and constantly evolving.
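To make the weighting point concrete, the following is a minimal, purely illustrative sketch of how a recommender system might blend an interest-match signal with a social-connection signal. The scoring function, weights and field names here are assumptions for exposition only; they do not represent any platform's actual system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    author: str

def score(post: Post, interests: dict[str, float], follows: set[str],
          interest_weight: float, social_weight: float) -> float:
    # Interest signal: how strongly the user's inferred interests match the post topic.
    interest_signal = interests.get(post.topic, 0.0)
    # Social signal: whether the post comes from an account the user follows.
    social_signal = 1.0 if post.author in follows else 0.0
    # The platform-chosen weights, not the user, decide which signal dominates.
    return interest_weight * interest_signal + social_weight * social_signal

posts = [Post("geopolitics", "stranger"), Post("cooking", "friend")]
interests = {"geopolitics": 0.9, "cooking": 0.2}
follows = {"friend"}

# An interest-led weighting surfaces the geopolitics post from a stranger;
# a social-graph-led weighting surfaces the friend's post instead.
for iw, sw in [(0.9, 0.1), (0.1, 0.9)]:
    ranked = sorted(posts, key=lambda p: score(p, interests, follows, iw, sw),
                    reverse=True)
    print(f"interest={iw}, social={sw}:", [p.topic for p in ranked])
```

The point of the sketch is simply that a single pair of weights, invisible to the user and adjustable at any time by the platform, determines which content surfaces first.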
Given increased political polarisation as well as social media influence and interference – including in the context of the Israel-Hamas/Gaza conflict – obtaining and maintaining public support for military action now has a serious social media dimension. The opacity of social media algorithms makes research in this area extremely difficult to conduct. However, public opinion about who shot down MH17 is indicative of the power of a censored media environment. In 2014 in Russia, where information is heavily mediated by the state, 97 percent of Russians did not believe Russian separatists were responsible for shooting down MH17, almost the inverse of public opinion globally (Luhn, Reference Luhn2014). Social media platforms are actively shaping public opinion about conflict and influencing what users can see, shaping their perceptions about whether governments can and should resort to force. In short, technology companies are moderating, shaping, influencing and ultimately gaining power and control over public opinion.
7. Discussion and implications for policymakers
The thesis of this article is that the role of AI in concentrating power, diffusing national security decision making, and shaping public opinion alters the political calculus and practical realities of going to war. There are key policy implications and considerations that follow from this, primarily focused on the imperative to improve understanding of the architectures of AI, the technology stack, and how they impact deliberations on the use of force.
Policymakers need an increased understanding of the tech stack, as well as of the inherent interdependencies and vulnerabilities in the technology ecosystem and the fragility of the architectures of AI. Based on this analysis, I recommend that technology literacy training programmes be designed specifically for politicians as well as policy, intelligence and military leaders, and that these be delivered by independent organisations. I also recommend that government and Defence invest in mapping the architectures of digital infrastructure and AI capabilities within their borders and regions. Further, governments should invest in research to develop a comprehensive picture of the physical and digital architectures of AI, including critical dependencies and vulnerabilities for Australia, our allies and the region, and of how access and power are distributed. Governments should also fund forecasting of future technology dependencies, specifically for their national government, defence forces and intelligence functions, over the next two to five years as well as over longer investment horizons.
There is an urgent need to significantly increase awareness of government reliance on the architectures of AI, especially for critical government functions and functions of war. I recommend that government, military and intelligence leadership be educated about these dependencies through research, the establishment of advisory boards, forums and professional training. It is also recommended that intelligence agencies increase intelligence collection on critical and potential capabilities, and that independent technical advisors be appointed, funded and security cleared to support intelligence agencies in this task. Without this, future government decision making will likely be constrained. Governments need to increase national investment to build public sovereign capabilities where needed. I also recommend that governments invest now in further understanding the intersections of biology and technology in order to create safe and secure cyber-physical systems over the horizon. Developing cognitive resilience in Australian leaders and the Australian people will be vital for national survival.
An increase in emerging technology literacy is urgently needed across governments. I recommend that governments immediately prioritise increasing the depth and scope of their understanding of technology ecosystems, policies, and impacts on governing, security, warfare and public safety. This increase in awareness must include technology developments, the role of technology policy in Australia and globally, and the role that multilateral technology forums play in affecting AI capabilities and dependencies across the whole of government. Such training must be delivered by organisations that combine technical and security expertise, are independent, and bring a technology ecosystem perspective alongside consideration of social harms and national security threats. This must occur immediately, or the window for shaping this ecosystem will be lost.
More research on social media and its impact on human cognition, as well as on the functions of government, is urgently needed. Governments need to invest heavily in research, including research on democracy, on the use of the information environment to influence public decisions on resort to force, and on foreign interference. I recommend that governments ensure data is available from social media platforms for this research, whether through negotiation or regulation. Moreover, I recommend that funding for this research be made available to a wide range of researchers, civil society groups, academic institutions and consortiums to ensure it is representative of state demographics. There is an urgent research need to explore how foreign influence and interference work at the cognitive level on contemporary information platforms and how AI can be used to understand and influence people's behaviour. I recommend that governments invest immediately in research that reveals how cognitive processes and vulnerabilities (such as perception and attention, emotion, decision making, memory, metacognition and trust) as well as cognitive biases can be exploited by malign actors in the digital ecosystem. Governments must also be willing to ensure that algorithmic processes, as well as internal policies and procedures within social media service providers, contribute to, or are at least consistent with, the democratic principles of the country of sale. This is essential and requires immediate action.
8. Conclusion
This article has demonstrated how the architectures of AI – encompassing data, connectivity, energy, compute capacity and workforce – are consequential for the independent and sovereign decision making of governments on matters of war and peace. By concentrating power and diffusing national security decision making, and by shaping public opinion through AI-driven platforms, technology companies have emerged as national security actors. The scope and concentration of power within these firms mean that most nation states possess, or will soon possess, less autonomy in resort-to-force decisions, whether through direct coercion, indirect influence or strategic incentives.
The power asymmetries between governments and technology companies continue to grow, challenging a foundational principle of state sovereignty: the independent authority to declare and wage war. If nation states are to retain their sovereign independence in this domain, significant political will and urgent action are required. Policymakers must develop a deeper understanding of the interdependencies and vulnerabilities inherent in the technology ecosystem and devise strategies to contain the influence of private companies.
Competing interests
The author declares none.
Funding statement
The author declares none.
Dr Miah Hammond-Errey is the founding CEO of Strat Futures Pty Limited, host of the Technology & Security Podcast and an adjunct Associate Professor at Deakin University. She leads the development of a platform to improve cognitive readiness and resilience in leaders who make high-stakes decisions. Dr Hammond-Errey spent 18 years leading federal government analysis, operational and liaison activities in Australia, Europe and Asia. She has been awarded medals, citations and awards for operational service, leadership and excellence. Her book, Big Data, Emerging Technologies and Intelligence: National Security Disrupted (2024), is based on her PhD. She developed the Information Influence and Interference (I3) Framework at ANU to counter Russian disinformation. Previously, she established and led think tank programmes on emerging technologies and information operations. Dr Hammond-Errey publishes and presents at the intersection of technology and security and teaches postgraduate students cyber security, emerging technologies and national security.