
Silicon Sovereigns: Artificial Intelligence, International Law, and the Tech-Industrial Complex

Review products

AI Needs You: How We Can Change AI’s Future and Save Our Own. By Verity Harding. Princeton, NJ: Princeton University Press, 2024. Pp. x, 274. Index.

Technological Internationalism and World Order: Aviation, Atomic Energy, and the Search for International Peace, 1920–1950. By Waqar H. Zaidi. Cambridge, UK: Cambridge University Press, 2023. Pp. xiv, 300. Index.

Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. By Daron Acemoglu & Simon Johnson. New York: Public Affairs, 2023. Pp. xv, 546. Index.

Published online by Cambridge University Press:  16 January 2026



Type
Review Essay
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press on behalf of American Society of International Law

Debates over the governance of artificial intelligence (AI) tend to assume that it will be important and transformative across many areas of human endeavor.Footnote 1 Though filtering out the hype can be challenging,Footnote 2 the last round of Nobel Prizes supports at least some of these claims: the prize in physics was awarded for foundational work on machine learning, while the one for chemistry went to researchers using AI to model protein folding.Footnote 3 The focus in such debates is typically over how to reap the benefits of AI while minimizing or mitigating known and unknown risks.Footnote 4 Yet, the question of how those benefits and risks will be distributed—who will win and who will lose—is less commonly articulated.

Techno-utopians enthuse that everyone will win: the pie will be bigger; the rising tide will lift all boats.Footnote 5 Concerns about inequality or the environmental impact of AIFootnote 6 are batted aside with the promise that AI itself will solve such problems.Footnote 7 Others, including a surprising fraction of those developing AI systems themselves, warn of darker, dystopian futures in which AI turns on humanity, either through misalignment of objectives or the emergence of a superintelligence that regards its creators in the way that we might regard lesser creatures such as dogs—or ants.Footnote 8 Everyone loses.

Between the extremes are those trying to think through where the gains and losses of AI will fall. In realist circles, it has become common to speak of AI in the language of an arms race, a comfortingly familiar frame that pits the West against a rising China.Footnote 9 Anu Bradford’s Digital Empires, for example, posits a battle for regulatory pre-eminence between the market-based U.S. model, a rights-based approach favored in the European Union, and China’s state-centric regime. At stake, she argues, is nothing less than the “soul of the digital economy.”Footnote 10 Her book captures the geopolitical moment, in particular the Sino-U.S. tensions playing out over access to high-performance computing power.Footnote 11 Arch-realist Henry Kissinger—unable to step away from the game of realpolitik even as he approached a hundred years of age—spent his final months writing on AI and warning of its geopolitical implications.Footnote 12

An alternative framing adopts a North-South axis, noting the 750 million people without stable electricity and the more than two billion unconnected to the internet.Footnote 13 A report by the United Nations AI Advisory Body examined some of the most prominent efforts at international governance of AI and found that seven countries (the wealthy, industrialized members of the G7) are party to all of them, while more than a hundred other states are party to none.Footnote 14 For all the worry about misuse of AI, many in developing countries are more concerned about missed uses and being left behind.

Yet, the most important divide may not be East-West or North-South but public-private. For AI is shifting economic and, increasingly, political power away from governments.Footnote 15 That is most obvious in the deployment of ever more powerful products with minimal regulatory oversight. But it is also true at the level of fundamental research. Machine learning models that power systems like ChatGPT began in publicly funded universities; by 2022, of the dozens of significant models tracked by Stanford’s AI index, all but three were released by industry.Footnote 16 Outside of the European Union, states have been wary of introducing new laws to regulate AI for fear of losing a competitive advantage or driving innovation elsewhere.Footnote 17 Efforts to regulate technology companies by applying existing laws—prominently including antitrust and intellectual property—have seen government lawyers hopelessly outgunned by their corporate counterparts.Footnote 18

All of this poses a challenge to public lawyers generally and international lawyers in particular. In structural terms, if the twentieth century saw a turn from bilateralism to multilateralism, with the emergence of truly international organizations, the twenty-first century may be witnessing a fracturing of those structures—and indeed a decline in the preeminence of states as the primary political vehicle on the global stage. Though corporatist critiques of the modern political order are hardly new,Footnote 19 the nature and scale of the power wielded by today’s tech giants rivals the role occupied by the East India Company in the early nineteenth century, when it controlled half of global trade and had its own army.Footnote 20 Today’s tech behemoths may lack that measure of economic or military power, but their global cultural and political influence is arguably greater.Footnote 21

Another dimension of the governance challenge posed by AI is time. The speed with which innovations and new capacities are now launched recalls the Red Queen’s advice to Alice in Through the Looking-Glass: it takes all the running you can do just to stay in the same place; if you want to get somewhere else, you must run at least twice as fast as that.Footnote 22 The pace of change is reflected in the academic work of computer scientists, whose stock in trade is not books or even journal articles, but conference proceedings.Footnote 23 A relentless presentism can lead observers to overestimate the impact of a new technology in the short term even as they underestimate it in the long term, a phenomenon sometimes termed “Amara’s law.”Footnote 24

Such presentism and the efforts to forecast possible futures can blind us to the realization that it may be more profitable not to look forward but to look back. This is not the first time humanity has confronted a technology with the potential for good or ill, giving rise to a clash between public and private interests—nor will it be the last. In distinct ways, each of the three books considered in this essay encourages a historicist turn, situating AI and related technologies in their historical moment and seeking lessons from past technological revolutions that similarly challenged norms, even as they revealed or exacerbated inequalities through the distribution of benefits and risks.

Verity Harding’s AI Needs You draws on three late-twentieth-century examples: the space race, in vitro fertilization, and the internet.Footnote 25 Waqar Zaidi looks back further in that century to the emergence of aviation and atomic energy, along with the rise of what he terms “technological internationalism.”Footnote 26 Daron Acemoglu and Simon Johnson propose a more ambitious sweep of a thousand years of “power and progress,” although they mostly focus on the industrial revolution onward.Footnote 27

None of these books explicitly foregrounds international law or institutions as the mechanism that should play a leading role in regulating emerging technologies like AI. Nevertheless, each has much to say about the possibilities and limitations of global efforts to govern them—if only through the frustration the authors variously express at the inadequacy of market- and state-based efforts. Indeed, if there is a throughline that resonates with each work, it is the catastrophic mismatch between those who are developing AI and those who are using it. That inequality of arms is, in turn, exacerbated by the incentives that encourage technology companies to “move fast and break things”Footnote 28 while governments are left flatfooted or left behind.

I. Artificial Intelligence Needs Who?

Harding’s résumé gives her a unique perspective on the various camps involved in technology policy, including stints as global head of policy at Google DeepMind and as an adviser to Nick Clegg, when he was deputy prime minister of Britain. She now directs the AI & Geopolitics Project at Cambridge University’s Bennett Institute for Public Policy. Her stated objective in AI Needs You is to seek lessons from past technological transformations, though she sets a somewhat arbitrary temporal cut-off at World War II and largely limits herself geographically to the United States and Britain. Indeed, the “you” of the title is often explicitly directed at those in Western democracies.Footnote 29

Nonetheless, her book is an urgent challenge to the passivity with which many people and governments appear to view the development of AI today. Like it or not, we are already shaping this emergent technology. Well-known problems, such as the potential for discrimination, are not a reflection of any inherent bias on the part of our silicon creations. On the contrary, such bias is typically a faithful reflection of the data that we have, directly or indirectly, fed into these systems. AI “is not human,” Harding observes. “But it is us” (Harding, p. 12).Footnote 30

Harding chooses three technological inflection points to argue that democratic societies can enable “a myriad of citizens” (Harding, p. 26) to take an active role in shaping the future of AI.Footnote 31 The first is the Space Race—the Cold War rivalry between the United States and the Soviet Union as each sought dominance in orbit and beyond. Technological advances in spaceflight had clear implications for security, including the ability to launch missiles and position satellites for reconnaissance, but efforts to launch uncrewed and then crewed vessels beyond our atmosphere captured the wider public imagination.Footnote 32 Harding argues that U.S. Presidents Eisenhower, Kennedy, and Johnson showed a willingness to balance national defense and the “greater ideals of international cooperation and pacificism” (Harding, p. 26), culminating in the Outer Space Treaty of 1967.Footnote 33 AI, she argues—exaggerating, to be sure—“like space in the middle of the twentieth century, is a new frontier and a blank sheet for global norms” (Harding, p. 65).Footnote 34

Her second analogy is in vitro fertilization (IVF). Following the birth of Louise Joy Brown in England in 1978, the first person conceived using the new technique, a biotechnology revolution was shaped by the “careful setting of boundaries and pursuit of consensus” (Harding, p. 26). Of the examples considered in her book, Harding finds the greatest similarity between handwringing over AI today and the biotech debates of the 1970s and 1980s, in particular fears “about corporate influence, about an unknown future, about what it means to be a human” (Harding, p. 76). Yet the goal of fertility treatment—enabling otherwise infertile families to have a child—is unusually clear and the red lines beyond it more easily debated and drawn. In the period Harding considers, these included limits on human embryo research, with provision for licensing and a “fourteen-day rule” that banned experimentation on embryos more than two weeks after fertilization (Harding, p. 95). Britain’s IVF legislation may have been progressive, but Harding over-eggs it somewhat as being “the most innovative and world-leading scientific regulation of the century” (Harding, p. 92).

The third comparison is the early development of what became the internet, which she describes as “a story of convergence: between baby boomers who grew up believing the promise of 1960s liberalism and those who felt betrayed by it; between young modernizing progressive politicians and the business titans of the new Gilded Age; and between the newly developed ‘internet community’ and those tasked with regulating it” (Harding, p. 126). She notes in particular the manner in which the Internet Corporation for Assigned Names and Numbers (ICANN) established a free and open global network through multistakeholder and multinational cooperation, epitomized by “unglamorous efforts by normal people in meeting rooms trying to make things work” (Harding, p. 27).

These are curious examples on which to rest an argument for wider and more participatory public involvement in charting the path of new technologies. Space exploration is today driven precisely by private interests, with the U.S. space program dominated by a single individual—Elon Musk—who also happens to be a leading figure in AI.Footnote 35 Reproductive rights have become some of the most politically divisive issues in the United States.Footnote 36 As for governance of the internet, decentralized control might preserve freedom and openness while encouraging innovation, but that openness has also allowed the proliferation of tools that enhance surveillance, monetize human attention, and replace human labor.Footnote 37 (Better examples of control might have included the limits placed on weapons of mass destruction, which have largely remained unused and in state hands;Footnote 38 positive examples of deployment of technology for widespread benefit might have included public utilities, from water and sanitation services to the electricity grid.Footnote 39)

Nonetheless, Harding’s central message is that the goal of developing AI that recognizes our weaknesses, aligns with our strengths, and serves the public good requires greater participation. Her book is most compelling in its argument that the future cannot be left to the innovators and disruptors alone. Unfortunately, she concedes (drawing on personal experience), compromise, humility, and “acceptance that your world view might not be correct” are not qualities found in abundance in the tech industry (Harding, p. 120).

The absence of those qualities amplified the breakdown of trust that she documents, starting with the Snowden revelations of 2013, which showed that even democracies were abusing the surveillance potential of the digital world. That, in turn, was compounded by the realization that the economic model of this new world also relies on corporations harvesting vast amounts of data: “Artificial Intelligence is being built in an environment of shattered trust—between citizens and their governments, between governments and powerful corporations, and between political ideologies of liberal democracy and authoritarianism” (Harding, pp. 210–11).Footnote 40

Though the title of her book and its closing words are a call to action to the global (or at least Western) “you,” there is also a strong thread of the “great man of history” to Harding’s account. In her telling, the success of the Space Race depended on exceptional leadership by three American presidents. IVF’s threading of the needle of compromise was due in significant part to the personal qualities of Baroness Warnock and Prime Minister Margaret Thatcher. And much of the discussion of the early internet focuses on the pivotal role played by the junior senator from Tennessee, Al Gore.

Indeed, one of Harding’s first lessons is the importance of “powerful political leadership to exert influence over the future direction of technology, and humanity” (Harding, pp. 65–66). Today, “while we have the technological power to lead, there is neither the political will nor capacity to do so in a way that could generate benefits for humanity worldwide” (Harding, p. 38).

Unfortunately, Donald Trump’s second presidential term offers little hope of addressing such an abdication of leadership—on the contrary, he appears to have doubled down on it with his elevation of tech titans above his own cabinet at his inauguration and offering Elon Musk office space adjacent to the White HouseFootnote 41—even if that tenancy was predictably brief.Footnote 42 One of Trump’s first acts as president was to roll back even the thin gruel of his predecessor’s executive order that had sought to bypass legislative deadlock and address at least a few of the risks posed by AI through administrative action.Footnote 43

In response to the vacuum, some saw opportunity. Enthusiasts in the EU had long hoped that its first-mover status on general AI regulation would give rise to a “Brussels effect” analogous to the impact the EU has had on data protection worldwide.Footnote 44 China has also experimented with domestic regulation and flirted with a thin layer of coordination at the global level.Footnote 45 Yet the greatest consequence has been to slow down any moves towards serious governance of AI and further elevate the status and leverage of technology companies. One indication was the rise and fall of AI safety, from the AI Safety Summit at Bletchley Park in Britain in November 2023, to a second meeting of what appeared to be a growing AI safety network in Seoul the following May. Yoshua Bengio led a team that produced the International AI Safety Report in January 2025.Footnote 46 By the time the meeting moved to Paris in February 2025, however, it had been renamed the “AI Action Summit” and at times resembled less an intergovernmental meeting to rein in the private sector than it did a trade fair.Footnote 47

In a section toward the end of her book, aptly titled “The Red, White, and Blue Elephant in the Room” (Harding, p. 231), Harding argues that, half a century after the Outer Space Treaty was signed, the United States has another chance to lead. She invokes the advice Warren Buffett famously offered the musician Bono, who was seeking a strategy to convince the United States to support funding to fight AIDS in Africa: “Don’t appeal to the conscience of America, appeal to its greatness.”Footnote 48 For the time being, however, with regard to technology policy at least, the United States appears content not to lead but to follow.

II. The Rise and Fall of Technological Internationalism

The isolationism of Donald Trump is hardly unique in U.S. politics. George Washington himself famously abjured foreign entanglements in his farewell address, though subsequent presidents quickly discovered that was easier said than done.Footnote 49 Indeed, his successors were among the great architects of the modern international order—Woodrow Wilson and the League of Nations, Franklin Roosevelt and the UN.Footnote 50 Waqar Zaidi taps into this theme in his chronicle of efforts to internationalize two technologies that came to dominate twentieth century warfare: aviation and atomic energy (Zaidi, p. 26). Each case saw efforts, notably on the part of U.S. and British internationalists, to take them out of government hands and put them under the control of international organizations. Based on a doctoral thesis completed at Imperial College London under the supervision of historian David Edgerton, Zaidi keeps his focus narrow. Indeed, the term “Artificial Intelligence” appears but once on the very last page of the book—a single sentence drily noting that, like aviation and atomic energy, AI today offers economic growth, arms races, “and possibly the extinction of the human race” (Zaidi, p. 247).

In the eyes of the activists of their time, aviation and atomic energy were also potential vehicles for peace.Footnote 51 Such awesome power both demanded and made possible the creation of a new liberal world order. In the case of air power, civil aviation could bind the globe through trade and communication, while an international air force oversaw collective security (Zaidi, pp. 107–11); after the devastation of World War II, international control of nuclear weapons could prevent catastrophic conflict, strengthening the fledgling United Nations (Zaidi, pp. 204–09).

Both efforts failed. Yet their failures illuminate the politics of the time and the social currents that supported them, echoes of which lived on in the more modest oversight of civil aviation and atomic energy that did manifest, as well as in techno-globalist rhetoric that continues today.

In retrospect, those earlier efforts were part of a liberal internationalist arc that saw its beginnings in the nineteenth century (Zaidi, pp. 20–23).Footnote 52 In form, it saw the shift from bilateralism to multilateralism, notably including the first international organizations recognizable as such.Footnote 53 In substance, it encompassed efforts to restrain the use of force as well as to position international law as the conscience of the “civilized” world.Footnote 54 There is a darker aspect to this history, of course. Much as Martti Koskenniemi traced the throughline from colonialism to modern human rights,Footnote 55 proposals to use air power to maintain global order applied lessons learned in the governance of far-flung territories of empire.Footnote 56 The civilizing mission has long depended on methods that are far from civil.

The destructive power of these new technologies, along with enormous potential benefits—including, in the case of nuclear power, the prospect of electricity “too cheap to meter”Footnote 57—saw the emergence of the technological internationalism of Zaidi’s title. He tracks the emergence of a loose consensus that the new “machine age” required international governance through technical expertise.

Many early proposals for notionally global security forces were in fact premised on specific countries’ armed services—notably one’s own—operating under an international flag (Zaidi, pp. 59–60). The Economist of the 1930s was wary of French ambitions in particular, but concluded that the case for internationalization was “overwhelming” as “[f]lying is supra-national in its very nature.”Footnote 58 Discussion about the internationalization of aviation became more mainstream in the course of that decade, including when popularized in H.G. Wells’s book and later film The Shape of Things to Come.Footnote 59

The horrors of World War II led to the zenith of such aspirations, with plans for a post-war order including widespread discussion of an internationalized air force. Indeed, the 1944 meeting of the American Society of International Law featured detailed discussion of the merits of such a force, which could provide “an effective spearpoint for police action capable of immediate use by the international council at any point where aggression might occur or be threatened.”Footnote 60 By the Dumbarton Oaks Conference some months later, however, enthusiasm for a truly international or exclusively aerial combat force had waned. The Soviet delegation raised the possibility of such an entity, but the compromise was that national contingents would be made available to the proposed Security Council “on its call and in accordance with a special agreement or agreements.”Footnote 61 This was later formalized as Article 43 of the UN Charter, with a commitment that the specifics of the “numbers and types of forces, their degree of readiness and general location” would be negotiated “as soon as possible.”Footnote 62 Despite several enforcement actions over the succeeding decades, not a single agreement as envisaged by the Charter has yet been concluded.Footnote 63

The longevity of the technological internationalist view, at least among a certain class of intellectuals, can be seen in proposals that the latest technology offering tremendous benefits alongside real risks—AI—should also be governed in some measure by a cadre of global experts. The analogy with nuclear energy and the atomic bomb in particular is well-worn, having been embraced by academics,Footnote 64 leaders of technology companies,Footnote 65 and the secretary-general of the United Nations himself.Footnote 66

The limits of that analogy are obvious. Nuclear energy comprises a well-defined set of techniques using specific materials that are unevenly distributed around the world. AI, by contrast, is an amorphous term whose applications are extremely wide and difficult to contain. Atomic bombs are expensive to build and difficult to hide; weaponized AI promises to be neither.

Still larger problems may be political and structural. Politically, there is no appetite for anything remotely as elaborate as the International Atomic Energy Agency (IAEA) for AI.Footnote 67 Despite early enthusiasm on the part of the secretary-general,Footnote 68 the Global Digital Compact, adopted by member states in late 2024, did not even mention an agency, calling instead for the UN secretary-general to submit a proposal for an office that would draw upon existing resources to “facilitate system-wide coordination” in relation to AI.Footnote 69

Structurally, international organizations like the UN are often ill-suited to—and at times vehemently opposed to—the direct participation of private sector actors. In March 2024, for example, the General Assembly adopted its first ever resolution on regulating AI. The non-binding document calls on member states “and, where applicable, other stakeholders” not to use AI systems that pose undue risks to the enjoyment of human rights.Footnote 70 A few paragraphs later, the Assembly “encourages the private sector to adhere to applicable international and domestic laws.”Footnote 71

Once more, we confront a mismatch between the interest in regulating and the political mechanisms that might enable it. Zaidi concludes his book with the observation that technology through the ages has tapped into our existential hopes and fears, often emerging as “carriers of our dreams and nightmares” (Zaidi, p. 247). With regard to AI, at least, waking up to address this latest challenge requires more than waiting for the United States to assert its role as a shining city on a hill, or hoping that the United Nations will somehow save humanity from its silicon creations.Footnote 72

III. Reining in the Digital Robber Barons

Where Harding and Zaidi limit themselves to technological changes within living memory of many people, Acemoglu and Johnson take a more expansive view, aiming to encompass a millennium of progress in around half a thousand pages. For the paperback edition of their work, the two MIT professors—who were among the few 2024 Nobel laureates whose prize was not connected to or reliant on AIFootnote 73—summarize a key finding as being that “really bad outcomes are possible when deluded technology leaders are able to impose their messianic visions on society” (Acemoglu & Johnson, p. xiii).

None of the books discussed in this essay is entirely pessimistic about AI. Yet all are deeply suspicious of the claims of techno-utopians that AI will yield nothing but benefits. The venture capitalist Marc Andreessen epitomized this Panglossian view in a document literally called “The Techno-Optimist Manifesto,” asserting that the productivity boost from technologies such as AI “drives wages up, not down. This is perhaps the most counterintuitive idea in all of economics, but it’s true, and we have 300 years of history that prove it.”Footnote 74 (It is not true, and he does not prove it.) Amazon’s Jeff Bezos pushed this into caricature in his final letter to shareholders as chief executive officer. Trying to head off criticisms of worker treatment that had led to fights over unionization and concerns about worker safety, Bezos promised a new commitment to “a better vision for our employees’ success.”Footnote 75 The form this vision would take could have been lifted from Brave New World, including “new automated staffing schedules that use sophisticated algorithms to rotate employees among jobs that use different muscle-tendon groups to decrease repetitive motion.”Footnote 76

It is true that most of us are materially better off than our ancestors. The reason is only partly technology, however. Acemoglu and Johnson argue that it is also because “citizens and workers in early industrial societies organized, challenged elite-dominated choices about technology and work conditions, and forced ways of sharing the gains from technical improvements more equitably” (Acemoglu & Johnson, p. 7).

There are clear echoes of Harding here, but Acemoglu and Johnson are calling for something more organized and strategic. They also interrogate the idea that technological progress is always economically progressive. On the contrary, they argue, the last thousand years saw serial fights over the direction of technology and the type of progress, with accompanying winners and losers (Acemoglu & Johnson, p. 34). Their main target is the “productivity bandwagon”—the assumption that new machines that increase productivity will also increase wages and benefit everyone, not merely the entrepreneurs and owners of capital (Acemoglu & Johnson, p. 14). The book considers the rise of modern agriculture, the industrial revolution, and the emergence of digital technologies. Each of these periods saw a tension between who led change and who benefited from it.

The main battleground is domestic, including traditional political institutions and organized labor.Footnote 77 A signal inflection point has been the move away from technology that created new tasks and opportunities to the automation of work and cutting labor costs, which they attribute to “lack of input and pressure from workers, labor organizations, and government regulation” (Acemoglu & Johnson, p. 37). In some ways, it is ironic that digital technology played such a role in this context. Echoing Harding again, they note that the early days of the computer revolution were defined by decentralization and freedom, bordering on anarchy (Acemoglu & Johnson, pp. 253–54). An observer might have predicted that subsequent decades would “further bolster countervailing powers against big business, create new productive tools for workers, and lay the foundations of even stronger shared prosperity” (Acemoglu & Johnson, p. 255).

Instead, digital technologies became “the graveyard of shared prosperity” (Acemoglu & Johnson, p. 255). Though other factors were at play—notably globalization and the weakening of the labor movement in the United States in particular—the change in the direction of technology from the 1970s onward was decisive. Much of the energy and creativity has gone into replacing humans through automation and surveilling those workers who remain (Acemoglu & Johnson, pp. 297–338). That trend appears to have less to do with the villainy of corporate titans than with the abdication of public officials. Left unfettered, the incentives embedded within contemporary capitalism encourage labor-saving technologies that offer immediate cost reductions and enhanced managerial control. The diffuse and longer-term benefits of complementing human work are harder to monetize and rarely rewarded by financial markets. Combined with diminished countervailing pressures from unions or organized civil society, innovation is steered towards displacing workers rather than empowering them, entrenching a model of progress that concentrates gains in the hands of owners and investors.

In opposition to machine intelligence, Acemoglu and Johnson encourage the pursuit of machine usefulness, meaning the search for ways in which AI could better complement human workers, rather than merely replacing them (Acemoglu & Johnson, pp. 327–32). Their call to action can be vague at times: “altering the narrative, building countervailing powers, and developing technical, regulatory, and policy solutions to tackle specific aspects of technology’s social bias” (Acemoglu & Johnson, p. 38). The final chapter does offer a raft of policy reforms, but their central message is that passively accepting the social costs of technology is not the only option. The Gilded Age of the late nineteenth century also saw periods of rapid technological change—railways and oil, steel and finance—along with rent-seeking from the robber barons of the era. Checking the massive inequality that followed depended on action by civil society, notably journalists (the original muckrakers) and organized labor, driving a political movement that in turn led to greater regulation and efforts to rein in the power of corporate titans through antitrust and campaign finance reform (Acemoglu & Johnson, pp. 383–86).

Unfortunately, such institutions barely exist today. Although there is impressive reporting on the power of the modern digital robber barons, the decline of journalism as an industry has been exacerbated by the rise of disinformation supercharged by generative AI.Footnote 78 “Democracy dies in darkness” (Acemoglu & Johnson, p. 352), Acemoglu and Johnson note, alluding to the mission statement adopted by the Washington Post at the start of President Donald Trump’s first term in office.Footnote 79 “But it also struggles under the light provided by modern artificial intelligence” (Acemoglu & Johnson, p. 352). It is a telling indicator of the impact of market forces that the Post more recently abandoned that aspirational slogan for the less confrontational and market-friendly “Riveting Storytelling for All of America.”Footnote 80

With respect to organized labor, the marginalization of the working class more generally has been linked to the rise of authoritarian politics in the United States and elsewhere.Footnote 81 A rare example of unions mobilizing workers to limit the impact of AI came from the very elites typically derided by populist politicians: Hollywood. The Writers Guild of America strike in 2023 sought, among other things, to limit the use of AI to helping with research or facilitating scriptwriting—rather than replacing writers.Footnote 82 The strike gained extraordinary publicity in part because it directly affected the media landscape, but also because workers effectively exercising power in the United States—asserting and claiming leverage with respect to their corporate masters—is very much the exception rather than the norm.

Acemoglu and Johnson’s other prescriptions include measures to alter the incentives for technology companies. Taxes currently encourage automation, for example. In the United States, a company investing in automation equipment or software pays a fifth of the tax it would face if hiring workers to perform the same tasks (Acemoglu & Johnson, p. 406). They do not support an automation tax, but suggest limiting such incentives, along with denying patent protection for surveillance technologies (Acemoglu & Johnson, p. 403).

A larger problem of incentives is the extent to which revenue streams for tech companies currently rely on advertising as opposed to, say, subscriptions. Subscription models encourage companies to curate quality experiences for their users; advertising generally rewards the quantity of engagement. Few users are willing to pay for content, however, giving rise to the model of funding that content by monetizing the personal data of those users. That model was summed up in the pithy phrase that “if something is free, you are the product,”Footnote 83 later termed “surveillance capitalism” by Shoshana Zuboff.Footnote 84

As a means of shifting these incentives, Acemoglu and Johnson propose a “nontrivial digital advertising tax.” Although they stop short of specifying a rate, they are optimistic that it would encourage “alternative business models” (Acemoglu & Johnson, pp. 413–14). In addition to subscriptions, which fund companies such as Netflix and the New York Times, those alternative models include sites like Wikipedia, which draws on the wisdom of the crowd and structured governance institutions to develop and maintain standards. More importantly, it is funded through a not-for-profit foundation (Acemoglu & Johnson, p. 379).

There are parallels, here, with the internationalist approaches running through Zaidi’s work. For all their discussion of organized labor, however, it is telling that—presumably in deference to their American audience—Acemoglu and Johnson do not bother to mention a third obvious possibility: public ownership.Footnote 85 They prefer to work within the existing regime, tweaking incentives and relying in large part on existing structures and norms.

The limits of that approach were on display in the corporate drama that unfolded in OpenAI, the company behind ChatGPT. OpenAI was established as a non-profit in 2015, heralded with lofty statements as to how this status enabled it to “benefit humanity as a whole, unconstrained by a need to generate financial return.”Footnote 86 As the costs of training the large language model that became its signature product escalated, the company announced that it would adopt a “capped-profit” structure, allowing it “to rapidly increase our investments in compute and talent.”Footnote 87 The contradiction between these two worldviews unfolded in the spectacle of the not-for-profit Board firing CEO Sam Altman in November 2023—only for him to be reinstated days later and the Board itself replaced. As various commentators pointed out: “The money always wins.”Footnote 88

IV. The Tech-Industrial Complex

So, what is to be done? If companies cannot be trusted to self-regulate, if governments are unwilling to legislate, and if international organizations are unable to do more than coordinate—who or what might help mitigate the risks and more evenly distribute the benefits of AI?

Returning to Harding’s titular exhortation, the first answer is, of course, us. Users can choose not to support companies that ignore safety or exacerbate inequality. The problem is that individual users have trivially little leverage over companies whose business model is premised in part on hiding that lack of agency from consumers.Footnote 89 The tragedy of AI governance lies in that inverse relationship between leverage and interest: users have interest but no leverage; tech companies have leverage but no interest in constraining their own behavior if it means thereby limiting their profits.Footnote 90

Just as organized labor offered glimmers of hope in increasing workers’ bargaining power, organized users might have a greater say in how technology is developed and deployed. Global privacy movements, for example, shifted markets at least modestly, as reflected in the rise of privacy-by-design and personal data protection being seen by some companies as a market differentiator.Footnote 91 It is conceivable that similar norms might emerge in the AI space, perhaps along the lines of “responsible” AI that is more trustworthy and less prone to hallucinations, or more “open” in the sense of greater transparency as to how decisions are made and how models are trained.Footnote 92

Transparency could also extend to the costs of AI, notably its environmental impact.Footnote 93 Various tech companies—and some countries—have announced that their investments in AI mean that they are giving up on climate targets, though they have largely refrained from passing on these costs to consumers.Footnote 94 More information about the costs of AI, either through moves to subscription models, or at least through revealing the electricity and water consumed when using the latest AI systems, might influence user and therefore corporate behavior.

Market mechanisms will not be enough, however. Days before he left office, President Joe Biden spoke of the emergence of a “tech-industrial complex,”Footnote 95 echoing Dwight Eisenhower’s own valedictory address on the “military-industrial complex.”Footnote 96 While Eisenhower was concerned about the influence of the arms industry on military procurement and defense policy,Footnote 97 Biden warned that “an oligarchy is taking shape in America of extreme wealth, power and influence that literally threatens our entire democracy.”Footnote 98

In the wake of the global financial crisis of 2007–2008, one of the lessons learned was that if certain banks were “too big to fail,” then it meant that they were too big in the first place.Footnote 99 Echoing earlier battles from the Gilded Age, there is a strong argument that tech companies—or tech entrepreneurs—that are too big to regulate are too big, period. There have, of course, been efforts to break up those companies. The U.S. Justice Department is currently suing GoogleFootnote 100 and Apple,Footnote 101 while the Federal Trade Commission has ongoing actions against Amazon,Footnote 102 having unsuccessfully brought actions against MicrosoftFootnote 103 and Meta.Footnote 104 In addition to its own antitrust actions,Footnote 105 the European Union has linked size with more elaborate obligations and reporting requirements for “gatekeepers” under the Digital Markets ActFootnote 106 and “very large” online platforms and search engines under the Digital Services Act.Footnote 107 Only China, however, has successfully broken up tech companies in a purge lasting from 2020 to 2023, wiping trillions of dollars off the share value of those companies,Footnote 108 with Alibaba divided into six new entities.Footnote 109 These were costs that Beijing was willing to bear, but at which Washington or Brussels might balk, particularly given President Trump’s new chumminess with the tech elite.

Returning to the example of the East India Company,Footnote 110 an alternative to divestiture is nationalization. In the wake of the 1857 Indian Rebellion, Britain concluded that leaving such economic and military power in private hands was untenable. The Government of India Act 1858 dissolved the Company, transferred all its territories, armies, and administrative machinery to the Crown, and replaced its directors with a Secretary of State for India, directly answerable to Parliament.Footnote 111 It is conceivable that a similar fate could befall today’s most powerful AI companies—if their systems became so essential to national security or economic stability that government oversight proved insufficient, with states moving from regulation to outright control, treating AI infrastructure as public utilities or national assets. For the time being, there is no such appetite for confrontation with the technology companies, a timidity reinforced by the fear of holding back innovation or falling behind geopolitical rivals, or the more mundane concerns of running for political office in a social media age.Footnote 112

As for international institutions, another Eisenhower speech from the very first year of his presidency suggests the possibilities and limitations. By 1953, the technological internationalist moment had passed and the prospect of international control of nuclear weapons had faded. Eisenhower proposed an alternative in his “Atoms for Peace” address to the UN. If the earlier idea had been utopian, this was idealistic in a different way: instead of concentrating nuclear materials and expertise in a supranational body, they would be disseminated across the globe—encouraging states to use them for peaceful purposes, in exchange for commitments to renounce the search for the bomb.Footnote 113

The analytical, political, and structural limits of this analogy have been touched on earlier.Footnote 114 The biggest difference between attempts to control nuclear power in the 1950s and AI today, however, may be the historical context. For even as Eisenhower spoke in New York, the effects of the nuclear blasts on Hiroshima and Nagasaki were still being felt.Footnote 115 The “dread secret” of those weapons, he warned, was no longer confined to the United States and no longer containable. To do nothing was to accept the hopeless finality that “two atomic colossi are doomed malevolently to eye each other indefinitely across a trembling world.”Footnote 116

There is, at present, no such threat from AI—nor is there comparably visceral evidence of its potential for harm. It is possible that concerns are overblown. Or, as some would argue, AI itself may help solve these and sundry other problems.Footnote 117 If not, however—if the unchecked power of tech companies and their silicon sovereigns is unable to be constrained or contained by users, the market, or states in thrall to the new mammon—then global institutions that might have helped to prevent the first true AI emergency may be too late to avert the second.

Footnotes

*

David Marshall Professor of Law and Vice Provost (Educational Innovation) at the National University of Singapore; founding Dean of NUS College; Senior Director of AI Governance at AI Singapore and Editor of the Asian Journal of International Law. Many thanks to Fakhar Abbas, Ingrid Brunk, Chua Guang Rong, Viv Chesterman, Jeffrey Dunoff, Jungpil Hahn, Monika Hakimi, Shaleen Khanal, Tristan Koh, Ernest Lim, Valerie Lim, Renae Loh, Hakim Norhashim, Eric Orlowski, Eka Nugraha Putra, Shrestha Saha, Araz Taeihagh, Ming Tan, Tan Hsien-Li, Jun Yu, and Audrey Yue for comments on an earlier draft. Errors, omissions, and hallucinations are attributable to the author alone.

References

1 The term “artificial intelligence” is used in this Essay to refer to systems that can apply cognitive functions to specific tasks typically undertaken by a human. For a discussion of attempts to define AI, see Stuart Russell & Peter Norvig, Artificial Intelligence: A Modern Approach 1–5 (3rd ed. 2010). Four broad definitional approaches can be identified: acting humanly (the famous Turing Test), thinking humanly (modeling cognitive behavior), thinking rationally (building on the logicist tradition), and acting rationally (a rational-agent approach favored by Russell and Norvig as it is not dependent on a specific understanding of human cognition or an exhaustive model of what constitutes rational thought).

2 Arvind Narayanan & Sayash Kapoor, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference (2024).

3 Nobel Prizes 2024, at https://www.nobelprize.org/all-nobel-prizes-2024. Goldman Sachs has estimated that by 2034, U.S. GDP will increase by more than 2% as a result of AI. AI May Start to Boost U.S. GDP in 2027, Goldman Sachs (Nov. 7, 2023), at https://www.goldmansachs.com/insights/articles/ai-may-start-to-boost-us-gdp-in-2027. McKinsey’s Global Institute goes further and predicts a 5–13% boost by 2040. Michael Chui et al., The Economic Potential of Generative AI: The Next Productivity Frontier, McKinsey & Co. (June 2023), at https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier.

4 See, e.g., Ryan Abbott, The Reasonable Robot: Artificial Intelligence and the Law (2020); Michael Guihot & Lyria Bennett Moses, Artificial Intelligence, Robots and the Law (2020); Matt Hervey & Matthew Lavy, The Law of Artificial Intelligence (2020); Frank Pasquale, New Laws of Robotics: Defending Human Expertise in the Age of AI (2020); Simon Chesterman, We, the Robots? Regulating Artificial Intelligence and the Limits of the Law (2021); Dominika Ewa Harasimiuk & Tomasz Braun, Regulating Artificial Intelligence: Binary Ethics and the Law (2021); Jinghan Zeng, Artificial Intelligence with Chinese Characteristics: National Strategy, Security and Authoritarian Governance (2022); Mark Chinen, The International Governance of Artificial Intelligence (2023); The Cambridge Handbook of the Law, Policy, and Regulation for Human-Robot Interaction (Woodrow Barfield, Yueh-Hsuan Weng & Ugo Pagallo eds., 2024); Andeed Ma, James Ong & Siok Siok Tan, AI for Humanity: Building a Sustainable AI for the Future (2024); Tshilidzi Marwala, The Balancing Problem in the Governance of Artificial Intelligence (2024).

5 See, e.g., Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology (2005); Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow (2016); Reid Hoffman and Greg Beato, Superagency: What Could Possibly Go Right with Our AI Future (2025).

6 See, e.g., Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (2021).

7 Dev Kundaliya, Former Google CEO Eric Schmidt Urges AI Acceleration, Dismisses Climate Goals, Computing UK (Oct. 8, 2024); Dario Amodei, Machines of Loving Grace: How AI Could Transform the World for the Better (Oct. 2024), at https://darioamodei.com/machines-of-loving-grace.

8 Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (2014); Robert Sparrow, Friendly AI Will Still Be Our Master, 39 AI & Soc’y 2439 (2024); Joud Mohammed Alkhalifah et al., Existential Anxiety About Artificial Intelligence (AI): Is It the End of Humanity Era or a New Chapter in the Human Revolution, 15 Frontiers Psych. 1368122 (2024).

9 Kai-Fu Lee, AI Superpowers: China, Silicon Valley, and the New World Order (2018); Andrew R. Chow & Billy Perrigo, The AI Arms Race Is Changing Everything, Time (Feb. 17, 2023). Cf. Kerry McInerney, Yellow Techno-Peril: The “Clash of Civilizations” and Anti-Chinese Racial Rhetoric in the US–China AI Arms Race, 11 Big Data & Soc’y (2024).

10 Anu Bradford, Digital Empires: The Global Battle to Regulate Technology 385 (2023).

11 Cf. Chris Miller, Chip War: The Fight for the World’s Most Critical Technology (2022); The United States Announces Export Controls to Restrict China’s Ability to Purchase and Manufacture High-End Chips, 117 AJIL 144 (2023).

12 Henry A. Kissinger, Eric Schmidt & Daniel Huttenlocker, The Age of AI: And Our Human Future (2021); Henry A. Kissinger, Craig Mundie & Eric Schmidt, Genesis: Artificial Intelligence, Hope, and the Human Spirit (2024).

13 International Telecommunications Union, Population of Global Offline Continues Steady Decline to 2.6 Billion People in 2023 (Sept. 12, 2023), at https://www.itu.int/en/mediacentre/Pages/PR-2023-09-12-universal-and-meaningful-connectivity-by-2030.aspx.

14 UN AI Advisory Body, Governing AI for Humanity: Final Report, 9 (Sept. 2024), at https://www.un.org/en/ai-advisory-body. Disclosure: the author served as principal researcher for the UN AI Advisory Body at the time the report was drafted. Cf. Heidi Aly, Digital Transformation, Development and Productivity in Developing Countries: Is Artificial Intelligence a Curse or a Blessing?, Rev. Eco. & Pol. Sci. 238 (2020).

15 A counterargument is that some authoritarian governments use AI and/or co-opt technology companies to reinforce their power, though this typically takes the form of outsourcing government functions or relying on third-party tools to engage in surveillance or the suppression of dissent. See, e.g., Steven Feldstein, The Rise of Digital Repression: How Technology Is Reshaping Power, Politics, and Resistance (2021).

16 Stanford University Human-Centered AI, AI Index Report 2023, at 50 (Apr. 2023), at https://hai.stanford.edu/assets/files/hai_ai-index-report_2023.pdf. A proxy for the computing power at an organization’s disposal is the number of parameters. Industry models in 2021 were on average twenty-nine times larger than those in academic institutions. Nur Ahmed, Muntasir Wahed & Neil C. Thompson, The Growing Influence of Industry in AI Research, 379 Sci. 884 (2023). Private investment in AI in 2022 was eighteen times greater than in 2013. In 2021, the U.S. government allocated US$1.5 billion to non-defense academic research into AI; Google spent that much on DeepMind alone. Talent has followed. The number of AI research faculty in universities has not risen significantly since 2006, while industry positions have grown eightfold. Two decades ago, only about 20% of graduates with a PhD in AI went to industry; today around 70% do.

17 Simon Chesterman, From Ethics to Law: Why, When, and How to Regulate AI, in The Handbook of the Ethics of AI 113 (David J. Gunkel ed., 2024).

18 See Shaleen Khanal, Hongzhou Zhang & Araz Taeihagh, Why and How Is the Power of Big Tech Increasing in the Policy Process? The Case of Generative AI, 44 Pol. & Soc. (2025).

19 See, e.g., Susan Strange, The Retreat of the State: The Diffusion of Power in the World Economy (1996); Stephen D. Krasner, Sovereignty: Organized Hypocrisy (2001); Christopher May, Global Corporations in Global Governance (2015).

20 See Antony Anghie, Imperialism, Sovereignty and the Making of International Law (2005); H.V. Bowen, The Business of Empire: The East India Company and Imperial Britain, 1756–1833 (2006); Emily Erikson, Between Monopoly and Free Trade: The English East India Company, 1600–1757 (2017).

21 Elon Musk, to pick an obvious example, has extraordinary leverage through his ownership of SpaceX, which dominates space exploration and satellite internet, Tesla, which revolutionized the electric vehicle market, and X (née Twitter), which offers a global platform for his views. Ronan Farrow, Elon Musk’s Shadow Rule, New Yorker (Aug. 21, 2023); Walter Isaacson, Elon Musk (2023).

22 Lewis Carroll, Through the Looking-Glass, Ch. 2 (1872).

23 Jinseok Kim, Author-Based Analysis of Conference Versus Journal Publication in Computer Science, 70 J. Ass’n Info. Sci. & Tech. 71 (2019).

24 The late futurist Roy Amara is typically credited with coining the term. See, e.g., Doc Searls, The Intention Economy: When Customers Take Charge 257 (2012). Cf. J.C.R. Licklider, Libraries of the Future 17 (1965) (referring to a “modern maxim” that states that people “tend to overestimate what can be done in one year and to underestimate what can be done in five or ten years”). For their part, governments face a particular challenge, being forced to choose between under-regulating an emerging field, possibly exposing their citizens to risk, or over-regulating, and perhaps limiting innovation or driving it elsewhere. David Collingridge described this dilemma half a century ago: In the early stages of innovation, exercising control would be easy—but not enough is known about the potential harms to warrant slowing development. By the time those harms are apparent, control has become costly and difficult. David Collingridge, The Social Control of Technology 19 (1980).

25 Verity Harding, AI Needs You: How We Can Change AI’s Future and Save Our Own (2024).

26 Waqar H. Zaidi, Technological Internationalism and World Order: Aviation, Atomic Energy, and the Search for International Peace, 1920–1950 (2023).

27 Daron Acemoglu & Simon Johnson, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity (2023).

28 “Move fast and break things” was an early motto at Facebook intended to push developers to take risks; the phrase appeared on office posters and featured in a letter from Mark Zuckerberg to investors when the company went public in 2012. United States Securities and Exchange Commission, Form S-1 Registration Statement of Facebook, Inc. (Feb. 1, 2012), at https://www.sec.gov/Archives/edgar/data/1326801/000119312512034517/d287954ds1.htm. Over time, it came to be embraced as a mantra applicable to technological disruption more generally, adopted by countless Silicon Valley imitators. As Facebook matured, however, and as the potential harms caused by such disruption grew, the slogan fell from favor. Jonathan Taplin, Move Fast and Break Things: How Facebook, Google, and Amazon Cornered Culture and Undermined Democracy (2017); Hemant Taneja, The Era of “Move Fast and Break Things” Is Over, Harv. Bus. Rev. (Jan. 22, 2019); see also Simon Chesterman, “Move Fast and Break Things”: Law, Technology, and the Problem of Speed, 33 Sing. Acad. L.J. 5 (2021).

29 Harding, supra note 25, at 180 (“As leading AI democracies we must ask ourselves how our use of AI, the examples we set, will shape our societies, our governments, and the world at large.”) (emphasis in original).

30 See also Taina Bucher, If … Then: Algorithmic Power and Politics (2018).

31 Harding, supra note 25, at 224. She describes the book as “my love letter to the painstaking and generally unglamourous world of policy-making in a democracy.”

32 Roger D. Launius, Reaching for the Moon: A Short History of the Space Race (2019); Albert K. Lai, The Cold War, the Space Race, and the Law of Outer Space: Space for Peace (2021).

33 Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies, Jan. 27, 1967, 18 UST 2410, at https://www.unoosa.org/oosa/en/ourwork/spacelaw/treaties/introouterspacetreaty.html. See also Tanja Masson-Zwaan & Roberto Cassar, The Peaceful Uses of Outer Space, in The Oxford Handbook of United Nations Treaties 181 (Simon Chesterman, David M. Malone, Santiago Villalpando & Alexandra Ivanovic eds., 2019).

34 Neither outer space nor AI were ever a truly “blank sheet.” See, e.g., Vladlen S. Vereshchetin & Gennady M. Danilenko, Custom as a Source of International Law of Outer Space, 13 J. Space L. 22 (1985).

35 Lewis D. Solomon, The Privatization of Space Exploration: Business, Technology, Law and Policy (2008); Chad Anderson, Rethinking Public–Private Space Travel, 29 Space Pol’y 266 (2013); Johan Eriksson & Lindy M. Newlove-Eriksson, Outsourcing the American Space Dream: SpaceX and the Race to the Stars, 21 Astropolitics 46 (2023).

36 Mary Ziegler, Reproduction and the Constitution in the United States (2022); Elyshia Aseltine & Sheldon Ekland-Olson, Abortion in the United States: The Moral and Legal Landscape (2024). Topics like abortion may not be as sensitive in other countries, but given the prominence of the United States in this space it remains an odd choice as an illustration of effective governance.

37 See, e.g., Araz Taeihagh, Governance of Artificial Intelligence, 40 Pol. & Soc. 137 (2021); Markus Furendal & Karim Jebari, The Future of Work: Augmentation or Stunting?, 36 Phil. & Tech. (2023); Jochen Wirtz, Werner H. Kunz, Nicole Hartley & James Tarbit, Corporate Digital Responsibility in Service Firms and Their Ecosystems, 26 J. Serv. Rsch. 173 (2023). The innovative multi-stakeholder entity ICANN may embody transparent deliberation and decision-making, but it is with regard to an exceptionally narrow and easily defined category of disputes over unique identifiers on the internet.

38 See, e.g., Simon Chesterman, Weapons of Mass Disruption: Artificial Intelligence and International Law, 10 Cambridge Int’l L.J. 181 (2021). On the analogy between AI and nuclear energy, see infra notes 65–68.

39 See, e.g., Kristin Komives et al., Water, Electricity, and the Poor: Who Benefits from Utility Subsidies? (2005).

40 The Cambridge Analytica scandal of 2016 and the possibility that it affected that year’s U.S. presidential election started a flurry of efforts in the area of AI governance. Simon Chesterman, Yuting Gao, Jungpil Hahn & Valerie Sticher, The Evolution of AI Governance, 57 IEEE Computer 80 (2024).

41 Theodore Schleifer & Madeleine Ngo, Elon Musk and His Allies Storm Into Washington and Race to Reshape It, N.Y. Times (Jan. 29, 2025).

42 Shawn McCreesh, Trump and Musk’s Relationship Melts Down in Spectacular Fashion, N.Y. Times (June 5, 2025).

43 Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, Exec. Order 14110, 88 Fed. Reg. 75191, (Oct. 30, 2023); Cary Coglianese, People and Processes: AI Governance Under Executive Order 14110, 49 Admin. & Reg. L. News 9 (2023).

44 Marco Almada & Anca Radu, The Brussels Side-Effect: How the AI Act Can Reduce the Global Reach of EU Policy, 25 German L.J. 646 (2024).

45 William Langley, Eleanor Olcott & Ryan McMorrow, China Proposes Global Body to Govern Artificial Intelligence, Fin. Times (July 26, 2025).

46 Department for Science, Innovation and Technology and AI Safety Institute, International AI Safety Report 2025 (Jan. 29, 2025), at https://www.gov.uk/government/publications/international-ai-safety-report-2025.

47 Billy Perrigo, Safety Takes a Backseat at Paris AI Summit, as U.S. Pushes for Less Regulation, Time (Feb. 11, 2025).

48 Madeleine Bunting, Bono Talks of US Crusade, Guardian (June 16, 2005).

49 Jack Godwin, The Arrow and the Olive Branch: Practical Idealism in U.S. Foreign Policy 9 (2008).

50 See generally Ronald E. Powaski, American Presidential Statecraft: From Isolationism to Internationalism (2017).

51 The language at times resonates with the techno-utopianism of today, with nuclear power plants described as the “cathedrals” of the twentieth century. See, e.g., Jacques Leclercq, The Nuclear Age (1986).

52 See, e.g., Mark Mazower, Governing the World: The History of an Idea (2012); Glenda Sluga, Internationalism in the Age of Nationalism (2013).

53 Simon Chesterman, David M. Malone & Santiago Villalpando, Introduction, in The Oxford Handbook of United Nations Treaties 1 (Simon Chesterman, David M. Malone, Santiago Villalpando & Alexandra Ivanovic eds., 2019).

54 Gerrit W. Gong, The Standard of “Civilization” in International Society (1984).

55 Martti Koskenniemi, The Gentle Civilizer of Nations: The Rise and Fall of International Law 1870–1960, at 41 (2001) (arguing that many of the ideals and institutions now framed as universal were shaped by—and continue to bear—the imprint of their colonial origins).

56 David E. Omissi, Air Power and Colonial Control: The Royal Air Force, 1919–1939 (1990).

57 Robert L. Brown, Nuclear Authority: The IAEA and the Absolute Weapon 55–61 (2015).

58 Disarmament in the Air, Economist (Feb. 25, 1933).

59 H.G. Wells, The Shape of Things to Come (1933); John S. Partington, H.G. Wells and the World State: A Liberal Cosmopolitan in a Totalitarian Age, 17 Int’l Relations 233 (2003).

60 Quincy Wright, Enforcement of International Law, 38 ASIL Proc. 77, 85 (1944).

61 Washington Conversations on International Peace and Security Organization, Dumbarton Oaks Conference, Ch. VIII, Sec. B, para. 5 (Oct. 7, 1944), at https://www.ibiblio.org/pha/policy/1944/441007a.html.

62 Charter of the United Nations, Art. 43, June 26, 1945, 1 UNTS XVI (in force October 24, 1945), at https://www.un.org/en/about-us/un-charter/full-text.

63 The Military Staff Committee that was intended to “advise and assist” the Security Council on “the employment and command of forces placed at its disposal” remains a curiosity, continuing to meet despite having no formal agenda items since it reported in 1948 that it was unable to complete the mandate given to it two years earlier. Id. Art. 47(1).

64 Simon Chesterman, Beyond Asimov’s Three Laws: The Case for an International AI Agency, Engineering & Tech. (Aug. 4, 2021); Gary Marcus & Anka Reuel, The World Needs an International Agency for Artificial Intelligence, Say Two AI Experts, Economist (Apr. 18, 2023).

65 Prarthana Prakash, OpenAI’s Sam Altman and Google’s Sundar Pichai Are Now Begging Governments to Regulate the A.I. Forces They’ve Unleashed, Fortune (May 24, 2023).

66 Michelle Nichols, UN Chief Backs Idea of Global AI Watchdog like Nuclear Agency, Reuters (June 12, 2023).

67 UN AI Advisory Body, Governing AI for Humanity: Final Report, supra note 14, para. 181.

68 Marcus & Reuel, supra note 64.

69 Global Digital Compact, para. 72 (United Nations, New York, Sept. 2024), at https://www.un.org/techenvoy/global-digital-compact.

70 GA Res 78/265, Seizing the Opportunities of Safe, Secure and Trustworthy Artificial Intelligence Systems for Sustainable Development, para. 5 (Mar. 21, 2024).

71 Id., para. 9 (emphasis in original).

72 Former Secretary-General Dag Hammarskjöld is said to have quipped that organizations such as the UN were not created to take humanity to heaven, but to save it from hell.

73 Jeanna Smialek, Three Receive Nobel in Economics for Research on Global Inequality, N.Y. Times (Oct. 14, 2024).

74 Marc Andreessen, The Techno-Optimist Manifesto, Andreessen Horowitz (Oct. 16, 2023), at https://a16z.com/the-techno-optimist-manifesto (emphasis in original).

75 Jeff Bezos, 2020 Letter to Shareholders, Amazon (Apr. 16, 2021), at https://www.aboutamazon.sg/news/company-news/2020-letter-to-shareholders. Bezos continues to serve as executive chairman of Amazon.

76 Id.

77 Though somewhat interested in other countries’ approaches, the global nature of the problem is largely ignored, except to the extent that “international cooperation” might be necessary to implement the taxation measures proposed. Acemoglu & Johnson, supra note 27, at 415. The authors acknowledge the “central question” of whether redirecting technology in the West “would be of any use if China continues to pursue automation and surveillance.” They conclude that the answer is “likely yes,” on the basis that China is a “follower” in most frontier technologies. Id. at 395.

78 Kokil Jaidka et al., Misinformation, Disinformation, and Generative AI: Implications for Perception and Policy, 6 Digit. Gov’t: Rsch. & Prac. (Feb. 2025).

79 Paul Farhi, The Washington Post’s New Slogan Turns Out to Be an Old Saying, Wash. Post (Feb. 24, 2017). The phrase about government accountability was popularized by journalist Bob Woodward, who credited it in turn to a First Amendment case. See Detroit Free Press et al. v. Ashcroft et al., 303 F.3d 681 (6th Cir. 2002) (“Democracies die behind closed doors.”).

80 Benjamin Mullin, The Washington Post’s New Mission: Reach “All of America,” N.Y. Times (Jan. 17, 2025).

81 See generally Marc J. Hetherington & Jonathan D. Weiler, Authoritarianism and Polarization in American Politics (2009); The Global Rise of Authoritarianism in the 21st Century: Crisis of Neoliberal Globalization and the Nationalist Response (Berch Berberoglu ed., 2021).

82 Alexandra Curren, Digital Replicas: Harm Caused by Actors’ Digital Twins and Hope Provided by the Right of Publicity, 102 Tex. L. Rev. 155 (2023).

83 This idea is sometimes traced back to a 1973 video by the artist Richard Serra and Carlota Fay Schoolman. Richard Serra & Carlota Schoolman, Television Delivers People, Museum Mod. Art (1973), at https://www.moma.org/collection/works/118185 (“You are the end product of t.v.”). Cf. Claire Wolfe, Little Brother Is Watching You: The Menace of Corporate America (1999) (“You’re not the customer any more. You’re simply a ‘resource’ to be managed for profit.”).

84 Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (2019); see also Nick Srnicek, Platform Capitalism (2016); Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media (2018); Yanis Varoufakis, Technofeudalism: What Killed Capitalism (2023).

85 See, e.g., Tim Christiaens, Nationalize AI, 40 AI & Soc’y. 1147 (2025).

86 Greg Brockman & Ilya Sutskever, Introducing OpenAI, OpenAI (Dec. 11, 2015), at https://openai.com/index/introducing-openai.

87 Greg Brockman & Ilya Sutskever, OpenAI LP, OpenAI (Mar. 11, 2019), at https://openai.com/blog/openai-lp.

88 Charlie Warzel, The Money Always Wins, Atlantic (Nov. 21, 2023). That conclusion was underlined by the subsequent departure from OpenAI of Jan Leike and Ilya Sutskever in May 2024, hollowing out the “super-alignment” team at the most influential AI company on the planet. Will Knight, OpenAI’s Long-Term AI Risk Team Has Disbanded, Wired (May 17, 2024). To the extent that the OpenAI saga reflected debates over the existential threat that might be posed by AI, it was a victory of the capitalists over the catastrophists. Kevin Roose, A.I. Belongs to the Capitalists Now, N.Y. Times (Nov. 22, 2023). See also Karen Hao, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI (2025).

89 See, e.g., Claudio Celis Bueno, The Attention Economy: Labour, Time, and Power in Cognitive Capitalism (2017); Jenny Odell, How to Do Nothing: Resisting the Attention Economy (2019); Timothy Aylsworth & Clinton Castro, Kantian Ethics and the Attention Economy: Duty and Distraction (2024); Karen Nelson-Field, The Attention Economy: A Category Blueprint (2024).

90 See Simon Chesterman, The Tragedy of AI Governance, in Contemporary Debates in the Ethics of Artificial Intelligence (Atoosa Kasirzadeh, Sven Nyholm & John Zerilli eds., forthcoming 2025).

91 Garrett Johnson, Economic Research on Privacy Regulation: Lessons from the GDPR and Beyond (2024).

92 Tita Alissa Bach, Magnhild Kaarstad, Elizabeth Solberg & Aleksandar Babic, Insights into Suggested Responsible AI (RAI) Practices in Real-World Settings: A Systematic Literature Review, AI & Eth. (forthcoming 2025).

93 See Crawford, supra note 6; Measuring the Environmental Impacts of Artificial Intelligence Compute and Applications: The AI Footprint (OECD Digital Economy Papers, Nov. 15, 2022); Nir Kshetri, The Environmental Impact of Artificial Intelligence, 26 IT Professional 9 (2024).

94 See generally Stephanie Jamison, Mauro Macchi, Mauricio Bermudez Neubauer & Babak Moussavi, Destination Net Zero: Fast-Tracking Progress, Accenture (Nov. 10, 2024), at https://www.accenture.com/us-en/insights/sustainability/destination-net-zero.

95 Erica L. Green, In Farewell Address, Biden Warns of an “Oligarchy” Taking Shape in America, N.Y. Times (Jan. 15, 2025).

96 President Dwight D. Eisenhower’s Farewell Address (Jan. 17, 1961), at https://www.archives.gov/milestone-documents/president-dwight-d-eisenhowers-farewell-address.

97 James Ledbetter, Unwarranted Influence: Dwight D. Eisenhower and the Military-Industrial Complex (2011); cf. Gordon Adams, The Politics of Defense Contracting: The Iron Triangle (1981).

98 Green, supra note 95.

99 Greenspan Calls to Break Up Banks “Too Big to Fail,” N.Y. Times (Oct. 15, 2009).

100 United States v. Google LLC, 1:23-cv-00108 (E.D. Va., 2023).

101 U.S. and Plaintiff States v. Apple Inc., at https://www.justice.gov/atr/case/us-and-plaintiff-states-v-apple-inc.

102 Amazon.com, Inc. (Amazon eCommerce), Federal Trade Commission (Nov. 5, 2024), at https://www.ftc.gov/legal-library/browse/cases-proceedings/1910129-1910130-amazoncom-inc-amazon-ecommerce.

103 Kellen Browning, David McCabe & Karen Weise, Judge Rejects F.T.C. Delay of $70 Billion Microsoft-Activision Deal, N.Y. Times (July 11, 2023).

104 Diane Bartz, FTC Withdraws from Adjudication in Fight with Meta over Within Deal, Reuters (Feb. 10, 2023).

105 See Antonio Manganelli & Antonio Nicita, Regulating Digital Markets: The European Approach (2022); The Evolving Governance of EU Competition Law in a Time of Disruptions: A Constitutional Perspective (Carlo Maria Colombo, Kathryn Wright & Mariolina Eliantonio eds., 2024).

106 Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on Contestable and Fair Markets in the Digital Sector and Amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act) 2022 (EU). Six companies have been so designated: Alphabet (the parent company of Google), Amazon, Apple, ByteDance (which owns TikTok), Meta, and Microsoft.

107 Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and Amending Directive 2000/31/EC (Digital Services Act) 2022 (EU).

108 Lilian Zhang, A Timeline of China’s 32-Month Big Tech Crackdown that Killed the World’s Largest IPO and Wiped Out Trillions in Value, S. China Morning Post (July 15, 2023).

109 Alibaba Starts a Spate of Spinoffs, India Bus. J. (June 2023).

110 Note 20 supra.

111 See Anthony Webster, The Twilight of the East India Company: The Evolution of Anglo-Asian Commerce and Politics, 1790–1860 (2009); Christina Welsch, The Company’s Sword: The East India Company and the Politics of Militarism, 1644–1858 (2022).

112 See, e.g., Kevin Wei et al., How Do AI Companies “Fine-Tune” Policy? Examining Regulatory Capture in AI Governance, in Proceedings of the 2024 AAAI/ACM Conference on AI, Ethics, and Society 1539–55 (2025).

113 Address by Mr. Dwight D. Eisenhower, President of the United States of America, to the 470th Plenary Meeting of the United Nations General Assembly (Atoms for Peace), United Nations (Dec. 8, 1953), at https://www.iaea.org/about/history/atoms-for-peace-speech.

114 See notes 65–70 supra.

115 Lesley M.M. Blume, Fallout: The Hiroshima Cover-Up and the Reporter Who Revealed It to the World (2020).

116 Atoms for Peace, supra note 113.

117 See, e.g., Lawrence Lessig, Code: Version 2.0 (2006); Karen Yeung, “Hypernudge”: Big Data as a Mode of Regulation by Design, 20 Info., Comm. & Soc’y 118 (2017); Marco Almada, Regulation by Design and the Governance of Technological Futures, 14 Eur. J. Risk Reg. 697 (2023).