Debates over the governance of artificial intelligence (AI) tend to assume that it will be important and transformative across many areas of human endeavor.Footnote 1 Though filtering out the hype can be challenging,Footnote 2 the last round of Nobel Prizes supports at least some of these claims: the prize in physics was awarded for foundational work on machine learning, while the one for chemistry went to researchers using AI to model protein folding.Footnote 3 The focus in such debates is typically on how to reap the benefits of AI while minimizing or mitigating known and unknown risks.Footnote 4 Yet the question of how those benefits and risks will be distributed—who will win and who will lose—is less commonly articulated.
Techno-utopians enthuse that everyone will win: the pie will be bigger; the rising tide will lift all boats.Footnote 5 Concerns about inequality or the environmental impact of AIFootnote 6 are batted aside with the promise that AI itself will solve such problems.Footnote 7 Others, including a surprising fraction of those developing AI systems themselves, warn of darker, dystopian futures in which AI turns on humanity, either through misalignment of objectives or the emergence of a superintelligence that regards its creators in the way that we might regard lesser creatures such as dogs—or ants.Footnote 8 Everyone loses.
Between the extremes are those trying to think through where the gains and losses of AI will fall. In realist circles, it has become common to speak of AI in the language of an arms race, a comfortingly familiar frame that pits the West against a rising China.Footnote 9 Anu Bradford’s Digital Empires, for example, posits a battle for regulatory pre-eminence among the market-based U.S. model, the rights-based approach favored in the European Union, and China’s state-centric regime. At stake, she argues, is nothing less than the “soul of the digital economy.”Footnote 10 Her book captures the geopolitical moment, in particular the Sino-U.S. tensions playing out over access to high-performance computing power.Footnote 11 Arch-realist Henry Kissinger—unable to step away from the game of realpolitik even as he approached a hundred years of age—spent his final months writing on AI and warning of its geopolitical implications.Footnote 12
An alternative framing adopts a North-South axis, noting the 750 million people without stable electricity and the more than two billion unconnected to the internet.Footnote 13 A report by the United Nations AI Advisory Body examined some of the most prominent efforts at international governance of AI and found that seven countries (the wealthy, industrialized members of the G7) are party to all of them, while more than a hundred other states are party to none.Footnote 14 For all the worry about misuse of AI, many in developing countries are more concerned about missed uses and being left behind.
Yet the most important divide may not be East-West or North-South but public-private. For AI is shifting economic and, increasingly, political power away from governments.Footnote 15 That is most obvious in the deployment of ever more powerful products with minimal regulatory oversight. But it is also true at the level of fundamental research. The machine learning models that power systems like ChatGPT originated in publicly funded universities; by 2022, of the dozens of significant models tracked by Stanford’s AI Index, all but three were released by industry.Footnote 16 Outside of the European Union, states have been wary of introducing new laws to regulate AI for fear of losing a competitive advantage or driving innovation elsewhere.Footnote 17 Efforts to regulate technology companies by applying existing laws—prominently including antitrust and intellectual property—have seen government lawyers hopelessly outgunned by their corporate counterparts.Footnote 18
All of this poses a challenge to public lawyers generally and international lawyers in particular. In structural terms, if the twentieth century saw a turn from bilateralism to multilateralism, with the emergence of truly international organizations, the twenty-first century may be witnessing a fracturing of those structures—and indeed a decline in the preeminence of states as the primary political vehicle on the global stage. Though corporatist critiques of the modern political order are hardly new,Footnote 19 the nature and scale of the power wielded by today’s tech giants rival those of the East India Company in the early nineteenth century, when it controlled half of global trade and had its own army.Footnote 20 Today’s tech behemoths may lack that measure of economic or military power, but their global cultural and political influence is arguably greater.Footnote 21
Another dimension of the governance challenge posed by AI is time. The speed with which innovations and new capacities are now launched recalls the Red Queen’s advice to Alice in Through the Looking-Glass: just to stay in one place you have to run as fast as you can; if you want to get somewhere else, you must run at least twice as fast as that.Footnote 22 The pace of change is reflected even in the academic output of computer scientists, whose stock in trade is not books or even journal articles, but conference proceedings.Footnote 23 A relentless presentism can lead observers to overestimate the impact of a new technology in the short term even as they underestimate it in the long term, a phenomenon sometimes termed “Amara’s law.”Footnote 24
Such presentism, and the attendant efforts to forecast possible futures, can blind us to the possibility that it may be more profitable not to look forward but to look back. This is not the first time humanity has confronted a technology with the potential for good or ill, giving rise to a clash between public and private interests—nor will it be the last. In distinct ways, each of the three books considered in this essay encourages a historicist turn, situating AI and related technologies in their historical moment and seeking lessons from past technological revolutions that similarly challenged norms, even as they revealed or exacerbated inequalities through the distribution of benefits and risks.
Verity Harding’s AI Needs You draws on three late-twentieth-century examples: the space race, in vitro fertilization, and the internet.Footnote 25 Waqar Zaidi looks further back in the century, to the emergence of aviation and atomic energy and the rise of what he terms “technological internationalism.”Footnote 26 Daron Acemoglu and Simon Johnson propose a more ambitious sweep of a thousand years of “power and progress,” although they focus mostly on the period from the industrial revolution onward.Footnote 27
None of these books explicitly foregrounds international law or institutions as the mechanism that should play a leading role in regulating emerging technologies like AI. Nevertheless, each has much to say about the possibilities and limitations of global efforts to govern them—if only through the frustration the authors variously express at the inadequacy of market- and state-based efforts. Indeed, if there is a throughline that resonates with each work, it is the catastrophic mismatch between those who are developing AI and those who are using it. That inequality of arms is, in turn, exacerbated by the incentives that encourage technology companies to “move fast and break things”Footnote 28 while governments are left flatfooted or left behind.
I. Artificial Intelligence Needs Who?
Harding’s résumé gives her a unique perspective on the various camps involved in technology policy, including stints as global head of policy at Google DeepMind and as an adviser to Nick Clegg, when he was deputy prime minister of Britain. She now directs the AI & Geopolitics Project at Cambridge University’s Bennett Institute for Public Policy. Her stated objective in AI Needs You is to seek lessons from past technological transformations, though she sets a somewhat arbitrary temporal cut-off at World War II and largely limits herself geographically to the United States and Britain. Indeed, the “you” of the title is often explicitly directed at those in Western democracies.Footnote 29
Nonetheless, her book is an urgent challenge to the passivity with which many people and governments appear to view the development of AI today. Like it or not, we are already shaping this emergent technology. Well-known problems, such as the potential for discrimination, are not a reflection of any inherent bias on the part of our silicon creations. On the contrary, such bias is typically a faithful reflection of the data that we have, directly or indirectly, fed into these systems. AI "is not human," Harding observes. "But it is us" (Harding, p. 12).Footnote 30
Harding chooses three technological inflection points to argue that democratic societies can enable “a myriad of citizens” (Harding, p. 26) to take an active role in shaping the future of AI.Footnote 31 The first is the Space Race—the Cold War rivalry between the United States and the Soviet Union as each sought dominance in orbit and beyond. Technological advances in spaceflight had clear implications for security, including the ability to launch missiles and position satellites for reconnaissance, but efforts to launch uncrewed and then crewed vessels beyond our atmosphere captured the wider public imagination.Footnote 32 Harding argues that U.S. Presidents Eisenhower, Kennedy, and Johnson showed a willingness to balance national defense and the “greater ideals of international cooperation and pacificism” (Harding, p. 26), culminating in the Outer Space Treaty of 1967.Footnote 33 AI, she argues—exaggerating, to be sure—“like space in the middle of the twentieth century, is a new frontier and a blank sheet for global norms” (Harding, p. 65).Footnote 34
Her second analogy is in vitro fertilization (IVF). Following the birth of Louise Joy Brown in England in 1978, the first person conceived using the new technique, a biotechnology revolution was shaped by the “careful setting of boundaries and pursuit of consensus” (Harding, p. 26). Of the examples considered in her book, Harding finds the greatest similarity between handwringing over AI today and the biotech debates of the 1970s and 1980s, in particular fears “about corporate influence, about an unknown future, about what it means to be a human” (Harding, p. 76). Yet the goal of fertility treatment—enabling otherwise infertile families to have a child—is unusually clear, and the red lines beyond it more easily debated and drawn. In the period Harding considers, these included limits on human embryo research, with provision for licensing and a “fourteen-day rule” that banned experimentation on embryos more than two weeks after fertilization (Harding, p. 95). Britain’s IVF legislation may have been progressive, but Harding over-eggs it somewhat in describing it as “the most innovative and world-leading scientific regulation of the century” (Harding, p. 92).
The third comparison is the early development of what became the internet, which she describes as “a story of convergence: between baby boomers who grew up believing the promise of 1960s liberalism and those who felt betrayed by it; between young modernizing progressive politicians and the business titans of the new Gilded Age; and between the newly developed ‘internet community’ and those tasked with regulating it” (Harding, p. 126). She notes in particular the manner in which the Internet Corporation for Assigned Names and Numbers (ICANN) established a free and open global network through multistakeholder and multinational cooperation, epitomized by “unglamorous efforts by normal people in meeting rooms trying to make things work” (Harding, p. 27).
These are curious examples on which to rest an argument for wider and more participatory public involvement in charting the path of new technologies. Space exploration is today driven precisely by private interests, with the U.S. space program dominated by a single individual—Elon Musk—who also happens to be a leading figure in AI.Footnote 35 Reproductive rights have become some of the most politically divisive issues in the United States.Footnote 36 As for governance of the internet, decentralized control might preserve freedom and openness while encouraging innovation, but that openness has also allowed the proliferation of tools that enhance surveillance, monetize human attention, and replace human labor.Footnote 37 (Better examples of control might have included the limits placed on weapons of mass destruction, which have largely remained unused and in state hands;Footnote 38 positive examples of deployment of technology for widespread benefit might have included public utilities, from water and sanitation services to the electricity grid.Footnote 39)
Nonetheless, Harding’s central message is that developing AI that recognizes our weaknesses, aligns with our strengths, and serves the public good requires greater participation. Her book is most compelling in its argument that the future cannot be left to the innovators and disruptors alone. Unfortunately, she concedes (drawing on personal experience), compromise, humility, and “acceptance that your world view might not be correct” are not qualities found in abundance in the tech industry (Harding, p. 120).
The absence of those qualities amplified the breakdown of trust that she documents, starting with the Snowden revelations of 2013, which showed that even democracies were abusing the surveillance potential of the digital world. That, in turn, was compounded by the realization that the economic model of this new world also relies on corporations harvesting vast amounts of data: “Artificial Intelligence is being built in an environment of shattered trust—between citizens and their governments, between governments and powerful corporations, and between political ideologies of liberal democracy and authoritarianism” (Harding, pp. 210–11).Footnote 40
Although the title of her book and its closing words are a call to action to the global (or at least Western) “you,” there is also a strong thread of “great man” history running through Harding’s account. In her telling, the success of the Space Race depended on exceptional leadership by three American presidents. IVF’s threading of the needle of compromise was due in significant part to the personal qualities of Baroness Warnock and Prime Minister Margaret Thatcher. And much of the discussion of the early internet focuses on the pivotal role played by the junior senator from Tennessee, Al Gore.
Indeed, one of Harding’s first lessons is the importance of “powerful political leadership to exert influence over the future direction of technology, and humanity” (Harding, pp. 65–66). Today, “while we have the technological power to lead, there is neither the political will nor capacity to do so in a way that could generate benefits for humanity worldwide” (Harding, p. 38).
Unfortunately, Donald Trump’s second presidential term offers little hope of addressing such an abdication of leadership. On the contrary, he appears to have doubled down on it by elevating tech titans above his own cabinet at his inauguration and offering Elon Musk office space adjacent to the White HouseFootnote 41—even if that tenancy proved predictably brief.Footnote 42 One of Trump’s first acts as president was to roll back even the thin gruel of his predecessor’s executive order, which had sought to bypass legislative deadlock and address at least a few of the risks posed by AI through administrative action.Footnote 43
In response to the vacuum, some saw opportunity. Enthusiasts in the EU had long hoped that its first-mover status on general AI regulation would give rise to a “Brussels effect” analogous to the impact the EU has had on data protection worldwide.Footnote 44 China has also experimented with domestic regulation and flirted with a thin layer of coordination at the global level.Footnote 45 Yet the greatest consequence of the vacuum has been to slow any moves towards serious governance of AI and further elevate the status and leverage of technology companies. One indication was the rise and fall of AI safety, from the AI Safety Summit at Bletchley Park in Britain in November 2023 to a second meeting of what appeared to be a growing AI safety network in Seoul the following May. Yoshua Bengio led a team that produced the International AI Safety Report in January 2025.Footnote 46 By the time the meeting moved to Paris in February 2025, however, it had been renamed the “AI Action Summit” and at times resembled less an intergovernmental meeting to rein in the private sector than a trade fair.Footnote 47
In a section toward the end of her book, aptly titled “The Red, White, and Blue Elephant in the Room” (Harding, p. 231), Harding argues that, half a century after the Outer Space Treaty was signed, the United States has another chance to lead. She invokes the advice Warren Buffett famously offered the musician Bono, who was seeking a strategy to convince the United States to support funding to fight AIDS in Africa: “Don’t appeal to the conscience of America, appeal to its greatness.”Footnote 48 For the time being, however, with regard to technology policy at least, the United States appears content not to lead but to follow.
II. The Rise and Fall of Technological Internationalism
The isolationism of Donald Trump is hardly unique in U.S. politics. George Washington himself famously abjured foreign entanglements in his farewell address, though subsequent presidents quickly discovered that this was easier said than done.Footnote 49 Indeed, his successors were among the great architects of the modern international order—Woodrow Wilson and the League of Nations, Franklin Roosevelt and the UN.Footnote 50 Waqar Zaidi taps into this theme in his chronicle of efforts to internationalize two technologies that came to dominate twentieth-century warfare: aviation and atomic energy (Zaidi, p. 26). Each case saw efforts, notably on the part of U.S. and British internationalists, to take these technologies out of government hands and place them under the control of international organizations. His book, based on a doctoral thesis completed at Imperial College London under the supervision of historian David Edgerton, keeps its focus narrow. Indeed, the term “Artificial Intelligence” appears but once, on the very last page of the book—a single sentence drily noting that, like aviation and atomic energy, AI today offers economic growth, arms races, “and possibly the extinction of the human race” (Zaidi, p. 247).
In the eyes of the activists of their time, aviation and atomic energy were also potential vehicles for peace.Footnote 51 Such awesome power both demanded and made possible the creation of a new liberal world order. In the case of air power, civil aviation could bind the globe through trade and communication, while an international air force would oversee collective security (Zaidi, pp. 107–11); after the devastation of World War II, international control of nuclear weapons could prevent catastrophic conflict and strengthen the fledgling United Nations (Zaidi, pp. 204–09).
Both efforts failed. Yet their failures illuminate the politics of the time and the social currents that supported them, echoes of which lived on in the more modest oversight of civil aviation and atomic energy that did eventuate, as well as in the techno-globalist rhetoric that continues today.
In retrospect, those earlier efforts were part of a liberal internationalist arc that saw its beginnings in the nineteenth century (Zaidi, pp. 20–23).Footnote 52 In form, it saw the shift from bilateralism to multilateralism, notably including the first international organizations recognizable as such.Footnote 53 In substance, it encompassed efforts to restrain the use of force as well as to position international law as the conscience of the “civilized” world.Footnote 54 There is a darker aspect to this history, of course. Much as Martti Koskenniemi traced the throughline from colonialism to modern human rights,Footnote 55 proposals to use air power to maintain global order applied lessons learned in the governance of far-flung territories of empire.Footnote 56 The civilizing mission has long depended on methods that are far from civil.
The destructive power of these new technologies, along with their enormous potential benefits—including, in the case of nuclear power, the prospect of electricity “too cheap to meter”Footnote 57—saw the emergence of the technological internationalism of Zaidi’s title. He traces a loose consensus that the new “machine age” required international governance through technical expertise.
Many early proposals for notionally global security forces were in fact premised on specific countries’ armed services—notably one’s own—operating under an international flag (Zaidi, pp. 59–60). The Economist of the 1930s was wary of French ambitions in particular, but concluded that the case for internationalization was “overwhelming” as “[f]lying is supra-national in its very nature.”Footnote 58 Discussion of the internationalization of aviation became more mainstream in the course of that decade, including when popularized in H.G. Wells’s book The Shape of Things to Come and its later film adaptation, Things to Come.Footnote 59
The horrors of World War II brought such aspirations to their zenith: planning for the post-war order included widespread discussion of an internationalized air force. Indeed, the 1944 meeting of the American Society of International Law featured detailed discussion of the merits of such a force, which could provide “an effective spearpoint for police action capable of immediate use by the international council at any point where aggression might occur or be threatened.”Footnote 60 By the Dumbarton Oaks Conference some months later, however, enthusiasm for a truly international or exclusively aerial combat force had waned. The Soviet delegation raised the possibility of such an entity, but the compromise was that national contingents would be made available to the proposed Security Council “on its call and in accordance with a special agreement or agreements.”Footnote 61 This was later formalized as Article 43 of the UN Charter, with a commitment that the specifics of the “numbers and types of forces, their degree of readiness and general location” would be negotiated “as soon as possible.”Footnote 62 Despite several enforcement actions over the succeeding decades, not a single agreement as envisaged by the Charter has yet been concluded.Footnote 63
The longevity of the technological internationalist view, at least among a certain class of intellectuals, can be seen in proposals that the latest technology offering tremendous benefits alongside real risks—AI—should also be governed in some measure by a cadre of global experts. The analogy with nuclear energy and the atomic bomb in particular is well-worn, having been embraced by academics,Footnote 64 leaders of technology companies,Footnote 65 and the secretary-general of the United Nations himself.Footnote 66
The limits of that analogy are obvious. Nuclear energy comprises a well-defined set of techniques using specific materials that are unevenly distributed around the world. AI, by contrast, is an amorphous term whose applications are extremely wide and difficult to contain. Atomic bombs are expensive to build and difficult to hide; weaponized AI promises to be neither.
Still larger problems may be political and structural. Politically, there is no appetite for anything remotely as elaborate as the International Atomic Energy Agency (IAEA) for AI.Footnote 67 Despite early enthusiasm on the part of the secretary-general,Footnote 68 the Global Digital Compact, adopted by member states in late 2024, did not even mention an agency, calling instead for the UN secretary-general to submit a proposal for an office that would draw upon existing resources to “facilitate system-wide coordination” in relation to AI.Footnote 69
Structurally, international organizations like the UN are ill-suited to—and often vehemently opposed to—the direct participation of private sector actors. In March 2024, for example, the General Assembly adopted its first-ever resolution on regulating AI. The non-binding document calls on member states “and, where applicable, other stakeholders” not to use AI systems that pose undue risks to the enjoyment of human rights.Footnote 70 A few paragraphs later, the Assembly “encourages the private sector to adhere to applicable international and domestic laws.”Footnote 71
Once more, we confront a mismatch between the interest in regulating and the political mechanisms that might enable it. Zaidi concludes his book with the observation that technology through the ages has tapped into our existential hopes and fears, often emerging as “carriers of our dreams and nightmares” (Zaidi, p. 247). With regard to AI, at least, waking up to address this latest challenge requires more than waiting for the United States to assert its role as a shining city on a hill, or hoping that the United Nations will somehow save humanity from its silicon creations.Footnote 72
III. Reining in the Digital Robber Barons
Where Harding and Zaidi limit themselves to technological changes within living memory of many people, Acemoglu and Johnson take a more expansive view, aiming to encompass a millennium of progress in around half a thousand pages. For the paperback edition of their work, the two MIT professors—who were among the 2024 Nobel Prize winners not connected to or relying on AIFootnote 73—summarize a key finding as being that “really bad outcomes are possible when deluded technology leaders are able to impose their messianic visions on society” (Acemoglu & Johnson, p. xiii).
None of the books discussed in this essay are entirely pessimistic about AI. Yet they are all deeply suspicious of the claims of techno-utopians that AI will yield nothing but benefits. The venture capitalist Marc Andreessen epitomized this Panglossian view in a document literally called “The Techno-Optimist Manifesto,” asserting that the productivity boost from technologies such as AI “drives wages up, not down. This is perhaps the most counterintuitive idea in all of economics, but it’s true, and we have 300 years of history that prove it.”Footnote 74 (It is not true, and he does not prove it.) Amazon’s Jeff Bezos pushed this into caricature in his final letter to shareholders as chief executive officer. Trying to head off criticisms of worker treatment that had led to fights over unionization and concerns about worker safety, Bezos promised a new commitment to “a better vision for our employees’ success.”Footnote 75 The form this vision would take could have been lifted from Brave New World, including “new automated staffing schedules that use sophisticated algorithms to rotate employees among jobs that use different muscle-tendon groups to decrease repetitive motion.”Footnote 76
It is true that most of us are materially better off than our ancestors. The reason is only partly technology, however. Acemoglu and Johnson argue that it is also because “citizens and workers in early industrial societies organized, challenged elite-dominated choices about technology and work conditions, and forced ways of sharing the gains from technical improvements more equitably” (Acemoglu & Johnson, p. 7).
There are clear echoes of Harding here, but Acemoglu and Johnson are calling for something more organized and strategic. They also interrogate the idea that technological progress is always economically progressive. On the contrary, they argue, the last thousand years saw serial fights over the direction of technology and the type of progress, with accompanying winners and losers (Acemoglu & Johnson, p. 34). Their main target is the “productivity bandwagon”—the assumption that new machines that increase productivity will also increase wages and benefit everyone, not merely the entrepreneurs and owners of capital (Acemoglu & Johnson, p. 14). The book considers the rise of modern agriculture, the industrial revolution, and the emergence of digital technologies. Each of these periods saw a tension between who led change and who benefited from it.
The main battleground is domestic, including traditional political institutions and organized labor.Footnote 77 A signal inflection point has been the shift away from technology that created new tasks and opportunities toward the automation of work and the cutting of labor costs, a shift they attribute to “lack of input and pressure from workers, labor organizations, and government regulation” (Acemoglu & Johnson, p. 37). In some ways, it is ironic that digital technology played such a role. Echoing Harding again, they note that the early days of the computer revolution were defined by decentralization and freedom, bordering on anarchy (Acemoglu & Johnson, pp. 253–54). An observer might have predicted that subsequent decades would “further bolster countervailing powers against big business, create new productive tools for workers, and lay the foundations of even stronger shared prosperity” (Acemoglu & Johnson, p. 255).
Instead, digital technologies became “the graveyard of shared prosperity” (Acemoglu & Johnson, p. 255). Though other factors were at play—notably globalization and the weakening of the labor movement, in the United States in particular—the change in the direction of technology from the 1970s onward was decisive. Much of the energy and creativity has gone into replacing humans through automation and surveilling those workers who remain (Acemoglu & Johnson, pp. 297–338). That trend appears to have less to do with the wickedness of corporate titans than with the abdication of public officials. Left unfettered, the incentives embedded within contemporary capitalism encourage labor-saving technologies that offer immediate cost reductions and enhanced managerial control; the diffuse, longer-term benefits of complementing human work are harder to monetize and rarely rewarded by financial markets. Combined with diminished countervailing pressure from unions and organized civil society, those incentives steer innovation towards displacing workers rather than empowering them, entrenching a model of progress that concentrates gains in the hands of owners and investors.
In opposition to machine intelligence, Acemoglu and Johnson encourage the pursuit of machine usefulness: the search for ways in which AI could better complement human workers rather than merely replace them (Acemoglu & Johnson, pp. 327–32). Their call to action can be vague at times: “altering the narrative, building countervailing powers, and developing technical, regulatory, and policy solutions to tackle specific aspects of technology’s social bias” (Acemoglu & Johnson, p. 38). The final chapter does offer a raft of policy reforms, but their central message is that passively accepting the social costs of technology is not the only option. The Gilded Age of the late nineteenth century also saw rapid technological change—railways and oil, steel and finance—along with rent-seeking by the robber barons of the era. Checking the massive inequality that followed depended on action by civil society, notably journalists (the original muckrakers) and organized labor, driving a political movement that in turn led to greater regulation and efforts to rein in the power of corporate titans through antitrust and campaign finance reform (Acemoglu & Johnson, pp. 383–86).
Unfortunately, such institutions barely exist today. Although there is impressive reporting on the power of the modern digital robber barons, the decline of journalism as an industry has been exacerbated by the rise of disinformation supercharged by generative AI.Footnote 78 “Democracy dies in darkness” (Acemoglu & Johnson, p. 352), Acemoglu and Johnson note, alluding to the mission statement adopted by the Washington Post at the start of President Donald Trump’s first term in office.Footnote 79 “But it also struggles under the light provided by modern artificial intelligence” (Acemoglu & Johnson, p. 352). It is a telling indicator of the impact of market forces that the Post more recently abandoned that aspirational slogan for the less confrontational and market-friendly “Riveting Storytelling for All of America.”Footnote 80
With respect to organized labor, the marginalization of the working class more generally has been linked to the rise of authoritarian politics in the United States and elsewhere.Footnote 81 A rare example of unions mobilizing workers to limit the impact of AI came from the very elites typically derided by populist politicians: Hollywood. The Writers Guild of America strike in 2023 sought, among other things, to limit the use of AI to helping with research or facilitating scriptwriting—rather than replacing writers.Footnote 82 The strike gained extraordinary publicity in part because it directly affected the media landscape, but also because workers effectively exercising power in the United States—asserting and claiming leverage over their corporate masters—is very much the exception rather than the norm.
Acemoglu and Johnson’s other prescriptions include measures to alter the incentives for technology companies. Taxes currently encourage automation, for example: in the United States, a company investing in automation equipment or software pays a fifth of the tax it would face if it hired workers to perform the same tasks (Acemoglu & Johnson, p. 406). They do not support an automation tax, but suggest limiting such incentives, along with denying patent protection for surveillance technologies (Acemoglu & Johnson, p. 403).
A larger problem of incentives is the extent to which revenue streams for tech companies currently rely on advertising as opposed to, say, subscriptions. Subscription models encourage companies to curate quality experiences for their users; advertising generally rewards the quantity of engagement. Few users are willing to pay for content, however, giving rise to a model that funds content by monetizing users’ personal data. That model was summed up in the pithy phrase that “if something is free, you are the product,”Footnote 83 and later termed “surveillance capitalism” by Shoshana Zuboff.Footnote 84
As a means of shifting these incentives, Acemoglu and Johnson propose a “nontrivial digital advertising tax.” Although they do not put a figure on it, they are optimistic that it would encourage “alternative business models” (Acemoglu & Johnson, pp. 413–14). In addition to subscriptions, which fund companies such as Netflix and the New York Times, those alternative models include sites like Wikipedia, which draws on the wisdom of the crowd and structured governance institutions to develop and maintain standards—and which, more importantly, is funded through a not-for-profit foundation (Acemoglu & Johnson, p. 379).
There are parallels here with the internationalist approaches running through Zaidi’s work. For all their discussion of organized labor, however, it is telling that—presumably in deference to their American audience—Acemoglu and Johnson do not bother to mention a third obvious possibility: public ownership.Footnote 85 They prefer to work within the existing regime, tweaking incentives and relying in large part on existing structures and norms.
The limits of that approach were on display in the corporate drama that unfolded at OpenAI, the company behind ChatGPT. OpenAI was established as a non-profit in 2015, heralded with lofty statements about how this status enabled it to “benefit humanity as a whole, unconstrained by a need to generate financial return.”Footnote 86 As the costs of training the large language model that became its signature product escalated, the company announced that it would adopt a “capped-profit” structure, allowing it “to rapidly increase our investments in compute and talent.”Footnote 87 The contradiction between these two worldviews culminated in the spectacle of the not-for-profit Board firing CEO Sam Altman in November 2023—only for him to be reinstated days later and the Board itself replaced. As various commentators pointed out: “The money always wins.”Footnote 88
IV. The Tech-Industrial Complex
So, what is to be done? If companies cannot be trusted to self-regulate, if governments are unwilling to legislate, and if international organizations are unable to do more than coordinate—who or what might help mitigate the risks and more evenly distribute the benefits of AI?
Returning to Harding’s titular exhortation, the first answer is, of course, us. Users can choose not to support companies that ignore safety or exacerbate inequality. The problem is that individual users have vanishingly little leverage over companies whose business model is premised in part on hiding that lack of agency from consumers.Footnote 89 The tragedy of AI governance lies in this inverse relationship between leverage and interest: users have interest but no leverage; tech companies have leverage but no interest in constraining their own behavior if doing so would limit their profits.Footnote 90
Just as organized labor offered glimmers of hope in increasing workers’ bargaining power, organized users might have a greater say in how technology is developed and deployed. Global privacy movements, for example, shifted markets at least modestly, as reflected in the rise of privacy-by-design and in some companies treating personal data protection as a market differentiator.Footnote 91 It is conceivable that similar norms might emerge in the AI space, perhaps along the lines of “responsible” AI that is more trustworthy and less prone to hallucinations, or more “open” in the sense of greater transparency as to how decisions are made and how models are trained.Footnote 92
Another form of transparency concerns the costs of AI, notably its environmental impact.Footnote 93 Various tech companies—and some countries—have announced that their investments in AI mean that they are giving up on climate targets, though they have largely refrained from passing those costs on to consumers.Footnote 94 More information about the costs of AI, whether through moves to subscription models or at least disclosure of the electricity and water consumed by the latest AI systems, might influence user and, in turn, corporate behavior.
Market mechanisms will not be enough, however. Days before he left office, President Joe Biden spoke of the emergence of a “tech-industrial complex,”Footnote 95 echoing Dwight Eisenhower’s own valedictory address on the “military-industrial complex.”Footnote 96 While Eisenhower was concerned about the influence of the arms industry on military procurement and defense policy,Footnote 97 Biden warned that “an oligarchy is taking shape in America of extreme wealth, power and influence that literally threatens our entire democracy.”Footnote 98
In the wake of the global financial crisis of 2007–2008, one lesson learned was that if certain banks were “too big to fail,” then they were too big in the first place.Footnote 99 Echoing earlier battles from the Gilded Age, there is a strong argument that tech companies—or tech entrepreneurs—that are too big to regulate are too big, period. There have, of course, been efforts to break up those companies. The U.S. Justice Department is currently suing GoogleFootnote 100 and Apple,Footnote 101 while the Federal Trade Commission has ongoing actions against Amazon,Footnote 102 having unsuccessfully brought actions against MicrosoftFootnote 103 and Meta.Footnote 104 In addition to its own antitrust actions,Footnote 105 the European Union has linked size with more elaborate obligations and reporting requirements for “gatekeepers” under the Digital Markets ActFootnote 106 and “very large” online platforms and search engines under the Digital Services Act.Footnote 107 Only China, however, has successfully broken up tech companies, in a purge lasting from 2020 to 2023 that wiped trillions of dollars off their share value,Footnote 108 with Alibaba divided into six new entities.Footnote 109 These were costs that Beijing was willing to bear, but at which Washington or Brussels might balk, particularly given President Trump’s new chumminess with the tech elite.
Returning to the example of the East India Company,Footnote 110 an alternative to divestiture is nationalization. In the wake of the 1857 Indian Rebellion, Britain concluded that leaving such economic and military power in private hands was untenable. The Government of India Act 1858 dissolved the Company, transferred all its territories, armies, and administrative machinery to the Crown, and replaced its directors with a Secretary of State for India, directly answerable to Parliament.Footnote 111 It is conceivable that a similar fate could befall today’s most powerful AI companies: if their systems became so essential to national security or economic stability that government oversight proved insufficient, states might move from regulation to outright control, treating AI infrastructure as a public utility or national asset. For the time being, there is no such appetite for confrontation with the technology companies, a timidity reinforced by the fear of holding back innovation or falling behind geopolitical rivals, or by the more mundane concerns of running for political office in a social media age.Footnote 112
As for international institutions, another Eisenhower speech, from the very first year of his presidency, suggests the possibilities and limitations. By 1953, the technological internationalist moment had passed and the prospect of international control of nuclear weapons had faded. Eisenhower proposed an alternative in his “Atoms for Peace” address to the UN. If the earlier idea had been utopian, this was idealistic in a different way: instead of concentrating nuclear materials and expertise in a supranational body, they would be disseminated across the globe, encouraging states to use them for peaceful purposes in exchange for commitments to renounce the pursuit of the bomb.Footnote 113
The analytical, political, and structural limits of this analogy have been touched on earlier.Footnote 114 The biggest difference between attempts to control nuclear power in the 1950s and AI today, however, may be the historical context. For even as Eisenhower spoke in New York, the effects of the nuclear blasts on Hiroshima and Nagasaki were still being felt.Footnote 115 The “dread secret” of those weapons, he warned, was no longer confined to the United States and no longer containable. To do nothing was to accept the hopeless finality that “two atomic colossi are doomed malevolently to eye each other indefinitely across a trembling world.”Footnote 116
There is, at present, no such threat from AI—nor is there comparably visceral evidence of its potential for harm. It is possible that concerns are overblown. Or, as some would argue, AI itself may help solve these and sundry other problems.Footnote 117 If not, however—if the unchecked power of tech companies and their silicon sovereigns cannot be constrained or contained by users, by the market, or by states in thrall to the new mammon—then global institutions that might have helped prevent the first true AI emergency may be too late to avert the second.