This chapter argues that to understand cooperation and conflict in large-scale societies we need to blend these ideas with a systematic study of within-society conflict and the institutions and norms that structure these relations.
Chapter 1 introduces basic terminology. Terms such as artificial intelligence, data, algorithm, machine learning, neural networks, deep learning, large language models, generative AI and symbolic AI are presented to develop a sense of what AI is, how it has evolved, and what it does. The chapter also introduces some of the major conceptual disagreements in the field, driven by differing ideas about how best to develop AI as well as by philosophical differences over what intelligence means and whether machines can develop human-like intelligence.
Chapter 6 is about laws as binding mechanisms to eliminate or mitigate AI risks and harms. Most countries have AI-promotion strategies that devote little or no attention to potential problems. The number of bills proposed in national legislatures to address those problems has been increasing since the late 2010s, but only the European Union and South Korea have thus far enacted laws regulating AI. Despite the general scarcity of AI-centric lawmaking, however, some trends are emerging. First, AI regulation has been taking place, to a limited extent, in AI-adjacent realms such as data privacy and protection, consumer rights, antitrust, and children’s protection. Second, the European Union’s AI Act has set the trend for risk-based, future-proof, and technology-neutral legislation that will likely be followed by other countries. Third, the absence of national legislation in the United States, home to most cutting-edge AI technologies from the 1990s to the early 2020s, has led states and cities to launch legal initiatives. And finally, even the successful passage of a law does not address all AI risks and harms – lawmakers’ omission of military AI as an area of regulation is a case in point.
This chapter describes the successful application of advances in practical truthful mechanism design to a large-scale, computationally hard problem: the FCC’s 2016–2017 incentive auction, which reallocated tens of billions of dollars of radio spectrum from use in television broadcasting to higher-value uses in mobile broadband. The mechanism combined advances in efficiently solving NP-hard resource allocation problems (in most cases) with a new mechanism design that is simple to implement and adapts well to limited computational capacity. The auction repurposed 84 megahertz of spectrum and yielded $19.8 billion in revenue.
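The reverse-auction side of such a design can be sketched as a toy descending-clock (deferred-acceptance) procedure. Everything here — the `Station` fields, the capacity-based `can_repack` oracle, and the payment rule — is an illustrative simplification for exposition, not the FCC’s actual rules:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Station:
    name: str
    reserve: float                        # lowest payment the broadcaster would accept
    active: bool = True                   # still in the auction, willing to sell
    frozen_price: Optional[float] = None  # set once the station can no longer be repacked

def can_repack(num_repacked: int, capacity: int) -> bool:
    # Toy stand-in for the real feasibility check, which is an NP-hard
    # repacking problem solved in practice with SAT-based solvers.
    return num_repacked + 1 <= capacity

def descending_clock(stations, start_price, step, capacity):
    price = start_price
    repacked = 0  # stations that have exited and stay on the air
    while any(s.active and s.frozen_price is None for s in stations):
        price -= step
        for s in stations:
            if s.active and s.frozen_price is None and price < s.reserve:
                if can_repack(repacked, capacity):
                    s.active = False  # exit: keep broadcasting, no payment
                    repacked += 1
                else:
                    # Cannot be repacked: the station must sell, and its
                    # price freezes at the last clock price it accepted.
                    s.frozen_price = price + step
    # Winners are the frozen stations, bought out at their frozen prices.
    return [(s.name, s.frozen_price) for s in stations if s.frozen_price is not None]
```

In this sketch the clock price falls each round; a station whose reserve is passed exits (and keeps broadcasting) only if it can still be feasibly repacked, and otherwise its price freezes and it sells. The deferred-acceptance structure is what makes truthful bidding a dominant strategy in designs of this kind.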
Chapter 5 addresses business self-regulation as an AI governance model. Voluntary AI principles and codes of conduct have risen to prominence in the absence of AI laws since the mid-2010s. Numerous large companies have established internal or external advisory boards or councils and responsible AI teams to hold themselves accountable. The evidence on these self-regulatory bodies is mixed: journalistic reports suggest improvements in business conduct in a number of cases, but one cannot ignore the fact that none of the boards, councils, or teams can force businesses to respect their decisions or suggestions. Worse still, some powerful AI companies have ignored calls to create self-regulatory institutions or have disbanded them at the first sign of friction.
This chapter introduces the three contributions that constitute Part VIII, “Human Society.” They seek to understand (1) how different institutions emerge as a result of within-society conflict, and how social and political innovations develop to deal with it; (2) the role of sanctions and enforcement, and of strategies not just to play the game that society presents us with but to change the game itself; and (3) the role of equilibrium selection, endogenous preferences, and the evolution of the culture in which the economy is inevitably embedded.
The equilibrium notion of Nash has been the primary tool for predicting strategies and outcomes of games with rational players. But the Nash equilibrium is a weak criterion for games with dynamic interactions and/or private information among the players. Stronger criteria, called equilibrium refinements, are intended to remedy deficiencies that stem from these features. This chapter summarizes the motives for refinements, surveys the main refinements themselves, and reports progress on characterizing the strongest refinement, called stability, via axioms that express basic properties of rational behavior.
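As a reference point, the baseline criterion being refined can be stated as follows (a standard textbook formulation in our notation, not the chapter’s):

```latex
% A profile \sigma^* is a Nash equilibrium if no player gains by deviating:
\[
  u_i(\sigma_i^*, \sigma_{-i}^*) \;\ge\; u_i(\sigma_i, \sigma_{-i}^*)
  \qquad \text{for all } \sigma_i \in \Sigma_i \text{ and all players } i .
\]
% A refinement such as trembling-hand perfection strengthens this by requiring
% \sigma^* to be the limit of totally mixed profiles \sigma^n with each
% \sigma_i^* a best reply to \sigma_{-i}^n, ruling out weakly dominated play.
```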
Since ordinals will be key throughout the book, we develop them from scratch, defining them as equivalence classes of well-orderings. We concentrate on the countable ordinals and introduce basic ordinal operations such as exponentiation.
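The operations are defined by transfinite recursion; for exponentiation, for instance, the standard clauses are (a conventional formulation, not necessarily the book’s exact notation):

```latex
\[
  \alpha^{0} = 1, \qquad
  \alpha^{\beta+1} = \alpha^{\beta}\cdot\alpha, \qquad
  \alpha^{\lambda} = \sup_{\beta<\lambda}\alpha^{\beta}
  \quad \text{for limit ordinals } \lambda \text{ (taking } \alpha \ge 1\text{)}.
\]
% E.g. 2^\omega = \sup\{2^n : n<\omega\} = \omega, which is countable,
% in contrast to cardinal exponentiation.
```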
This chapter is concerned with multiwinner elections, an emerging topic in the area of computational social choice. Much of the classic literature in social choice theory deals with functions that map ordinal preferences over candidates to a winning candidate or perhaps a ranking of the candidates. The goal of multiwinner elections is to select a fixed-size set of candidates: a committee. This gives rise to new rules as well as new axioms. The chapter focuses on the case of approval-based preferences and axioms capturing the idea of proportional representation.
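One well-known approval-based rule with proportionality properties is Proportional Approval Voting (PAV), which scores a committee by giving each voter the harmonic utility 1 + 1/2 + … + 1/t for t approved committee members. A minimal brute-force sketch (exact but exponential, so suitable only for toy instances; function names are ours):

```python
from itertools import combinations

def pav_score(committee, approvals):
    # Each voter contributes 1 + 1/2 + ... + 1/t, where t is the number
    # of committee members the voter approves of.
    total = 0.0
    for ballot in approvals:
        t = len(ballot & set(committee))
        total += sum(1.0 / j for j in range(1, t + 1))
    return total

def pav_winner(candidates, approvals, k):
    # Exhaustive search over all size-k committees; ties broken by
    # the lexicographic order of the candidate tuples.
    return max(combinations(sorted(candidates), k),
               key=lambda c: pav_score(c, approvals))
```

The diminishing harmonic weights are what yield proportional representation: once a voter already has representatives on the committee, adding another candidate that voter approves of counts for less, so a cohesive minority cannot be shut out by a slightly larger majority.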
The literature on learning in games interprets equilibrium strategy profiles as the long-run average behavior of agents who are selected at random to play the game. As suggested by Nash, in normal-form games we expect that as the agents accumulate evidence about play of the game they will develop accurate beliefs, so that the stationary points of the process correspond to the Nash equilibria. The definition of Nash equilibrium applies unchanged to games in extensive form, but the learning foundation for it does change, because in games with a nontrivial extensive form simply playing the game repeatedly may not lead agents to know how their opponents would respond to deviations that the agents have not tried. Thus there is no reason to expect learning by myopic agents to lead to Nash equilibrium in general games, as agents may not experiment enough to learn the consequences of deviating from the equilibrium path. Instead, learning is consistent with self-confirming equilibrium, introduced by Fudenberg and Levine in the early 1990s. The focus here is on settings where the agents are patient, so they do have an incentive to experiment. In this case, Nash’s mass action interpretation of equilibrium is again valid. But extensive-form games typically have many equilibria, and not all of them seem equally plausible. An advantage of the learning approach is that some actions that are off-path according to the limiting equilibrium distribution are not counterfactual, but will actually be played by young agents as “experiments,” so that equilibrium refinements can be derived from properties of optimal experimentation.
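In one common formalization of the Fudenberg–Levine idea (our notation, sketching the concept rather than quoting the chapter): each player’s strategy must be optimal against a belief about opponents’ play, and that belief must match actual play at every information set reached along the path, while it may be arbitrarily wrong off the path:

```latex
\[
  \sigma_i \in \operatorname*{arg\,max}_{s_i} \; u_i(s_i, \mu_i),
  \qquad
  \mu_i(h) = \sigma_{-i}(h)
  \ \ \text{for every information set } h
  \text{ reached with positive probability under } \sigma .
\]
% Because beliefs are unconstrained off the equilibrium path, the set of
% self-confirming equilibria can strictly contain the Nash equilibria.
```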