In the winter of 2021, the Swedish Nobel Foundation organized a Nobel symposium, 'One Hundred Years of Game Theory', to commemorate the centenary of mathematician Émile Borel's 'La théorie du jeu et les équations intégrales à noyau symétrique'. The symposium gathered roughly forty of the world's most prominent scholars, whose work ranges from mathematical foundations to applications in economics, political science, computer science, biology, sociology, and other fields. One Hundred Years of Game Theory brings together their writings to summarize, and put in perspective, the main achievements of game theory over the past century. The contributors take stock of what has been accomplished and contemplate future developments and challenges. Offering cross-disciplinary discussions between eminent researchers, including five Nobel laureates, one Fields medalist, and two Gödel Prize winners, the book provides a fascinating landscape of game theory and its wide range of applications.
Under what conditions do the behaviors of players, who play a game repeatedly, converge to a Nash equilibrium? If one assumes that the players’ behavior is a discrete-time or continuous-time rule whereby the current mixed strategy profile is mapped to the next, this becomes a problem in the theory of dynamical systems. We apply this theory, and in particular the concepts of chain recurrence, attractors, and the Conley index, to prove a general impossibility result: There exist games for which any dynamics will fail to converge, from certain initial conditions, to the set of Nash equilibria. The games which help prove this impossibility result are degenerate, but we conjecture that an analogous result holds, under complexity assumptions, for nondegenerate games. We also prove a stronger result for approximate Nash equilibria: For a set of games of positive measure, there are no game dynamics that converge to the set of approximate Nash equilibria for some substantial approximation bound. These impossibility results also apply to dynamics with memory. We argue that these results further weaken the appeal of the Nash equilibrium as the solution concept of choice in game theory, and discuss alternatives suggested by the dynamics point of view.
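As a toy illustration of non-convergent dynamics (not the chapter's construction, which rests on chain recurrence and the Conley index), discrete best-response dynamics in matching pennies cycle forever through the four pure profiles and never approach the game's unique, fully mixed Nash equilibrium. A minimal Python sketch:

```python
# Matching pennies: row wants to match, column wants to mismatch.
# Entries are (row payoff, column payoff); strategies: 0 = Heads, 1 = Tails.
payoff = {
    (0, 0): (1, -1), (0, 1): (-1, 1),
    (1, 0): (-1, 1), (1, 1): (1, -1),
}

def best_response(player, other_move):
    """Pure best response of `player` (0 = row, 1 = column) to the other's last move."""
    if player == 0:
        return max((0, 1), key=lambda a: payoff[(a, other_move)][0])
    return max((0, 1), key=lambda a: payoff[(other_move, a)][1])

# Simultaneous best-response dynamics: each round, both players myopically
# best-respond to the opponent's previous move.
profile = (0, 0)
trajectory = [profile]
for _ in range(12):
    r, c = profile
    profile = (best_response(0, c), best_response(1, r))
    trajectory.append(profile)
# The trajectory visits all four pure profiles in a cycle of length 4.
```

The dynamic revisits its starting profile every four steps and never settles, a simple instance of the convergence failures the chapter analyzes in full generality.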
This chapter gives an extensive overview of techniques and algorithms for representing and solving large imperfect-information extensive-form games, and reports on recent breakthroughs achieved for the game of poker. These breakthroughs were made possible by advances in three key areas: (1) game abstraction (i.e., the systematic construction of significantly smaller extensive-form games that are strategically similar to the original game), (2) equilibrium-finding algorithms, and (3) solving subgames during game play in much finer abstractions than would be possible in advance. The chapter also puts forward a new proposal: reasoning about games whose rules are modeled via a programming language.
This chapter introduces the six contributions in Part IV, “Computer Science.” The main focus is on topics in algorithmic game theory, algorithmic mechanism design, and computational social choice.
This chapter shows how the theory of symmetric two-player zero-sum games, which was initiated by Borel in 1921, can be used for randomly selecting an alternative based on quantified pairwise comparisons between alternatives. It points out desirable properties satisfied by the equilibrium distribution and gives examples where these distributions arise as the limit of simple dynamic processes that have been studied across various disciplines, such as population biology, quantum physics, and machine learning.
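The kind of simple dynamic process the chapter alludes to can be sketched with multiplicative-weights self-play on rock-paper-scissors, the simplest symmetric zero-sum game: the time-averaged strategy approaches the game's uniform equilibrium distribution. The learning rate and horizon below are illustrative choices, not taken from the chapter:

```python
import math

# Rock-paper-scissors payoff matrix for the row player (symmetric zero-sum).
A = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]

def payoffs(x):
    """Expected payoff of each pure strategy against mixed strategy x."""
    return [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]

# Multiplicative-weights self-play from a non-uniform start.
eta, T = 0.05, 50000
x = [0.5, 0.3, 0.2]
avg = [0.0, 0.0, 0.0]
for _ in range(T):
    for i in range(3):
        avg[i] += x[i] / T          # accumulate the time-averaged strategy
    u = payoffs(x)
    w = [x[i] * math.exp(eta * u[i]) for i in range(3)]
    s = sum(w)
    x = [wi / s for wi in w]        # renormalize to a probability vector
# `avg` ends up close to the uniform equilibrium (1/3, 1/3, 1/3).
```

The last iterate cycles around the equilibrium, but the no-regret property of multiplicative weights makes the time average an approximate equilibrium of the zero-sum game.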
A pervasive assumption in game theory is that players’ utilities are concave, or at least quasiconcave, with respect to their own strategies. While mathematically instrumental, enabling the existence of many kinds of equilibria in many kinds of settings, (quasi)concavity of payoffs is too restrictive an assumption. For the same reasons that (quasi)concave utilities can only go so far in capturing single-agent optimization problems, they can only go so far in modeling the considerations of an agent in a strategic interaction. Moreover, the study of games with nonconcave utilities is increasingly coming to the fore as deep learning ventures into multiagent learning applications. This chapter studies which types of equilibria exist in such games, and whether they are computationally tractable, proposing paths for game theory and multiagent learning in the next 100 years.
This chapter describes the successful application of advances in practical truthful mechanism design to a large-scale, computationally hard problem: the FCC’s 2016–2017 incentive auction, which reallocated tens of billions of dollars of radio spectrum from use in television broadcasting to higher-value uses in mobile broadband. The mechanism combined advances in efficiently solving NP-hard resource allocation problems (in most cases) with new mechanism design that is simple to implement and adapts well to limited computational capacity. The auction repurposed 84 megahertz of spectrum and yielded $19.8 billion in revenue.
This chapter is concerned with multiwinner elections, an emerging topic in the area of computational social choice. Much of the classic literature in social choice theory deals with functions that map ordinal preferences over candidates to a winning candidate or perhaps a ranking of the candidates. The goal of multiwinner elections is to select a fixed-size set of candidates: a committee. This gives rise to new rules as well as new axioms. The chapter focuses on the case of approval-based preferences and axioms capturing the idea of proportional representation.
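As a concrete example of an approval-based rule with proportionality properties (the chapter surveys several; this one is Proportional Approval Voting, applied to an invented toy profile), a brute-force implementation shows how a cohesive minority wins a seat that plain approval counting would deny it:

```python
from itertools import combinations

# Toy approval profile: 5 voters approve {a, b}, 3 voters approve {c, d}.
ballots = [{"a", "b"}] * 5 + [{"c", "d"}] * 3
candidates = ["a", "b", "c", "d"]
k = 2  # committee size

def pav_score(committee):
    # Each voter contributes 1 + 1/2 + ... + 1/t, where t is the number of
    # committee members the voter approves (diminishing returns per voter).
    total = 0.0
    for ballot in ballots:
        t = len(ballot & set(committee))
        total += sum(1 / j for j in range(1, t + 1))
    return total

best = max(combinations(candidates, k), key=pav_score)

# For contrast, plain approval counting just takes the k most-approved
# candidates, giving both seats to the majority bloc.
approvals = {c: sum(c in b for b in ballots) for c in candidates}
av_committee = sorted(candidates, key=approvals.get, reverse=True)[:k]
```

PAV seats one candidate from each bloc, while plain approval counting elects both majority candidates; this is exactly the kind of proportional-representation axiom the chapter examines.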
Chapter 13 presents the second application of MATLAB to behavioral sciences: data analysis. Students review previously learned data structures often encountered in practice before applying their programming knowledge from Chapters 1 to 11 to manage each. Starting with tabular data, tables from Chapter 8 are reviewed, with students learning common data science tasks for managing one or more tabular data sets before applying their knowledge to real experimental data. Next, hierarchical data are reviewed, connecting students’ knowledge of structure arrays from Chapter 8 to a popular internet-based data format (JSON), with students applying their newfound knowledge to analyze data on the behavior of European monarchs.
Chapter 4 introduces students to logical values, a simple data type that can only take values of one and zero. While simple, logical values are essential components of program flow (conditionals, loops) which they will learn next, so mastery of them is essential before tackling those more difficult tools. Logical values can also be used to subset arrays according to their values, making them critical for complex data management tasks. Students new to programming are often unfamiliar with operations that create logical values, or which operate on logical values, so this chapter provides detailed explanations and examples to familiarize students with this new and valuable data type.
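Since the chapter's MATLAB listings are not reproduced here, a Python analogue can sketch the same three ideas: comparisons produce logical values, logical values act like ones and zeros in arithmetic, and a logical mask subsets an array:

```python
# Logical values and logical indexing, sketched in Python.
scores = [55, 88, 42, 91, 70]

# Comparison operators produce one logical value per element,
# much like MATLAB's `scores >= 60`.
passed = [s >= 60 for s in scores]        # [False, True, False, True, True]

# Logical values behave like 1 and 0 in arithmetic, so summing a mask
# counts how many elements satisfy the condition.
num_passed = sum(passed)                  # 3

# A logical mask can subset an array, keeping only the elements where it
# is True, analogous to MATLAB's `scores(scores >= 60)`.
passing_scores = [s for s, p in zip(scores, passed) if p]   # [88, 91, 70]
```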
Chapter 12 presents the first application of MATLAB to behavioral sciences: modeling behavioral phenomena using MATLAB. Students learn basic computational modeling principles before applying their programming knowledge from Chapters 1 to 11 to model two types of behavior. First, classical conditioning is modeled using the Rescorla-Wagner model, which is used to make predictions about how an organism will react to multiple stimuli when presented together, such as in the classic case of Pavlov’s dog who was trained to salivate to the sound of a bell. Next, foraging behavior in animals is modeled, wherein agents forage for food on patches of resources, learning from experience when to exploit their current patch or explore in search of more food.
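The Rescorla-Wagner update itself is compact: each presented stimulus gains associative strength in proportion to the prediction error, λ minus the summed predictions of all present stimuli. A hedged Python sketch (parameter values and the two-phase blocking design are illustrative, not the book's own code) reproduces the classic blocking effect:

```python
# Rescorla-Wagner learning: illustrative parameter names and values.
def rw_update(V, present, alpha=0.3, beta=1.0, lam=1.0):
    """One trial: every presented stimulus moves toward lam by a share of
    the prediction error (lam minus the total prediction)."""
    total = sum(V[s] for s in present)
    error = lam - total
    for s in present:
        V[s] += alpha * beta * error
    return V

V = {"bell": 0.0, "light": 0.0}
for _ in range(50):                    # phase 1: bell alone predicts food
    rw_update(V, ["bell"])
for _ in range(50):                    # phase 2: bell + light compound
    rw_update(V, ["bell", "light"])
# The bell already predicts the outcome, so the error during phase 2 is
# nearly zero and the light acquires almost no strength: blocking.
```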
Chapter 3 introduces students to functions, which rapidly expand what students can do with MATLAB. For new programming students, the chapter begins with the underlying computer science principles of inputs and outputs, drawing connections to the same concepts in mathematics. MATLAB functions are highly flexible in how they handle inputs and produce outputs, and this chapter explains those nuances in detail, using several often-used functions as examples. MATLAB also has special function features, including multiple outputs and a unique syntax for functions that use text; as with input flexibility, the textbook explains their use and provides often-used example functions.
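The input/output idea, including multiple outputs, can be sketched in Python (the book's own examples are MATLAB, where a function such as `bounds` returns several outputs; Python's analogue is returning a tuple):

```python
# Inputs and outputs: a function maps its inputs to one or more outputs.
def min_and_max(values):
    """Return both extremes of a sequence, roughly like MATLAB's bounds()."""
    return min(values), max(values)

# Multiple outputs are unpacked into separate variables at the call site.
lo_val, hi_val = min_and_max([3, 8, 2, 10, 5])   # lo_val=2, hi_val=10
```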
Chapter 8 further develops students’ understanding of data structures by introducing several new ones. Higher-dimensional arrays are generalizations of the arrays they learned in Chapter 2, but can be hard to visualize because we live in a three-dimensional world, and this chapter includes several suggestions for managing higher-dimensional data. Cell arrays and structure arrays can store any data type, and students learn the unique syntaxes needed to manage this flexibility. Tables store spreadsheet-type data, which students are introduced to formally, while also learning the many indexing and display features that make this data type critical for data analysis. Multiple function outputs from Chapter 3 are introduced as their own data type with additional tricks for managing them. The chapter concludes with general tools for learning the structure of any MATLAB data type and the methods available for using it.
Chapter 5 introduces perhaps the most difficult tools for new programmers to grasp, namely conditionals and loops, which control when, whether, and how often code executes. The presentation begins with descriptions of what they do: conditionals control whether code runs, or which piece of code runs from multiple options, while loops make code run repeatedly, either “while” some condition is met (“while” loops) or once “for” each element of a set of values (“for” loops). This presentation shows students how to translate these descriptions into MATLAB code, letting them see the correspondence between a broader programming task and the tools that accomplish it. Syntax is explored in detail, including common pitfalls and specification errors. Lastly, syntax for interrupting loops and changing their operation is presented, furthering students’ mastery of these critical tools.
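Those descriptions translate almost line-for-line into code. A Python analogue (the book's own listings are in MATLAB):

```python
values = [3, 8, 2, 10, 5]

# A "for" loop runs once for each element of a collection.
total = 0
for v in values:
    total += v            # total becomes 28

# A "while" loop runs as long as its condition holds.
n, doublings = 1, 0
while n < 100:
    n *= 2
    doublings += 1        # 1 reaches 128 after 7 doublings

# A conditional chooses which piece of code runs.
if total > 25:
    label = "high"
else:
    label = "low"
```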
Chapter 1 introduces students to MATLAB. Beginning with basic computer science concepts, students are introduced to how computers work generally, and then to the MATLAB interface itself, with instruction on the key buttons, tabs, and windows they will use virtually every time they open MATLAB. The structure of writing code is introduced: code can be tested in the console (the Command Window in MATLAB) but ultimately should be stored and saved in scripts for later re-use. Students learn the basic variable operations, creation and modification (assignment) and later use (reference), and are introduced to the utility of these ubiquitous programming tools. Basic script structure and formatting are introduced, including how to add comments to code. The chapter concludes with how to obtain help and access documentation from within MATLAB.
Chapter 14 presents the third application of MATLAB to behavioral sciences: conducting computerized experiments. Students learn basic experimental design before applying their programming knowledge from Chapters 1 to 11 to develop three experiments: the Flanker and N-Back tasks, and the Monty Hall probability puzzle. In the final, longest section of this chapter, students learn how to use the MATLAB add-on Psychtoolbox, which allows full control of and interaction with the screen, keyboard, mouse, and audio systems. The three experiments from earlier in the chapter are extended to incorporate Psychtoolbox functionality, and a new experiment, the Stroop task, concludes the chapter.
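The Monty Hall puzzle is the easiest of the three experiments to sketch outside MATLAB; a short Python simulation (an illustrative stand-in for the book's listing, not its code) confirms that switching wins about two-thirds of the time:

```python
import random

random.seed(0)  # fixed seed so the estimate is reproducible

def play(switch):
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 20000
switch_wins = sum(play(True) for _ in range(trials)) / trials
stay_wins = sum(play(False) for _ in range(trials)) / trials
# switch_wins is near 2/3, stay_wins near 1/3.
```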
Chapter 9 returns to computer science concepts from Chapter 1 and prepares students for subsequent chapters with detailed instructions for loading and saving data. A crucial component is an understanding of paths, which determine where information is stored on a computer and how to access it when needed. MATLAB has many methods for working with paths, many of which work predictably by default, and each is discussed in turn with its costs and benefits, with emphasis on the easiest way to work with multiple files: storing them all in the same place. Equipped with knowledge of data structures from Chapters 2, 4 and 8, this chapter discusses how to load and save each type of data, both in the native MATLAB data format and in formats other programs use or produce.
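The path-then-load-then-save workflow looks much the same in any language; a Python sketch using the standard library (the book itself teaches MATLAB's path, load, and save facilities, not these functions):

```python
from pathlib import Path
import json
import tempfile

# A path names where a file lives; building it explicitly avoids relying
# on whatever the current working directory happens to be.
data_dir = Path(tempfile.mkdtemp())
data_file = data_dir / "subject01.json"

# Save: write a small hierarchical record to disk in a portable format.
record = {"subject": 1, "scores": [55, 88, 91]}
data_file.write_text(json.dumps(record))

# Load: read the file back and recover the same structure.
loaded = json.loads(data_file.read_text())
```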
Chapter 2 introduces students to the basic building blocks of MATLAB data: arrays. Types of arrays are disambiguated based on their structure into scalars, which are individual values students are familiar with from mathematics, and vectors and matrices, which are often new to students. These concepts are introduced as arrangements of numbers: larger arrays are built from smaller arrays via concatenation, and can be subsetted into smaller arrays via indexing. Importantly, no linear algebra is taught or required in this section, as most behavioral science students do not come in with this knowledge, nor do most applications require it. Instead, the emphasis is on arithmetic operations students will be using most often.
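The build-up-by-concatenation and subset-by-indexing ideas carry over to any language; a Python-list analogue (MATLAB's own bracket and parenthesis syntax differs):

```python
# Concatenation builds a larger array from smaller ones.
row1 = [1, 2, 3]
row2 = [4, 5, 6]
vector = row1 + row2          # [1, 2, 3, 4, 5, 6]

# Indexing subsets a larger array into a smaller one.
middle = vector[1:4]          # [2, 3, 4]

# Element-wise arithmetic, the chapter's emphasis, applies per entry.
doubled = [v * 2 for v in vector]
```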
Chapter 11 teaches students how to make graphs in MATLAB. Specifically, students learn to make line graphs, scatterplots, bar graphs, and histograms, four basic and essential visualizations for anyone interested in presenting data. New data types are needed, building on knowledge from Chapter 2 and Chapter 8 to reinforce understanding, and new aspects of the MATLAB interface for graphs are introduced in detail. MATLAB has many ways to customize graphs, each of which is reviewed in turn, along with its strengths and weaknesses. To help students make the most of MATLAB graphics infrastructure, its hierarchical structure is explained in detail, allowing students to modify any graphics feature, using any other graphics feature as a starting point. Critically, this chapter elaborates on how to use MATLAB documentation to identify and specify the graphics features they want to modify, while providing many examples of such modification using different syntax.
Chapter 10 expands on the discussion of functions from Chapter 3, teaching students how to write their own functions. MATLAB has many types of functions and methods for accessing and storing functions, and each is discussed in turn, starting from the simplest and concluding with compositions of multiple functions. Knowing how to ensure MATLAB uses user-defined functions is essential, and knowledge of loading and saving data from Chapter 9 is reiterated and expanded upon to ensure MATLAB can use the functions students write. Returning again to Chapter 3, this chapter teaches students how to incorporate the flexibility of MATLAB function syntax into their own code.