The objective of cooperative game theory is to study ways to enforce and sustain cooperation among agents willing to cooperate. A central question in this field is how the benefits (or costs) of a joint effort can be divided among participants, taking into account individual and group incentives, as well as various fairness properties.
In this chapter, we define basic concepts and review some of the classical results in the cooperative game theory literature. Our focus is on games that are based on combinatorial optimization problems such as facility location. We define the notion of cost sharing, and explore various incentive and fairness properties cost-sharing methods are often expected to satisfy. We show how cost-sharing methods satisfying a certain property termed cross-monotonicity can be used to design mechanisms that are robust against collusion, and study the algorithmic question of designing cross-monotonic cost-sharing schemes for combinatorial optimization games. Interestingly, this problem is closely related to linear-programming-based techniques developed in the field of approximation algorithms. We explore this connection, and explain a general method for designing cross-monotonic cost-sharing schemes, as well as a technique for proving impossibility bounds on such schemes. We will also discuss an axiomatic approach to characterize two widely applicable solution concepts: the Shapley value for cooperative games, and the Nash bargaining solution for a more restricted framework for surplus sharing.
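The Shapley value mentioned above can be sketched directly from its average-marginal-contribution formula: player i's share is its marginal cost, averaged over all orders in which the players could arrive. The two-player cost function below is a made-up illustration, not an example from the chapter:

```python
from itertools import permutations
from math import factorial

# Shapley value of a cooperative cost game: player i's share is its
# marginal cost averaged over all orderings in which players can arrive.
def shapley(players, cost):
    """cost: maps each frozenset (coalition) to the cost of serving it."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        served = frozenset()
        for p in order:
            phi[p] += cost[served | {p}] - cost[served]
            served = served | {p}
    n_orders = factorial(len(players))
    return {p: total / n_orders for p, total in phi.items()}

# Hypothetical costs: serving a alone costs 3, b alone costs 4, both
# together cost 5; the Shapley shares sum to the grand-coalition cost.
cost = {frozenset(): 0, frozenset({"a"}): 3,
        frozenset({"b"}): 4, frozenset({"a", "b"}): 5}
print(shapley(["a", "b"], cost))  # a pays 2.0, b pays 3.0
```

Note that the shares always sum to the cost of the grand coalition (the "efficiency" axiom), one of the axioms used in the characterization discussed in the chapter.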
The flow of information or influence through a large social network can be thought of as unfolding with the dynamics of an epidemic: as individuals become aware of new ideas, technologies, fads, rumors, or gossip, they have the potential to pass them on to their friends and colleagues, causing the resulting behavior to cascade through the network.
We consider a collection of probabilistic and game-theoretic models for such phenomena proposed in the mathematical social sciences, as well as recent algorithmic work on the problem by computer scientists. Building on this, we discuss the implications of cascading behavior in a number of online settings, including word-of-mouth effects (also known as “viral marketing”) in the success of new products, and the influence of social networks in the growth of online communities.
Introduction
The process by which new ideas and new behaviors spread through a population has long been a fundamental question in the social sciences. New religious beliefs or political movements; shifts in society that lead to greater tolerance or greater polarization; the adoption of new technological, medical, or agricultural innovations; the sudden success of a new product; the rise to prominence of a celebrity or political candidate; the emergence of bubbles in financial markets and their subsequent implosion – these phenomena all share some important qualitative properties.
By
Ross Anderson, Computer Laboratory University of Cambridge,
Tyler Moore, Computer Laboratory University of Cambridge,
Shishir Nagaraja, Computer Laboratory University of Cambridge,
Andy Ozment, Computer Laboratory University of Cambridge
Many interesting and important new applications of game theory have been discovered over the past 7 years in the context of research into the economics of information security. Many systems fail not ultimately for technical reasons but because incentives are wrong. For example, the people who guard a system often are not the people who suffer the full costs of failure, and as a result they make less effort than would be socially optimal. Some aspects of information security are public goods, like clean air or water; externalities often decide which security products succeed in the marketplace; and some information risks are not insurable because they are correlated in ways that cause insurance markets to fail.
Deeper applications of game-theoretic ideas can be found in the games of incomplete information that occur when critical information, such as software quality or defender effort, is hidden from some principals. An interesting application lies in the analysis of distributed system architectures; it took several years of experimentation for designers of peer-to-peer systems to understand incentive issues that we can now analyze reasonably well. Evolutionary game theory has recently allowed us to tie together a number of ideas from network analysis and elsewhere to explain why basing peer-to-peer systems on rings is a bad idea, and why revolutionaries use cells instead. The economics of distributed systems looks set to be a very fruitful field of research.
Large computer networks such as the Internet are built, operated, and used by a large number of diverse and competitive entities. In light of these competing forces, it is surprising how efficient these networks are. An exciting challenge in the area of algorithmic game theory is to understand the success of these networks in game theoretic terms: what principles of interaction lead selfish participants to form such efficient networks?
In this chapter we present a number of network formation games. We focus on simple games that have been analyzed in terms of the efficiency loss that results from selfishness. We also highlight a fundamental technique used in analyzing inefficiency in many games: the potential function method.
Introduction
The design and operation of many large computer networks, such as the Internet, are carried out by a large number of independent service providers (Autonomous Systems), all of whom seek to selfishly optimize the quality and cost of their own operation. Game theory provides a natural framework for modeling such selfish interests and the networks they generate. These models in turn facilitate a quantitative study of the trade-off between efficiency and stability in network formation. In this chapter, we consider a range of simple network formation games that model distinct ways in which selfish agents might create and evaluate networks.
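The potential function method highlighted above can be illustrated on a tiny atomic congestion game (the two-link instance and its delay values are invented for illustration). Rosenthal's potential Φ(s) = Σ_e Σ_{k=1}^{load_e(s)} delay_e(k) strictly decreases with every improving move, so best-response dynamics must terminate at a pure Nash equilibrium:

```python
# Two players each pick one of two parallel links with load-dependent
# delays (hypothetical values); delays[link][k-1] is the per-player delay
# when k players use that link.
delays = {"top": [1, 4], "bottom": [2, 3]}

def potential(profile):
    # Rosenthal potential: sum over links of sum_{k=1}^{load} delay(k).
    phi = 0
    for link in delays:
        load = sum(1 for choice in profile if choice == link)
        phi += sum(delays[link][:load])
    return phi

def player_cost(profile, i):
    link = profile[i]
    load = sum(1 for choice in profile if choice == link)
    return delays[link][load - 1]

def best_response_dynamics(profile):
    improved = True
    while improved:
        improved = False
        for i in range(len(profile)):
            for link in delays:
                alt = list(profile)
                alt[i] = link
                if player_cost(alt, i) < player_cost(profile, i):
                    # Every improving move strictly lowers the potential.
                    assert potential(alt) < potential(profile)
                    profile = alt
                    improved = True
    return profile

print(best_response_dynamics(["top", "top"]))  # a pure Nash equilibrium
```

Because Φ drops by the same amount as the mover's cost, the dynamics cannot cycle; this is the essence of the potential function argument for the existence of pure equilibria in such games.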
By
Joan Feigenbaum, Computer Science Department Yale University,
Michael Schapira, School of Computer Science and Engineering The Hebrew University of Jerusalem,
Scott Shenker, EECS Department University of California, Berkeley
Most discussions of algorithmic mechanism design (AMD) presume the existence of a trusted center that implements the required economic mechanisms. This chapter focuses on mechanism-design problems that are inherently distributed, i.e., those in which such a trusted center cannot be used. Such problems require that the AMD paradigm be generalized to distributed algorithmic mechanism design (DAMD).
We begin this chapter by exploring the reasons that DAMD is needed and why it requires different notions of economic equilibrium and computational complexity than centralized AMD. We then consider two DAMD problems, namely distributed VCG computation and multicast cost sharing, that illustrate the concepts of ex-post Nash equilibrium and network complexity, respectively.
The archetypal example of a DAMD challenge is interdomain routing, which we treat in detail. We show that, under certain realistic and general assumptions, one can achieve incentive compatibility in a collusion-proof ex-post Nash equilibrium without payments, simply by executing the Border Gateway Protocol (BGP), which is the standard for interdomain routing in today's Internet.
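Before distributing it, the VCG mechanism itself is simple to state centrally; as a minimal (non-distributed) sketch, here it is for a single item, where it reduces to the second-price rule. The bidder names and values are made up for illustration:

```python
# VCG for a single-item auction: allocate to the highest bid and charge
# the winner the externality it imposes on the others, i.e., the highest
# losing bid (the second-price rule). Truthful bidding is then a dominant
# strategy.
def vcg_single_item(bids):
    """bids: dict mapping bidder -> reported value. Returns (winner, payment)."""
    winner = max(bids, key=bids.get)
    payment = max(value for bidder, value in bids.items() if bidder != winner)
    return winner, payment

print(vcg_single_item({"alice": 10, "bob": 7, "carol": 3}))  # ('alice', 7)
```

The DAMD question treated in the chapter is how to compute such outcomes and payments when no trusted center exists to run this calculation.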
Introduction
To motivate the material in this chapter, we begin with a review of why game theory is relevant to computer science. As noted in the Preface to this book, computer science has traditionally assumed the existence of a central planner who dictates the algorithms used by computational nodes. While most nodes are assumed to be obedient, some nodes may malfunction or be subverted by attackers; such byzantine nodes may act arbitrarily.
We consider some classical games and show how they can arise in the context of the Internet. We also introduce some of the basic solution concepts of game theory for studying such games, and some computational issues that arise for these concepts.
Games, Old and New
The Foreword talks about the usefulness of game theory in situations arising on the Internet. We start the present chapter by giving some classical games and showing how they can arise in the context of the Internet. At first, we appeal to the reader's intuitive notion of a “game”; this notion is formally defined in Section 1.2. For a more in-depth discussion of game theory we refer the readers to books on game theory such as Fudenberg and Tirole (1991), Mas-Colell, Whinston, and Green (1995), or Osborne and Rubinstein (1994).
The Prisoner's Dilemma
Game theory aims to model situations in which multiple participants interact or affect each other's outcomes. We start by describing what is perhaps the most well-known and well-studied game.
Example 1.1 (Prisoners' dilemma) Two prisoners are on trial for a crime and each one faces a choice of confessing to the crime or remaining silent. If they both remain silent, the authorities will not be able to prove charges against them and they will both serve a short prison term, say 2 years, for minor offenses.
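The example's remaining payoffs are not included in this excerpt; filling them in with standard illustrative values (a lone confessor serves 1 year while the silent partner serves 5, and mutual confession yields 4 years each), a short check confirms the game's well-known conclusion that confessing is a dominant strategy:

```python
# Prisoners' dilemma payoffs (years in prison; lower is better). Only the
# both-silent entry (2 years each) is given in the excerpt; the other
# entries are standard illustrative values, not taken from the text.
years = {
    ("silent", "silent"):   (2, 2),
    ("silent", "confess"):  (5, 1),
    ("confess", "silent"):  (1, 5),
    ("confess", "confess"): (4, 4),
}

def best_response(opponent_action):
    # Row player's best reply: minimize own prison term.
    return min(["silent", "confess"],
               key=lambda a: years[(a, opponent_action)][0])

# Confessing is the best reply to either opponent action, i.e., dominant.
assert best_response("silent") == "confess"
assert best_response("confess") == "confess"
```

The dilemma is that the resulting mutual-confession outcome (4 years each) is worse for both players than mutual silence (2 years each).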
By
Moshe Babaioff, School of Information University of California, Berkeley,
John Chuang, School of Information University of California, Berkeley,
Michal Feldman, School of Business Administration and the Center for the Study of Rationality Hebrew University of Jerusalem
Peer-to-peer (p2p) systems support many diverse applications, ranging from file-sharing and distributed computation to overlay routing in support of anonymity, resiliency, and scalable multimedia streaming. Yet, they all share the same basic premise of voluntary resource contribution by the participating peers. Thus, the proper design of incentives is essential to induce cooperative behavior by the peers. With the increasing prevalence of p2p systems, we have not only concrete evidence of strategic behavior in large-scale distributed systems but also a live laboratory to validate potential solutions with real user populations. In this chapter we consider theoretical and practical incentive mechanisms, based on reputation, barter, and currency, to facilitate peer cooperation, as well as mechanisms based on contracts to overcome the problem of hidden actions.
Introduction
The public release of Napster in June 1999 and Gnutella in March 2000 introduced the world to the disruptive power of peer-to-peer (p2p) networking. Tens of millions of individuals spread across the world could now self-organize and collaborate in the dissemination and sharing of music and other content, legal or otherwise. Yet, within 6 months of its public release, and long before individual users were threatened by copyright infringement lawsuits, the Gnutella network saw two-thirds of its users free-riding, i.e., downloading files from the network without uploading any in return.
Given the large scale, high turnover, and relative anonymity of p2p file-sharing networks, most p2p transactions are one-shot interactions between strangers who will never meet again.
Many situations involve repeatedly making decisions in an uncertain environment: for instance, deciding what route to drive to work each day, or repeated play of a game against an opponent with an unknown strategy. In this chapter we describe learning algorithms with strong guarantees for settings of this type, along with connections to game-theoretic equilibria when all players in a system are simultaneously adapting in such a manner.
We begin by presenting algorithms for repeated play of a matrix game with the guarantee that against any opponent, they will perform nearly as well as the best fixed action in hindsight (also called the problem of combining expert advice or minimizing external regret). In a zero-sum game, such algorithms are guaranteed to approach or exceed the minimax value of the game, and even provide a simple proof of the minimax theorem. We then turn to algorithms that minimize an even stronger form of regret, known as internal or swap regret. We present a general reduction showing how to convert any algorithm for minimizing external regret to one that minimizes this stronger form of regret as well. Internal regret is important because when all players in a game minimize this stronger type of regret, the empirical distribution of play is known to converge to correlated equilibrium.
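As a sketch of the external-regret guarantee described above, here is the classic multiplicative-weights (Hedge) update for losses in [0, 1]; the learning rate and loss sequence below are illustrative choices, not the chapter's:

```python
# Multiplicative-weights (Hedge) sketch for external-regret minimization:
# play each action with probability proportional to its weight, then decay
# each weight by a factor (1 - eta * loss). The cumulative expected loss
# stays close to that of the best fixed action in hindsight.
def hedge_total_loss(n_actions, loss_rounds, eta=0.1):
    weights = [1.0] * n_actions
    total = 0.0
    for losses in loss_rounds:              # per-action losses in [0, 1]
        s = sum(weights)
        probs = [w / s for w in weights]
        total += sum(p * l for p, l in zip(probs, losses))  # expected loss
        weights = [w * (1 - eta * l) for w, l in zip(weights, losses)]
    return total

# Action 0 is always lossless; over 200 rounds Hedge's cumulative expected
# loss stays bounded, i.e., its regret against action 0 is small.
rounds = [[0.0, 1.0]] * 200
print(hedge_total_loss(2, rounds))
```

The point of the guarantee is that it holds against an arbitrary, even adversarial, loss sequence, which is what makes it useful in the game-theoretic settings that follow.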
By
Yevgeniy Dodis, Department of Computer Science Courant Institute of Mathematical Sciences, New York University,
Tal Rabin, T. J. Watson Research Center IBM
The cryptographic and game-theoretic worlds intersect in that both deal with interactions between mutually distrustful parties that produce some end result. In the cryptographic setting, the multiparty interaction takes the shape of a set of parties communicating in order to evaluate a function on their inputs, with each party receiving some output of the computation at the end. In the game-theoretic setting, parties interact in a game that guarantees some payoff for the participants according to the joint actions of all the parties, while each party wishes to maximize its own payoff. In the past few years, the relationship between these two areas has been investigated in the hope of cross-fertilization and synergy. In this chapter we describe the two areas, their similarities and differences, and some of the new results stemming from their interaction.
The first and second sections describe the cryptographic and game-theoretic settings, respectively. The third section contrasts the two settings, and the last sections detail some of the existing results.
Cryptographic Notions and Settings
Cryptography is a vast subject requiring its own book. Therefore, in the following we will give only a high-level overview of the problem of Multi-Party Computation (MPC), ignoring most of the lower-level details and concentrating only on aspects relevant to Game Theory.
By
Eric Friedman, School of Operations Research and Information Engineering Cornell University,
Paul Resnick, School of Information University of Michigan,
Rahul Sami, School of Information University of Michigan
This chapter is an overview of the design and analysis of reputation systems for strategic users. We consider three specific strategic threats to reputation systems: the possibility of users with poor reputations starting afresh (whitewashing); lack of effort or honesty in providing feedback; and sybil attacks, in which users create phantom feedback from fake identities to manipulate their own reputation. In each case, we present a simple analytical model that captures the essence of the strategy, and describe approaches to solving the strategic problem in the context of this model. We conclude with a discussion of open questions in this research area.
Introduction: Why Are Reputation Systems Important?
One of the major benefits of the Internet is that it enables potentially beneficial interactions, both commercial and noncommercial, between people, organizations, or computers that do not share any other common context. The actual value of an interaction, however, depends heavily on the ability and reliability of the entities involved. For example, an online shopper may obtain better or lower-cost items from remote traders, but she may also be defrauded by a low-quality product for which redress (legal or otherwise) is difficult. If each entity's history of previous interactions is made visible to potential new interaction partners, several benefits ensue. First, a history may reveal information about an entity's ability, allowing others to make choices about whether to interact with that entity, and on what terms.
Three basic operations on labelled net structures are proposed: synchronised union, synchronised intersection, and synchronised difference. The first is a version of the known parallel composition with synchronised actions identically labelled. The operations work analogously to the ordinary union, intersection, and difference on sets. It is shown that the universe of net structures with these operations is a distributive lattice and, if infinite pre/post sets of transitions are allowed, even a Boolean algebra. As a consequence, some representation theorems for this algebra are stated. The primitive objects are atomic net structures containing one transition with at most one pre-place or post-place (but not both). A simple example of a production system constructed by means of the operations (and its transformations) is given. Some remarks on behavioural properties of compound nets are made, in particular on how certain construction strategies may help to infer liveness. The latter issue is limited to the semantics of place/transition nets without weights on arrows and with unbounded capacity of places, and is not extensively investigated, since the main objective is a calculus of net structures.
J. Hromkovic et al. have given an elegant method to convert a regular expression of size n into an ε-free nondeterministic finite automaton having O(n) states and O(n log²(n)) transitions. This method has been implemented efficiently, in O(n log²(n)) time, by C. Hagenah and A. Muscholl. In this paper we extend this method to weighted regular expressions and show that it can still be achieved in O(n log²(n)) time.
We investigate the computational structure of the biological kinship assignment problem by abstracting away all biological details that are irrelevant to computation. The computational structure depends on phenotype space, which we formally define. We illustrate this approach by exhibiting an approximation algorithm for kinship assignment in the case of the Simpson index, with an a priori error bound and running time that is polynomial in the bit size of the population but exponential in the size of phenotype space. This algorithm is based on a relaxed version of the assignment problem, in which fractional assignments (over the reals) are permitted.
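For reference, the Simpson index of a population can be computed directly; this sketch uses the with-replacement form Σ pᵢ², where pᵢ is the frequency of phenotype i (the paper's exact variant may differ):

```python
from collections import Counter

# Simpson index of a population of phenotypes: the probability that two
# individuals drawn uniformly at random (with replacement) share the same
# phenotype. A homogeneous population scores 1.0.
def simpson_index(population):
    counts = Counter(population)
    n = len(population)
    return sum((c / n) ** 2 for c in counts.values())

print(simpson_index(["a", "a", "b", "b"]))  # 0.5
```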
Let G be a graph with no three independent vertices. How many edges of G can be packed with edge-disjoint copies of K_k? More specifically, let f_k(n, m) be the largest integer t such that, for any graph with n vertices, m edges, and independence number 2, at least t edges can be packed with edge-disjoint copies of K_k. Turán's theorem together with Wilson's theorem assert that if . A conjecture of Erdős states that for all plausible m. For any ε > 0, this conjecture was open even if . Generally, f_k(n, m) may be significantly smaller than . Indeed, for k = 7 it is easy to show that for m ≈ 0.3n². Nevertheless, we prove the following result. For every k ≥ 3 there exists γ > 0 such that if then . In the special case k = 3 we obtain the reasonable bound γ ≥ 10⁻⁴. In particular, the above conjecture of Erdős holds whenever G has fewer than 0.2501n² edges.
A family of subsets of an n-set is 2-cancellative if, for every four-tuple {A, B, C, D} of its members, A ∪ B ∪ C = A ∪ B ∪ D implies C = D. This generalizes the concept of cancellative set families, defined by the property that A ∪ B ≠ A ∪ C for A, B, C all different. The asymptotics of the maximum size of cancellative families of subsets of an n-set is known (Tolhuizen [7]). We provide a new upper bound on the size of 2-cancellative families, improving the previous bound of 2^{0.458n} to 2^{0.42n}.
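The 2-cancellative property can be checked by brute force on small families; this sketch reads the four-tuple {A, B, C, D} as four distinct members, which is an interpretive assumption about the definition:

```python
from itertools import permutations

# Brute-force check of the 2-cancellative property: over all four-tuples
# of distinct members A, B, C, D, the unions A ∪ B ∪ C and A ∪ B ∪ D must
# differ (since C ≠ D, any equality is a violation).
def is_two_cancellative(family):
    sets = [frozenset(s) for s in family]
    for A, B, C, D in permutations(sets, 4):
        if A | B | C == A | B | D:
            return False
    return True

# Pairwise-disjoint singletons pass; adding their full union breaks it,
# because {1} ∪ {2} ∪ {3} = {1} ∪ {2} ∪ {1, 2, 3}.
print(is_two_cancellative([{1}, {2}, {3}, {4}]))        # True
print(is_two_cancellative([{1}, {2}, {3}, {1, 2, 3}]))  # False
```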
We show that a random graph studied by Ioffe and Levit is an example of an inhomogeneous random graph of the type studied by Bollobás, Janson and Riordan, which enables us to give a new, and perhaps more revealing, proof of their result on a phase transition.
Let d = (d₁, d₂, …, dₙ), with 1 ≤ d₁ ≤ d₂ ≤ ⋯ ≤ dₙ, be a non-decreasing sequence of n positive integers whose sum is even. Let 𝒢_{n,d} denote the set of graphs with vertex set [n] = {1, 2, …, n} in which the degree of vertex i is dᵢ, and let G_{n,d} be chosen uniformly at random from 𝒢_{n,d}. Let d̄ = (d₁ + d₂ + ⋯ + dₙ)/n be the average degree. We give a condition on d under which we can show that w.h.p. the chromatic number of G_{n,d} is Θ(d̄/ln d̄). This condition is satisfied by degree sequences with exponential tails as well as those with power-law tails.
In this paper we prove polynomial versions of the Carlson–Simpson theorem and the Graham–Rothschild theorem on parameter sets. To do so we prove a useful extension of the polynomial Hales–Jewett theorem.