In this paper we introduce some Christoffel–Darboux type identities for independence polynomials. As an application, we give a new proof of a theorem of Chudnovsky and Seymour, which states that the independence polynomial of a claw-free graph has only real roots. Another application is related to a conjecture of Merrifield and Simmons.
We consider distance colourings in graphs of maximum degree at most d and how excluding one fixed cycle of length ℓ affects the number of colours required as d → ∞. For vertex-colouring and t ⩾ 1, if any two distinct vertices connected by a path of at most t edges are required to be coloured differently, then a reduction by a logarithmic (in d) factor against the trivial bound O(d^t) can be obtained by excluding an odd cycle length ℓ ⩾ 3t if t is odd or by excluding an even cycle length ℓ ⩾ 2t + 2. For edge-colouring and t ⩾ 2, if any two distinct edges connected by a path of fewer than t edges are required to be coloured differently, then excluding an even cycle length ℓ ⩾ 2t is sufficient for a logarithmic factor reduction. For t ⩾ 2, neither of the above statements is possible for other parity combinations of ℓ and t. These results can be considered extensions of results due to Johansson (1996) and Mahdian (2000), and are related to open problems of Alon and Mohar (2002) and Kaiser and Kang (2014).
Keller and Kindler recently established a quantitative version of the famous Benjamini–Kalai–Schramm theorem on the noise sensitivity of Boolean functions. Their result was extended to the continuous Gaussian setting by Keller, Mossel and Sen by means of a Central Limit Theorem argument. In this work we present a unified approach to these results, in both discrete and continuous settings. The proof relies on semigroup decompositions together with a suitable cut-off argument, allowing for the efficient use of the classical hypercontractivity tool behind these results. It extends to further models of interest such as families of log-concave measures and Cayley and Schreier graphs. In particular we obtain a quantitative version of the Benjamini–Kalai–Schramm theorem for the slices of the Boolean cube.
Two graphs G1 and G2 on n vertices are said to pack if there exist injective mappings of their vertex sets into [n] such that the images of their edge sets are disjoint. A longstanding conjecture due to Bollobás and Eldridge and, independently, Catlin, asserts that if (Δ(G1) + 1)(Δ(G2) + 1) ⩽ n + 1, then G1 and G2 pack. We consider the validity of this assertion under the additional assumption that G1 or G2 has bounded codegree. In particular, we prove for all t ⩾ 2 that if G1 contains no copy of the complete bipartite graph K2,t and Δ(G1) > 17t · Δ(G2), then (Δ(G1) + 1)(Δ(G2) + 1) ⩽ n + 1 implies that G1 and G2 pack. We also provide a mild improvement if moreover G2 contains no copy of the complete tripartite graph K1,1,s, s ⩾ 1.
It is known that w.h.p. the hitting time τ2σ for the random graph process to have minimum degree 2σ coincides with the hitting time for σ edge-disjoint Hamilton cycles [4, 9, 13]. In this paper we prove an online version of this property. We show that, for a fixed integer σ ⩾ 2, if random edges of Kn are presented one by one then w.h.p. it is possible to colour the edges online with σ colours so that at time τ2σ each colour class is Hamiltonian.
We construct minor-closed addable families of graphs that are subcritical and contain all planar graphs. This contradicts (one direction of) a well-known conjecture of Noy.
The Tutte polynomial of a graph is a two-variable polynomial whose zeros and evaluations encode many interesting properties of the graph. In this article we investigate the real zeros of the Tutte polynomials of graphs, and show that they form a dense subset of certain regions of the plane. This is the first density result for the real zeros of the Tutte polynomial in a region of positive volume. Our result almost confirms a conjecture of Jackson and Sokal except for one region which is related to an open problem on flow polynomials.
Let k ⩾ 3 be an integer, hk(G) be the number of vertices of degree at least 2k in a graph G, and ℓk(G) be the number of vertices of degree at most 2k − 2 in G. Dirac and Erdős proved in 1963 that if hk(G) − ℓk(G) ⩾ k^2 + 2k − 4, then G contains k vertex-disjoint cycles. For each k ⩾ 2, they also presented an infinite sequence of graphs Gk(n) with hk(Gk(n)) − ℓk(Gk(n)) = 2k − 1 such that Gk(n) does not have k disjoint cycles. Recently, the authors proved that, for k ⩾ 2, a bound of 3k is sufficient to guarantee the existence of k disjoint cycles, and presented for every k a graph G0(k) with hk(G0(k)) − ℓk(G0(k)) = 3k − 1 and no k disjoint cycles. The goal of this paper is to refine and sharpen this result. We show that the Dirac–Erdős construction is optimal in the sense that for every k ⩾ 2, there are only finitely many graphs G with hk(G) − ℓk(G) ⩾ 2k but no k disjoint cycles. In particular, every graph G with |V(G)| ⩾ 19k and hk(G) − ℓk(G) ⩾ 2k contains k disjoint cycles.
A 1993 result of Alon and Füredi gives a sharp upper bound on the number of zeros of a multivariate polynomial over an integral domain in a finite grid, in terms of the degree of the polynomial. This result was recently generalized to polynomials over an arbitrary commutative ring, assuming a certain ‘Condition (D)’ on the grid which holds vacuously when the ring is a domain. In the first half of this paper we give a further generalized Alon–Füredi theorem which provides a sharp upper bound when the degrees of the polynomial in each variable are also taken into account. This yields in particular a new proof of Alon–Füredi. We then discuss the relationship between Alon–Füredi and results of DeMillo–Lipton, Schwartz and Zippel. A direct coding-theoretic interpretation of the Alon–Füredi theorem and its generalization in terms of Reed–Muller-type affine variety codes is shown, which gives us the minimum Hamming distance of these codes. Then we apply the Alon–Füredi theorem to quickly recover – and sometimes strengthen – old and new results in finite geometry, including the Jamison–Brouwer–Schrijver bound on affine blocking sets. We end with a discussion of multiplicity enhancements.
Given a pair of graphs G and H, the Ramsey number R(G, H) is the smallest N such that every red–blue colouring of the edges of the complete graph KN contains a red copy of G or a blue copy of H. If a graph G is connected, it is well known and easy to show that R(G, H) ≥ (|G|−1)(χ(H)−1)+σ(H), where χ(H) is the chromatic number of H and σ(H) is the size of the smallest colour class in a χ(H)-colouring of H. A graph G is called H-good if R(G, H) = (|G|−1)(χ(H)−1)+σ(H). The notion of Ramsey goodness was introduced by Burr and Erdős in 1983 and has been extensively studied since then.
In this paper we show that if n ≥ Ω(|H| log^4 |H|) then every n-vertex bounded degree tree T is H-good. The dependency between n and |H| is tight up to log factors. This substantially improves a result of Erdős, Faudree, Rousseau, and Schelp from 1985, who proved that n-vertex bounded degree trees are H-good when n ≥ Ω(|H|^4).
Part I (Microeconomic Fundamentals) of this book provides a succinct introduction to methods and models from microeconomics. In this chapter we introduce and define basic concepts and terminology from game theory that will be used throughout the rest of the book. Game theory is the mathematical study of interactions among independent self-interested agents or players in a game. We limit ourselves to non-cooperative game theory and refer the interested reader to more extensive treatments such as Osborne (2004) or Shoham and Leyton-Brown (2009), whose notation we share. Non-cooperative game theory focuses on situations where self-interested agents have conflicting goals. This chapter will introduce different types of games and central solution concepts, i.e., methods to predict the outcomes of a game played by rational agents. In some parts of the book we will also draw on cooperative game theory, which focuses on predicting which coalitions will form among a group of players, the joint actions that groups take, and the resulting collective payoffs. A cooperative game is a game with competition between groups of players, or coalitions. However, we introduce the respective concepts later where needed, in order to keep the chapter concise.
Normal-Form Games
Let us start with a basic type of game description. In normal-form games, players’ actions are simultaneous. In other types of games, called extensive-form games, actions take place sequentially.
Definition 2.1.1 (Normal-form games) A finite normal-form game with n players can be described as a tuple (I, A, u).
• I is a finite set of n players indexed by i.
• A = A1 × … × An, where Ai is a finite set of actions available to player i. A vector a = (a1, …, an) ∈ A is referred to as an action profile.
• u = (u1, …, un), where ui : A → R is a payoff or utility function for player i.
In definition 2.1.1, finite means that there is a finite set of players and each has a finite set of strategies. Typically, a utility function maps the set of outcomes of a game to a real-valued utility or payoff. Here, the actions possible for an agent also describe the outcomes, which is why we use A in the description of a normal-form game.
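To make definition 2.1.1 concrete, the following sketch encodes a finite two-player normal-form game as the tuple (I, A, u) in Python. The game and its payoff numbers (a standard Prisoner's Dilemma) are purely illustrative and are not taken from the text.

```python
# Encoding of a finite normal-form game (I, A, u); payoff numbers are the
# usual illustrative values of the Prisoner's Dilemma.
from itertools import product

I = [0, 1]                                 # set of players
A = [("cooperate", "defect"),              # A1: actions of player 0
     ("cooperate", "defect")]              # A2: actions of player 1

# u[i][a] is player i's payoff for the action profile a = (a1, a2).
u = [
    {("cooperate", "cooperate"): -1, ("cooperate", "defect"): -3,
     ("defect", "cooperate"): 0, ("defect", "defect"): -2},       # u1
    {("cooperate", "cooperate"): -1, ("cooperate", "defect"): 0,
     ("defect", "cooperate"): -3, ("defect", "defect"): -2},      # u2
]

# Enumerate all action profiles and print the payoff vector of each.
for a in product(*A):
    print(a, [u[i][a] for i in I])
```

Running the loop prints each action profile together with the corresponding payoff vector (u1(a), u2(a)).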
Combinatorial clock auctions (CCAs) have been adopted in spectrum auctions (Bichler and Goeree, 2017) and in other applications for their simple price-update rules. There are single-stage and two-stage versions, which have been used in spectrum auctions worldwide. We devote a chapter to CCAs because of their practical relevance, as we do for the SMRA in chapter 6. We will discuss both versions and give a practical example of a market mechanism, including a discussion of activity rules, which are sometimes ignored in theoretical treatments.
The Single-Stage Combinatorial Clock Auction
The single-stage combinatorial clock auction (SCCA) is easy to implement and has therefore found application in spectrum auctions and in procurement. In this section we provide a more detailed discussion of the SCCA, as introduced by Porter et al. (2003), and give an algorithmic description in algorithm 3.
Auction Process
In this type of auction, prices for all items are initially zero or at a reserve price r. In every round bidders identify a package of items, or several packages, which they offer to buy at current prices. If two or more bidders demand an item then its price is increased by a fixed bid increment in the next round. This process iterates. The bids which correspond to the current ask prices are called standing bids, and a bidder is standing if she has at least one standing bid. In a simple scenario in which supply equals demand, the auction terminates and the items are allocated according to the standing bids.
If at some point there is an excess supply of at least one item and no item is over-demanded, the auctioneer determines the winners by considering all submitted bids and finding an allocation of items that maximizes his revenue. If the solution displaces a standing bidder, the prices of items in the corresponding standing bids rise by the bid increment and the auction continues. The auction ends when no prices are increased, and bidders finally pay their bid prices for the winning packages. We will analyze a version that uses an XOR bidding language.
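The following is a minimal sketch of the price-update loop described above, under simplifying assumptions: a single unit of each item, a fixed bid increment, and a hypothetical function demand_at(prices, i) that returns the single package bidder i demands at the current prices. It covers only the clock phase; the winner determination that the auctioneer performs over all submitted bids once no item is over-demanded is omitted.

```python
# Sketch of the SCCA clock phase: one unit of each item, a fixed increment,
# and a hypothetical demand oracle demand_at(prices, i) returning the package
# bidder i asks for at the current prices (an empty set if she demands nothing).
def clock_phase(items, bidders, demand_at, increment=1.0, reserve=0.0):
    prices = {j: reserve for j in items}          # start at zero or reserve price
    while True:
        # Every bidder names the package she offers to buy at current prices.
        demands = {i: demand_at(prices, i) for i in bidders}
        # Count how many bidders demand each item.
        demand_count = {j: sum(j in pkg for pkg in demands.values()) for j in items}
        over_demanded = [j for j in items if demand_count[j] > 1]
        if not over_demanded:
            # No item is over-demanded: the clock phase stops here; the
            # auctioneer would now determine winners from all submitted bids.
            return prices, demands
        # Raise the price of every over-demanded item by the bid increment.
        for j in over_demanded:
            prices[j] += increment
```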
The revelation principle (see theorem 3.3.1) suggests that if a social-choice function can be implemented by an arbitrary indirect mechanism (e.g., an open auction) then the same function can be implemented by a truthful direct revelation mechanism. Analyzing direct mechanisms is often more convenient, and the revelation principle allows one to argue that this restriction is without loss of generality. Yet there are cases where one prefers to implement and model the indirect version of a mechanism rather than its direct counterpart.
One argument used in the literature refers to interdependent valuations. The linkage principle implies that, with interdependent bidder valuations, open auctions generally lead to higher expected revenue than sealed-bid auctions. Milgrom and Weber (1982) wrote: “One explanation of this inequality is that when bidders are uncertain about their valuations, they can acquire useful information by scrutinizing the bidding behavior of their competitors during the course of an (ascending) auction. That extra information weakens the winner's curse and leads to more aggressive bidding in the (ascending) auction, which accounts for the higher expected price.”
Another argument for open auctions is that the winners of an ascending auction do not need to reveal their true valuation to the auctioneer, only that it is above the second-highest bid. With respect to multi-object auctions, Levin and Skrzypacz (2017) write that economists think of open auctions as having an advantage because bidders can discover gradually how their demands fit together. Most spectrum auctions are open auctions, largely for these reasons.
As a result, much recent research has focused on ascending multi-object auctions, i.e., generalizations of the single-object English auction where bidders can outbid each other iteratively. The models can be considered as algorithms, and we will try to understand the game-theoretical properties of these algorithms. In particular, we want to understand if there is a generalization of the English auction to ascending combinatorial auctions that also has a dominant strategy or ex post equilibrium. Unfortunately, the answer is negative for general valuations. However, there are positive results for restricted preferences.
A strong restriction of preferences is found in assignment markets, where bidders bid on multiple items but want to win at most one. This restriction allows us to formulate the allocation problem as an assignment problem, and there exists an ascending auction where truthful bidding is an ex post equilibrium.
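As a minimal illustration of this restriction, the allocation problem of an assignment market can be written as a classical assignment problem and solved directly. The valuation matrix below is purely illustrative, and the sketch computes only the efficient assignment, not the ascending price path or the ex post equilibrium prices of the auction.

```python
# Allocation in an assignment market: each bidder wants at most one item, so
# an efficient allocation is a maximum-weight assignment. Valuations below
# are illustrative.
import numpy as np
from scipy.optimize import linear_sum_assignment

v = np.array([[8, 6, 3],      # v[i][j] = bidder i's value for item j
              [5, 9, 4],
              [2, 7, 6]])

# linear_sum_assignment minimizes total cost, so negate values to maximize welfare.
rows, cols = linear_sum_assignment(-v)
print(list(zip(rows, cols)), "welfare =", v[rows, cols].sum())
```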
Mechanism design studies the construction of economic mechanisms in the presence of rational agents with private information. It is sometimes called reverse game theory, as mechanism designers search for mechanisms which satisfy game-theoretical solution concepts and achieve good outcomes. For example, an auctioneer might want to design an auction mechanism which maximizes social welfare of the participants and exhibits dominant strategies for bidders to reveal their true valuations. We mostly discuss market mechanisms in this book, but mechanism design is not restricted to markets and the basic principles can be applied to various types of interactive decision making.
Mechanism design has relations to social choice theory, which is a field in economics focusing on the aggregation of individual preferences to reach a collective decision or social welfare. Social choice theory depends upon the ability to aggregate individual preferences into a combined social welfare function. Therefore, individual preferences are modeled in terms of a utility function. The ability to sum the utility functions of different individuals, as is done in auction theory in chapter 4, depends on the utility functions being comparable with each other; informally, individuals’ preferences must be measured with the same yardstick. The mechanism design literature on auctions typically assumes cardinal utility functions and interpersonal utility comparisons, as in the early social choice literature such as Bergson (1938) and Samuelson (1948). Note that Arrow (1950) and much of the subsequent literature assumes only ordinal preferences and rejects the idea of interpersonal utility comparisons. This means that utility cannot be measured and compared across individuals. The auction design literature as discussed in this book is an exception and assumes quasi-linear utility functions, where bidders have cardinal values for an allocation and maximize their payoff on the basis of prices in the market. Also, utility is comparable across agents. Cardinal preferences allow agents to express intensities, which is not possible with ordinal preferences only. For example, an agent might much prefer a diamond to a rock, but this information about intensity will be ignored if only ordinal preferences (diamond ≻ rock) are expressed.
There has been substantial progress in market design in the past few decades. The advent of the Internet and the availability of fast algorithms to solve computationally hard allocation problems have considerably extended the design space and have led to new types of markets which would have been impossible only 20 years ago. The theoretical models discussed in this book help one to understand when markets are efficient and when they are not. Many recent models for multi-object markets draw on the theory of linear programming and combinatorial optimization, at the same time adding to this theory in a fundamental way. This is one reason why market design has lately attracted much attention in computer science and operations research.
Most models of markets in this book are normative and describe efficient markets with rational decision makers having independent and private values with quasi-linear utility functions. In the natural sciences, models are evaluated by their predictive accuracy in the laboratory and in the field. Unfortunately, the predictive accuracy of some auction models is low (see for example section 4.8). For example, Bayesian Nash equilibrium strategies require strong assumptions about a common-prior type distribution. Even if this distributional information is provided by a laboratory experiment, bidders do not always maximize payoff. Loss aversion, risk aversion, spite, or ex post regret can all affect the decision making of individuals. It is even less likely that bidders would follow a Bayesian Nash equilibrium strategy in the field, where the prior information of bidders is typically asymmetric and differs between bidders.
Auction designs satisfying stronger solution concepts such as dominant strategies or ex post equilibrium strategies are promising, because they do not require a common-prior assumption. However, such solution concepts are quite limiting in the design. In the standard independent-private-values model, the VCG mechanism is unique and, even if we allow the social welfare to be approximated, the scope of strategy-proof approximation mechanisms is narrow. Also, the VCG mechanism is strategy-proof only for independent and private values: Jehiel et al. (2006) showed the impossibility of ex post implementation with interdependent values in multi-parameter settings. What is more, often the characteristics of market participants and the design goals are quite different from those assumed in the academic literature.
Many real-world markets require solving computationally hard allocation problems, and realistic problem sizes are often such that they cannot be solved optimally. For example, there have been spectrum auctions using the combinatorial clock auction with 100 licenses. The allocation problems are modeled as integer programs intended to produce an exact solution (see section 7.2), but this cannot always be guaranteed without restrictions. In fact, the number of bids in combinatorial spectrum auctions is typically restricted to a few hundred. Section 7.5 provides examples of allocation problems in other domains, sometimes using compact bid languages, which also cannot be solved optimally for larger instances. Sometimes it is acceptable to settle for suboptimal solutions. Often, heuristics are used to solve computationally hard problems. However, with heuristics we do not have worst-case bounds on the quality of the solution. In contrast, approximation algorithms provide worst-case bounds on the solution quality and a polynomial runtime. For this discussion, we expect the reader to have some basic familiarity with approximation algorithms; we provide an overview of the field and the necessary terminology and concepts in appendix B.4.
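As a toy illustration of such an allocation problem, the following sketch solves a tiny winner-determination instance with XOR package bids by brute-force enumeration. The bids are illustrative, and the exponential search over bid combinations is exactly what integer programming formulations, heuristics, and approximation algorithms try to tame on realistic instances.

```python
# Toy winner-determination problem with XOR package bids, solved by brute force.
# Each bid is (bidder, package, price); numbers are illustrative.
from itertools import combinations

bids = [("A", {"1", "2"}, 10), ("B", {"2", "3"}, 8),
        ("C", {"1"}, 5), ("C", {"3"}, 6)]

best_value, best_bids = 0, []
for r in range(1, len(bids) + 1):
    for subset in combinations(bids, r):
        bidders = [b for b, _, _ in subset]
        items = [item for _, pkg, _ in subset for item in pkg]
        # Feasible if each bidder wins at most one bid (XOR) and each item is sold once.
        if len(set(bidders)) == len(bidders) and len(set(items)) == len(items):
            value = sum(price for _, _, price in subset)
            if value > best_value:
                best_value, best_bids = value, list(subset)

print("revenue:", best_value, "winning bids:", best_bids)
```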
In this section, we analyze approximation mechanisms which provide good approximation ratios, which run in polynomial time, and which are also incentive-compatible. This leads to new notions of incentive compatibility beyond strategy-proofness (i.e., dominant-strategy incentive compatibility) and new types of mechanism. We often talk about truthfulness instead of incentive compatibility, both terms referring to different solution concepts setting incentives to reveal preferences truthfully.
One can think of the VCG mechanism as a black-box transformation from exact algorithms solving the allocation problem to a strategy-proof mechanism. A central question in this chapter is whether there is a similar black-box transformation, from approximation algorithms to truthful approximation mechanisms, which maintains the approximation ratios of non-truthful approximation algorithms. We show that black-box transformations for quite general allocation problems are possible with strong forms of truthfulness, when we use randomization.
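The following is a minimal sketch of this black-box view, assuming a hypothetical exact routine allocate(bidders, valuations) that returns a welfare-maximizing allocation (a dictionary mapping every bidder to a possibly empty bundle) together with its total value, and valuations given as callables mapping bundles to values. It is a sketch of the Clarke payment rule under these assumptions, not an implementation from the text.

```python
# Sketch of VCG built on top of a black-box exact allocation routine.
# `allocate(bidders, valuations)` is assumed to return a welfare-maximizing
# allocation (dict: bidder -> possibly empty bundle) and its total value;
# `valuations[j]` is assumed to be a function mapping a bundle to bidder j's value.
def vcg(bidders, valuations, allocate):
    allocation, _ = allocate(bidders, valuations)
    payments = {}
    for i in bidders:
        others = [j for j in bidders if j != i]
        # Welfare the other bidders could obtain if bidder i were absent ...
        _, welfare_without_i = allocate(others, valuations)
        # ... minus the welfare they actually obtain in the chosen allocation.
        welfare_of_others = sum(valuations[j](allocation[j]) for j in others)
        payments[i] = welfare_without_i - welfare_of_others
    return allocation, payments
```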
Approximation algorithms are typically only available for idealized models such as knapsack problems or bin-packing problems. In practice, many allocation problems are quite messy and have many complicating side constraints that make it hard to find an approximation algorithm with a good performance guarantee.