The cover time of a graph is a celebrated example of a parameter that is easy to approximate using a randomized algorithm, but for which no constant-factor deterministic polynomial-time approximation is known. A breakthrough due to Kahn, Kim, Lovász and Vu [25] yielded a (log log n)² polynomial-time approximation. We refine the upper bound of [25], and show that the resulting bound is sharp and explicitly computable in random graphs. Cooper and Frieze showed that the cover time of the largest component of the Erdős–Rényi random graph G(n, c/n) in the supercritical regime with c > 1 fixed is asymptotic to ϕ(c)n log²n, where ϕ(c) → 1 as c ↓ 1. However, our new bound implies that the cover time for the critical Erdős–Rényi random graph G(n, 1/n) has order n, and shows how the cover time evolves from the critical window to the supercritical phase. Our general estimate also yields the order of the cover time for a variety of other concrete graphs, including critical percolation clusters on the Hamming hypercube {0, 1}ⁿ, on high-girth expanders, and on tori ℤᵈₙ for fixed large d. This approach also gives a simpler proof of a result of Aldous [2] that the cover time of a uniform labelled tree on k vertices is of order k^{3/2}. For the graphs we consider, our results show that the blanket time, introduced by Winkler and Zuckerman [45], is within a constant factor of the cover time. Finally, we prove that for any connected graph, adding an edge can increase the cover time by at most a factor of 4.
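The randomized approximation alluded to in the first sentence is plain simulation: run the walk until every vertex has been visited and average over independent trials. A minimal sketch (Python; the function name is ours), illustrated on the n-cycle, whose expected cover time is n(n−1)/2:

```python
import random

def estimate_cover_time(adj, start=0, trials=200):
    """Monte Carlo estimate of the expected cover time of a connected graph:
    average, over independent runs, the number of steps a simple random walk
    needs to visit every vertex."""
    n, total = len(adj), 0
    for _ in range(trials):
        visited, v, steps = {start}, start, 0
        while len(visited) < n:
            v = random.choice(adj[v])   # one step of the simple random walk
            visited.add(v)
            steps += 1
        total += steps
    return total / trials

# The n-cycle has expected cover time n(n-1)/2; for n = 10 that is 45.
cycle = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
print(estimate_cover_time(cycle))
```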
We introduce a discrete random process which we call the passenger model, and show that it is connected to a certain random model of the assignment problem and in particular to the so-called Buck–Chan–Robbins urn process. We propose a conjecture on the distribution of the location of the minimum-cost assignment in a cost matrix with zeros at specified positions and the remaining entries independent and exponentially distributed. The conjecture is consistent with earlier results on the participation probability of an individual matrix entry. We also use the passenger model to verify a conjecture of V. Dotsenko on the assignment problem.
An r-cut of the complete r-uniform hypergraph Kʳₙ is obtained by partitioning its vertex set into r parts and taking all edges that meet every part in exactly one vertex. In other words, it is the edge set of a spanning complete r-partite subhypergraph of Kʳₙ. An r-cut cover is a collection of r-cuts such that each edge of Kʳₙ is in at least one of the cuts. While in the graph case r = 2 any 2-cut cover on average covers each edge at least 2 − o(1) times, when r is odd we exhibit an r-cut cover in which each edge is covered exactly once. When r is even no such decomposition can exist, but we can bound the average number of times an edge is covered in an r-cut cover from above and below. The upper-bound construction can be reformulated in terms of a natural polyhedral problem or as a probability problem, and we solve the latter asymptotically.
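The definition of an r-cut translates directly into code: a cut is just one choice of vertex from each part of a partition. A small sketch (Python; names ours), checking that the three bipartitions of a 4-element vertex set give a 2-cut cover of the complete graph:

```python
from itertools import combinations, product

def r_cut(parts):
    """The r-cut of a partition: all r-sets that meet every part in exactly
    one vertex, i.e. every way of picking one vertex from each part."""
    return {frozenset(choice) for choice in product(*parts)}

def is_cut_cover(cuts, vertices, r):
    """Does the collection of r-cuts cover every edge of the complete
    r-uniform hypergraph on the given vertices?"""
    all_edges = {frozenset(e) for e in combinations(vertices, r)}
    return set().union(*cuts) == all_edges

# The three nontrivial bipartitions of {0,1,2,3} cover all six edges of K_4.
partitions = ([[0, 1], [2, 3]], [[0, 2], [1, 3]], [[0, 3], [1, 2]])
print(is_cut_cover([r_cut(p) for p in partitions], range(4), 2))   # True
```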
Szemerédi's Regularity Lemma is an important tool for analysing the structure of dense graphs. There are versions of the Regularity Lemma for sparse graphs, but these only apply when the graph satisfies some local density condition. In this paper, we prove a sparse Regularity Lemma that holds for all graphs. More generally, we give a Regularity Lemma that holds for arbitrary real matrices.
In this paper we study planar first-passage percolation (FPP) models on random Delaunay triangulations. In [14], Vahidi-Asl and Wierman showed, using sub-additivity theory, that the rescaled first-passage time converges to a finite, non-negative constant μ. We give a sufficient condition ensuring that μ > 0 and derive some upper bounds for fluctuations. Our proofs are based on percolation ideas and on the method of martingales with bounded increments.
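The paper works on random Delaunay triangulations; as a toy illustration of the quantities involved (our simplification, not the model of [14]), here is the first-passage time across an n × n grid with i.i.d. Exp(1) edge weights, computed by Dijkstra's algorithm. The ratio T(n)/n should stabilise near the time constant μ as n grows:

```python
import heapq, random

def first_passage_time(n):
    """First-passage time from (0,0) to (n-1,n-1) on the n x n grid graph
    with i.i.d. Exp(1) edge weights, via Dijkstra with lazily sampled weights."""
    weight = {}
    def w(u, v):
        e = (min(u, v), max(u, v))          # canonical key for the undirected edge
        if e not in weight:
            weight[e] = random.expovariate(1.0)
        return weight[e]
    dist, heap, done = {(0, 0): 0.0}, [(0.0, (0, 0))], set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == (n - 1, n - 1):
            return d
        x, y = u
        for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= v[0] < n and 0 <= v[1] < n and d + w(u, v) < dist.get(v, float('inf')):
                dist[v] = d + w(u, v)
                heapq.heappush(heap, (dist[v], v))
    return float('inf')

print(first_passage_time(50) / 50)
```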
In recent years game theory has had a substantial impact on computer science, especially on Internet- and e-commerce-related issues. Algorithmic Game Theory, first published in 2007, develops the central ideas and results of this exciting area in a clear and succinct manner. More than 40 of the top researchers in this field have written chapters that go from the foundations to the state of the art. Basic chapters on algorithmic methods for equilibria, mechanism design and combinatorial auctions are followed by chapters on important game theory applications such as incentives and pricing, cost sharing, information markets and cryptography and security. This definitive work will set the tone of research for the next few years and beyond. Students, researchers, and practitioners alike need to learn more about these fascinating theoretical developments and their widespread practical application.
Let P be a set of n points in ℝ³, and let k ≤ n be an integer. A sphere σ is k-rich with respect to P if |σ ∩ P| ≥ k, and is η-non-degenerate, for a fixed fraction 0 < η < 1, if no circle γ ⊂ σ contains more than η|σ ∩ P| points of P.
We improve the previous bound given in [1] on the number of k-rich η-non-degenerate spheres in 3-space with respect to any set of n points in ℝ³, from O(n⁴/k⁵ + n³/k³), which holds for all 0 < η < 1/2, to O*(n⁴/k^{11/2} + n²/k²), which holds for all 0 < η < 1 (in both bounds, the constants of proportionality depend on η). The new bound implies the improved upper bound O*(n^{58/27}) ≈ O(n^{2.1482}) on the number of mutually similar triangles spanned by n points in ℝ³; the previous bound was O(n^{13/6}) ≈ O(n^{2.1667}) [1].
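Both definitions can be tested by brute force, using the fact that every circle on σ is the intersection of σ with a plane, so it suffices to examine the planes spanned by triples of the on-sphere points. A naive sketch (Python with NumPy; names and tolerances ours):

```python
import itertools
import numpy as np

def on_sphere(P, center, radius, tol=1e-9):
    """The points of P lying on the sphere sigma = (center, radius)."""
    c = np.asarray(center, dtype=float)
    return [np.asarray(p, dtype=float) for p in P
            if abs(np.linalg.norm(np.asarray(p, dtype=float) - c) - radius) < tol]

def is_k_rich(P, center, radius, k):
    return len(on_sphere(P, center, radius)) >= k

def is_eta_nondegenerate(P, center, radius, eta, tol=1e-9):
    """No circle on sigma may contain more than eta * |sigma ∩ P| points;
    each such circle is cut out by a plane through three on-sphere points."""
    S = on_sphere(P, center, radius)
    for a, b, c in itertools.combinations(S, 3):
        normal = np.cross(b - a, c - a)
        if np.linalg.norm(normal) < tol:     # collinear triple spans no plane
            continue
        normal /= np.linalg.norm(normal)
        on_circle = sum(abs(np.dot(p - a, normal)) < tol for p in S)
        if on_circle > eta * len(S):
            return False
    return True
```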
Extending an old conjecture of Tutte, Jaeger conjectured in 1988 that for any fixed integer p ≥ 1, the edges of any 4p-edge-connected graph can be oriented so that the difference between the outdegree and the indegree of each vertex is divisible by 2p+1. It is known that it suffices to prove this conjecture for (4p+1)-regular, 4p-edge-connected graphs. Here we show that there exists a finite p₀ such that for every p > p₀ the assertion of the conjecture holds for all (4p+1)-regular graphs that satisfy some mild quasi-random properties, namely, the absolute value of each of their non-trivial eigenvalues is at most c₁p^{2/3} and the neighbourhood of each vertex contains at most c₂p^{3/2} edges, where c₁, c₂ > 0 are two absolute constants. In particular, this implies that for p > p₀ the assertion of the conjecture holds asymptotically almost surely for random (4p+1)-regular graphs.
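For small instances the conjectured orientations can be found exhaustively. A brute-force sketch (Python; names ours); K₅ is 4-regular and 4-edge-connected, so it is a test case for p = 1, where divisibility by 3 of a difference in {−4, −2, 0, 2, 4} forces outdegree to equal indegree at every vertex:

```python
from itertools import product

def find_mod_orientation(edges, n, p):
    """Search for an orientation in which outdegree minus indegree is
    divisible by 2p+1 at every vertex; returns oriented edges or None."""
    mod = 2 * p + 1
    for flips in product((False, True), repeat=len(edges)):
        diff = [0] * n                           # outdegree - indegree
        for (u, v), flip in zip(edges, flips):
            a, b = (v, u) if flip else (u, v)    # orient a -> b
            diff[a] += 1
            diff[b] -= 1
        if all(x % mod == 0 for x in diff):
            return [(v, u) if f else (u, v) for (u, v), f in zip(edges, flips)]
    return None

K5 = [(i, j) for i in range(5) for j in range(i + 1, 5)]
print(find_mod_orientation(K5, 5, 1))   # an Eulerian orientation of K_5
```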
Let d = (d₁, d₂, …, dₙ) be a vector of non-negative integers with even sum. We prove some basic facts about the structure of a random graph with degree sequence d, including the probability of a given subgraph or induced subgraph.
Although there are many results of this kind, they are restricted to the sparse case with only a few exceptions. Our focus is instead on the case where the average degree is approximately a constant fraction of n.
Our approach is the multidimensional saddle-point method. This extends the enumerative work of McKay and Wormald (1990) and is analogous to the theory developed for bipartite graphs by Greenhill and McKay (2009).
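For experimentation, the standard configuration model offers a quick way to sample a random multigraph with a given degree sequence; this is a different and much cruder device than the saddle-point enumeration above, and it yields the uniform distribution on simple graphs only after conditioning on simplicity. A sketch (Python; names ours):

```python
import random

def configuration_model(d):
    """Sample a multigraph with degree sequence d: lay down d_i 'stubs' at
    vertex i and pair all stubs uniformly at random."""
    assert sum(d) % 2 == 0, "degree sum must be even"
    stubs = [v for v, deg in enumerate(d) for _ in range(deg)]
    random.shuffle(stubs)
    return [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]

print(configuration_model([2, 2, 2, 2]))   # a random 2-regular multigraph
```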
This text closely examines ideas, analysis, and implementation details of data structures as a specialised topic in applied algorithms. It looks at efficient ways to realise query and update operations on sets of numbers, intervals, or strings by various data structures, including: search trees; structures for sets of intervals or piece-wise constant functions; orthogonal range search structures; heaps; union-find structures; dynamization and persistence of structures; structures for strings; and hash tables. Instead of relegating data structures to trivial material used to illustrate object-oriented programming methodology, this is the first volume to show data structures as a crucial algorithmic topic. Numerous code examples in C and more than 500 references make Advanced Data Structures an indispensable text.
Information propagation through peer-to-peer systems, online social systems, wireless mobile ad hoc networks and other modern structures can be modelled as an epidemic on a network of contacts. Understanding how epidemic processes interact with network topology allows us to predict their ultimate course, understand phase transitions and develop strategies to control and optimise dissemination. This book is a concise introduction for applied mathematicians and computer scientists to basic models, analytical tools and mathematical and algorithmic results. Mathematical tools introduced include coupling methods, Poisson approximation (the Stein–Chen method), concentration inequalities (Chernoff bounds and the Azuma–Hoeffding inequality) and branching processes. The authors examine the small-world phenomenon and preferential attachment, as well as classical epidemics. Each chapter ends with pointers to the wider literature. An ideal accompaniment for graduate courses, this book is also for researchers (statistical physicists, biologists, social scientists) who need an efficient guide to modern approaches to epidemic modelling on networks.
The study of the structural properties of large random planar graphs has in recent years become a field of intense research in computer science and discrete mathematics. Nowadays, a random planar graph is an important and challenging model for evaluating methods that are developed to study properties of random graphs from classes with structural side constraints.
In this paper we focus on the structure of random 2-connected planar graphs with respect to the sizes of their 3-connected building blocks, which we call cores. In fact, we prove a general theorem about random biconnected graphs from various classes. If Bₙ is a graph drawn uniformly at random from a suitable class of labelled biconnected graphs, then we show that with probability 1 − o(1) as n → ∞, Bₙ belongs to exactly one of the following categories:
(i) either there is a unique giant core in Bₙ; that is, there is a constant 0 < c < 1, depending only on the class, such that the largest core contains ~ cn vertices, and every other core contains at most n^α vertices, where 0 < α < 1 also depends only on the class;
(ii) or all cores of Bₙ contain O(log n) vertices.
Moreover, we find the critical condition that determines the category to which Bₙ belongs, and also provide sharp concentration results for the counts of cores of all sizes between 1 and n. As a corollary, we obtain that a random biconnected planar graph belongs to category (i), where in particular c = 0.765… and α = 2/3.
Nash equilibrium is the most commonly used notion of equilibrium in game theory. However, it suffers from numerous problems. Some are well known in the game theory community; for example, the Nash equilibrium of the repeated prisoner's dilemma is neither normatively nor descriptively reasonable. Moreover, new problems arise when considering Nash equilibrium from a computer science perspective: for example, Nash equilibrium is not robust (it does not tolerate ‘faulty’ or ‘unexpected’ behaviour), it does not deal with coalitions, it does not take computation cost into account, and it does not deal with cases where players are not aware of all aspects of the game. Solution concepts that try to address these shortcomings of Nash equilibrium are discussed.
Introduction
Nash equilibrium is the most commonly used notion of equilibrium in game theory. Intuitively, a Nash equilibrium is a strategy profile (a collection of strategies, one for each player in the game) such that no player can do better by deviating. The intuition behind Nash equilibrium is that it represents a possible steady state of play: it is a fixed point where each player holds correct beliefs about what the other players are doing, and plays a best response to those beliefs. Part of what makes Nash equilibrium so attractive is that in games where each player has only finitely many possible deterministic strategies, and we allow mixed (i.e., randomised) strategies, a Nash equilibrium is guaranteed to exist [Nash, 1950a] (this was, in fact, the key result of Nash's thesis).
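Since expected payoff is linear in each player's mixing probabilities, a mixed profile is a Nash equilibrium exactly when no pure-strategy deviation gains; this makes equilibria easy to verify. A sketch (Python with NumPy; names ours), on matching pennies, which has no pure equilibrium but a mixed one at (1/2, 1/2):

```python
import numpy as np

def is_nash(A, B, x, y, tol=1e-9):
    """Is (x, y) a Nash equilibrium of the bimatrix game (A, B)?  By linearity
    it suffices that no pure deviation beats the equilibrium payoff."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return ((A @ y).max() <= x @ A @ y + tol and
            (x @ B).max() <= x @ B @ y + tol)

A = np.array([[1, -1], [-1, 1]])    # row player's payoffs in matching pennies
B = -A                              # zero-sum: column player's payoffs
print(is_nash(A, B, [0.5, 0.5], [0.5, 0.5]))   # True: the mixed equilibrium
print(is_nash(A, B, [1.0, 0.0], [1.0, 0.0]))   # False: column player deviates
```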
This chapter gives an introduction to the connection between automata theory and the theory of two-player games of infinite duration. We illustrate how the theory of automata on infinite words can be used to solve games with complex winning conditions, for example conditions specified by logical formulae. Conversely, infinite games are a useful tool for solving problems about automata on infinite trees, such as complementation and the emptiness test.
Introduction
The aim of this chapter is to explain some interesting connections between automata theory and games of infinite duration. The context in which these connections have been established is the problem of automatic circuit synthesis from specifications, as posed by Church [1962]. A circuit can be viewed as a device that transforms input sequences of bit vectors into output sequences of bit vectors. If the circuit acts as a kind of control device, then these sequences are assumed to be infinite because the computation should never halt.
The task in synthesis is to construct such a circuit based on a formal specification describing the desired input/output behaviour. This problem setting can be viewed as a game of infinite duration between two players: The first player provides the bit vectors for the input, and the second player produces the output bit vectors. The winning condition of the game is given by the specification. The goal is to find a strategy for the second player such that all pairs of input/output sequences that can be produced according to the strategy satisfy the specification.
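As a toy instance of this setting (our example, not Church's), take the specification 'the t-th output bit equals the (t−1)-th input bit, with output 0 at time 0'. It is realised by a strategy with one bit of memory, i.e., a two-state Mealy machine:

```python
def delay_strategy():
    """A finite-state strategy for the controller: remember the last input
    bit and emit it one step later, satisfying the toy specification above
    on every infinite input sequence."""
    state = 0                                  # the last input bit seen
    def step(input_bit):
        nonlocal state
        output_bit, state = state, input_bit
        return output_bit
    return step

strategy = delay_strategy()
print([strategy(b) for b in [1, 0, 1, 1, 0]])   # [0, 1, 0, 1, 1]
```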
We study observation-based strategies for two-player turn-based games played on graphs with parity objectives. An observation-based strategy relies on imperfect information about the history of a play, namely, on the past sequence of observations. Such games occur in the synthesis of a controller that does not see the private state of the plant. Our main results are twofold. First, we give a fixed-point algorithm for computing the set of states from which a player can win with a deterministic observation-based strategy for a parity objective. Second, we give an algorithm for computing the set of states from which a player can win with probability 1 with a randomised observation-based strategy for a reachability objective. This set is of interest because in the absence of perfect information, randomised strategies are more powerful than deterministic ones.
Introduction
Games are natural models for reactive systems. We consider zero-sum, two-player, turn-based games of infinite duration played on finite graphs. One player represents a control program, and the second player represents its environment. The graph describes the possible interactions of the system, and the game is of infinite duration because reactive systems are usually not expected to terminate. In the simplest setting, the game is turn-based and with perfect information, meaning that the players have full knowledge of both the game structure and the sequence of moves played by the adversary. The winning condition in a zero-sum graph game is defined by a set of plays that the first player aims to enforce, and that the second player aims to avoid.
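For perfect-information reachability objectives, the winning region is computed by the classical attractor fixed-point, the baseline that the algorithms discussed in this chapter generalise. A minimal sketch (Python; names ours; we assume every vertex has an outgoing edge, so plays are infinite):

```python
def attractor(vertices, edges, owner, target):
    """Vertices from which Player 0 can force a visit to `target` in a
    turn-based game: iterate 'Player 0 has some move into the set, Player 1
    has only moves into the set' until nothing changes."""
    succ = {v: [] for v in vertices}
    for u, v in edges:
        succ[u].append(v)
    win, changed = set(target), True
    while changed:
        changed = False
        for v in vertices:
            if v in win:
                continue
            if owner[v] == 0:
                ok = any(w in win for w in succ[v])
            else:
                ok = all(w in win for w in succ[v])  # succ[v] assumed non-empty
            if ok:
                win.add(v)
                changed = True
    return win

V = [0, 1, 2, 3]
E = [(0, 1), (0, 2), (1, 0), (1, 3), (2, 0), (2, 2), (3, 3)]
owner = {0: 0, 1: 0, 2: 1, 3: 0}
print(attractor(V, E, owner, {3}))   # {0, 1, 3}: from 2, Player 1 loops forever
```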
In this chapter we discuss relationships between logic and games, focusing on first-order logic and fixed-point logics, and on reachability and parity games. We discuss the general notion of model-checking games. While it is easily seen that the semantics of first-order logic can be captured by reachability games, more effort is required to see that parity games are the appropriate games for evaluating formulae from least fixed-point logic and the modal µ-calculus. The algorithmic consequences of this result are discussed. We also explore the reverse relationship between games and logic, namely the question of how winning regions in games are definable in logic. Finally, the connections between logic and games are discussed for more complicated scenarios provided by inflationary fixed-point logic and the quantitative µ-calculus.
Introduction
The idea that logical reasoning can be seen as a dialectic game, in which a proponent attempts to convince an opponent of the truth of a proposition, is very old. Indeed, it can be traced back to the studies of Zeno, Socrates, and Aristotle on logic and rhetoric. Modern manifestations of this idea are the presentation of the semantics of logical formulae by means of model-checking games and the algorithmic evaluation of logical statements via the synthesis of winning strategies in such games.
Model-checking games are two-player games played on an arena formed as the product of a structure and a formula ψ, where one player, called the Verifier, attempts to prove that ψ is true in the structure, while the other player, the Falsifier, attempts to refute this.
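The propositional core of this idea fits in a few lines: the Verifier resolves disjunctions, the Falsifier resolves conjunctions, and negation swaps their roles, so the Verifier has a winning strategy exactly when the formula is true. A toy sketch (Python; ours, omitting quantifiers and the explicit arena):

```python
def verifier_wins(phi, val):
    """Game-theoretic evaluation of a propositional formula.
    phi: ('var', name) | ('not', f) | ('or', f, g) | ('and', f, g)."""
    op = phi[0]
    if op == 'var':
        return val[phi[1]]
    if op == 'not':
        return not verifier_wins(phi[1], val)        # roles of the players swap
    if op == 'or':
        return verifier_wins(phi[1], val) or verifier_wins(phi[2], val)   # Verifier picks
    if op == 'and':
        return verifier_wins(phi[1], val) and verifier_wins(phi[2], val)  # Falsifier picks
    raise ValueError(op)

# (x or y) and not x is true under x = False, y = True, so the Verifier wins.
phi = ('and', ('or', ('var', 'x'), ('var', 'y')), ('not', ('var', 'x')))
print(verifier_wins(phi, {'x': False, 'y': True}))   # True
```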