The study of the structural properties of large random planar graphs has become in recent years a field of intense research in computer science and discrete mathematics. Nowadays, a random planar graph is an important and challenging model for evaluating methods that are developed to study properties of random graphs from classes with structural side constraints.
In this paper we focus on the structure of random 2-connected planar graphs regarding the sizes of their 3-connected building blocks, which we call cores. In fact, we prove a general theorem regarding random biconnected graphs from various classes. If Bn is a graph drawn uniformly at random from a suitable class of labelled biconnected graphs, then we show that with probability 1 − o(1) as n → ∞, Bn belongs to exactly one of the following categories:
(i) either there is a unique giant core in Bn; that is, there is a constant 0 < c < 1, depending only on the class, such that the largest core contains ~ cn vertices, while every other core contains at most n^α vertices, where 0 < α < 1 also depends only on the class;
(ii) or all cores of Bn contain O(log n) vertices.
Moreover, we find the critical condition that determines the category to which Bn belongs, and we also provide sharp concentration results for the counts of cores of all sizes between 1 and n. As a corollary, we obtain that a random biconnected planar graph belongs to category (i), with c = 0.765… and α = 2/3.
Nash equilibrium is the most commonly used notion of equilibrium in game theory. However, it suffers from numerous problems. Some are well known in the game theory community; for example, the Nash equilibrium of the repeated prisoner's dilemma is neither normatively nor descriptively reasonable. However, new problems arise when considering Nash equilibrium from a computer science perspective: for example, Nash equilibrium is not robust (it does not tolerate ‘faulty’ or ‘unexpected’ behaviour), it does not deal with coalitions, it does not take computation cost into account, and it does not deal with cases where players are not aware of all aspects of the game. Solution concepts that try to address these shortcomings of Nash equilibrium are discussed.
Introduction
Nash equilibrium is the most commonly used notion of equilibrium in game theory. Intuitively, a Nash equilibrium is a strategy profile (a collection of strategies, one for each player in the game) such that no player can do better by deviating. The intuition behind Nash equilibrium is that it represents a possible steady state of play. It is a fixed point where each player holds correct beliefs about what other players are doing, and plays a best response to those beliefs. Part of what makes Nash equilibrium so attractive is that in games where each player has only finitely many possible deterministic strategies, and we allow mixed (i.e., randomised) strategies, there is guaranteed to be a Nash equilibrium [Nash, 1950a] (this was, in fact, the key result of Nash's thesis).
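To make the definition concrete, here is a minimal sketch (ours, not from the chapter; the payoff table is the standard prisoner's dilemma, used purely as an illustration) that checks whether a pure strategy profile is a Nash equilibrium by testing for profitable unilateral deviations.

```python
# Minimal sketch (illustrative): pure Nash equilibrium check in a finite
# two-player game. payoffs[(row, col)] = (row player's payoff, column player's payoff).
from itertools import product

payoffs = {  # prisoner's dilemma: C = cooperate, D = defect
    ("C", "C"): (-1, -1), ("C", "D"): (-3, 0),
    ("D", "C"): (0, -3),  ("D", "D"): (-2, -2),
}
actions = ["C", "D"]

def is_nash(r, c):
    """No player may gain by deviating unilaterally from (r, c)."""
    row_ok = all(payoffs[(r2, c)][0] <= payoffs[(r, c)][0] for r2 in actions)
    col_ok = all(payoffs[(r, c2)][1] <= payoffs[(r, c)][1] for c2 in actions)
    return row_ok and col_ok

print([p for p in product(actions, actions) if is_nash(*p)])  # [('D', 'D')]
```

Note that (D, D) is the unique pure equilibrium even though both players would prefer (C, C); this is exactly the kind of normative weakness of Nash equilibrium mentioned above.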
This chapter gives an introduction to the connection between automata theory and the theory of two-player games of infinite duration. We illustrate how the theory of automata on infinite words can be used to solve games with complex winning conditions, for example those specified by logical formulae. Conversely, infinite games are a useful tool for solving problems about automata on infinite trees, such as complementation and the emptiness test.
Introduction
The aim of this chapter is to explain some interesting connections between automata theory and games of infinite duration. The context in which these connections have been established is the problem of automatic circuit synthesis from specifications, as posed by Church [1962]. A circuit can be viewed as a device that transforms input sequences of bit vectors into output sequences of bit vectors. If the circuit acts as a kind of control device, then these sequences are assumed to be infinite because the computation should never halt.
The task in synthesis is to construct such a circuit based on a formal specification describing the desired input/output behaviour. This problem setting can be viewed as a game of infinite duration between two players: The first player provides the bit vectors for the input, and the second player produces the output bit vectors. The winning condition of the game is given by the specification. The goal is to find a strategy for the second player such that all pairs of input/output sequences that can be produced according to the strategy satisfy the specification.
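As a toy illustration (ours, not from the chapter), take the specification "the current output bit is 1 exactly when the two most recent input bits were both 1". A strategy for the second player realising it is a finite-state transducer (a Mealy machine) with one bit of memory, sketched here as a Python coroutine.

```python
# Toy illustration: a one-bit Mealy machine implementing the second player's
# strategy for "output 1 iff the last two input bits were both 1".
def output_player():
    prev = 0          # memory: the previous input bit
    out = None
    while True:
        bit = yield out                       # next input bit from the first player
        out = 1 if prev == 1 and bit == 1 else 0
        prev = bit

player = output_player()
next(player)          # prime the coroutine
print([player.send(b) for b in [1, 1, 0, 1, 1, 1]])  # [0, 1, 0, 0, 1, 1]
```

Every pair of infinite input/output sequences this strategy can produce satisfies the specification, which is what it means for the strategy to be winning.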
We study observation-based strategies for two-player turn-based games played on graphs with parity objectives. An observation-based strategy relies on imperfect information about the history of a play, namely, on the past sequence of observations. Such games occur in the synthesis of a controller that does not see the private state of the plant. Our main results are twofold. First, we give a fixed-point algorithm for computing the set of states from which a player can win with a deterministic observation-based strategy for a parity objective. Second, we give an algorithm for computing the set of states from which a player can win with probability 1 with a randomised observation-based strategy for a reachability objective. This set is of interest because in the absence of perfect information, randomised strategies are more powerful than deterministic ones.
Introduction
Games are natural models for reactive systems. We consider zero-sum, two-player, turn-based games of infinite duration played on finite graphs. One player represents a control program, and the second player represents its environment. The graph describes the possible interactions of the system, and the game is of infinite duration because reactive systems are usually not expected to terminate. In the simplest setting, the game is turn-based and with perfect information, meaning that the players have full knowledge of both the game structure and the sequence of moves played by the adversary. The winning condition in a zero-sum graph game is defined by a set of plays that the first player aims to enforce and that the second player aims to avoid.
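For reference, here is a sketch (ours, not from the chapter) of the classical attractor fixed point that computes the winning region of a reachability objective in the perfect-information case; the chapter's algorithms lift this kind of fixed point to the imperfect-information setting, where it runs over sets of states (beliefs) rather than single states.

```python
# Sketch (illustrative): attractor fixed point for a perfect-information
# reachability game. Player 0 wins from s if she can force a visit to `target`.
def attractor(states, owner, edges, target):
    win = set(target)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in win:
                continue
            succ = edges[s]
            if owner[s] == 0 and any(t in win for t in succ):
                win.add(s); changed = True   # player 0 chooses a good edge
            elif owner[s] == 1 and succ and all(t in win for t in succ):
                win.add(s); changed = True   # player 1 cannot escape
    return win

# Hypothetical 4-state example: every state is winning for player 0.
owner = {0: 0, 1: 1, 2: 0, 3: 1}
edges = {0: [1, 2], 1: [0, 3], 2: [3], 3: [3]}
print(attractor(range(4), owner, edges, {3}))  # {0, 1, 2, 3}
```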
In this chapter we discuss relationships between logic and games, focusing on first-order logic and fixed-point logics, and on reachability and parity games. We discuss the general notion of model-checking games. While it is easily seen that the semantics of first-order logic can be captured by reachability games, more effort is required to see that parity games are the appropriate games for evaluating formulae from least fixed-point logic and the modal µ-calculus. The algorithmic consequences of this result are discussed. We also explore the reverse relationship between games and logic, namely the question of how winning regions in games are definable in logic. Finally, the connections between logic and games are discussed for the more complicated scenarios provided by inflationary fixed-point logic and the quantitative µ-calculus.
Introduction
The idea that logical reasoning can be seen as a dialectic game, where a proponent attempts to convince an opponent of the truth of a proposition, is very old. Indeed, it can be traced back to the studies of Zeno, Socrates, and Aristotle on logic and rhetoric. Modern manifestations of this idea are the presentation of the semantics of logical formulae by means of model-checking games and the algorithmic evaluation of logical statements via the synthesis of winning strategies in such games.
Model-checking games are two-player games played on an arena formed as the product of a structure A and a formula ψ, where one player, called the Verifier, attempts to prove that ψ is true in A, while the other player, the Falsifier, attempts to refute this.
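As a small self-contained sketch (ours, not from the chapter): for first-order logic in negation normal form, the game semantics can be evaluated by a recursion in which the Verifier resolves disjunctions and existential quantifiers while the Falsifier resolves conjunctions and universal quantifiers; the Verifier has a winning strategy exactly when the formula holds.

```python
# Sketch (illustrative): who wins the model-checking game on a finite structure.
# Verifier moves at "or"/"exists"; Falsifier moves at "and"/"forall".
universe = {1, 2, 3}                         # hypothetical finite structure
def even(x): return x % 2 == 0               # hypothetical unary relation

def verifier_wins(phi, env):
    op = phi[0]
    if op == "atom":                         # ("atom", relation, [variables])
        return phi[1](*[env[v] for v in phi[2]])
    if op == "or":                           # Verifier picks a disjunct
        return any(verifier_wins(p, env) for p in phi[1:])
    if op == "and":                          # Falsifier picks a conjunct
        return all(verifier_wins(p, env) for p in phi[1:])
    if op == "exists":                       # Verifier picks a witness
        return any(verifier_wins(phi[2], {**env, phi[1]: a}) for a in universe)
    if op == "forall":                       # Falsifier picks a challenge
        return all(verifier_wins(phi[2], {**env, phi[1]: a}) for a in universe)

# "exists x. even(x)" holds in {1, 2, 3}, so the Verifier wins:
print(verifier_wins(("exists", "x", ("atom", even, ["x"])), {}))  # True
```

For fixed-point logics and the modal µ-calculus this simple recursion no longer suffices, which is precisely where parity games enter.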
This is a short introduction to the subject of strategic games. We focus on the concepts of best response, Nash equilibrium, strict and weak dominance, and mixed strategies, and study the relation between these concepts in the context of the iterated elimination of strategies. Also, we discuss some variants of the original definition of a strategic game. Finally, we introduce the basics of mechanism design and use pre-Bayesian games to explain it.
Introduction
Mathematical game theory, as launched by von Neumann and Morgenstern in their seminal book, von Neumann and Morgenstern [1944], and followed by Nash's contributions, Nash [1950, 1951], has become a standard tool in economics for the study and description of various economic processes, including competition, cooperation, collusion, strategic behaviour and bargaining. Since then it has also been successfully used in biology, political sciences, psychology and sociology. With the advent of the Internet, game theory has become increasingly relevant in computer science.
One of the main areas in game theory is that of strategic games (sometimes also called non-cooperative games), which form a simple model of interaction between profit-maximising players. In strategic games each player has a payoff function that he aims to maximise, and the value of this function depends on the decisions taken simultaneously by all players. Such a simple description is still amenable to various interpretations, depending on the assumptions about the existence of private information.
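To fix ideas, here is a sketch (ours, with an illustrative payoff table) of iterated elimination of strictly dominated strategies, one of the concepts the chapter studies, for a two-player strategic game.

```python
# Sketch (illustrative): iterated elimination of strictly dominated strategies.
# u1[(r, c)], u2[(r, c)]: payoffs of the row and column player.
def iterated_elimination(rows, cols, u1, u2):
    changed = True
    while changed:
        changed = False
        for r in list(rows):
            if any(all(u1[(r2, c)] > u1[(r, c)] for c in cols)
                   for r2 in rows if r2 != r):
                rows.discard(r); changed = True   # r is strictly dominated
        for c in list(cols):
            if any(all(u2[(r, c2)] > u2[(r, c)] for r in rows)
                   for c2 in cols if c2 != c):
                cols.discard(c); changed = True   # c is strictly dominated
    return rows, cols

rows, cols = {"U", "D"}, {"L", "M", "R"}
u1 = {("U","L"): 1, ("U","M"): 1, ("U","R"): 0, ("D","L"): 0, ("D","M"): 0, ("D","R"): 2}
u2 = {("U","L"): 0, ("U","M"): 2, ("U","R"): 1, ("D","L"): 3, ("D","M"): 1, ("D","R"): 0}
print(iterated_elimination(rows, cols, u1, u2))  # ({'U'}, {'M'})
```

In this game the column player's R is strictly dominated by M; after removing it, D is dominated by U, and then L by M, leaving the single profile (U, M).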
This chapter provides an introduction to graph searching games, a form of one- or two-player games on graphs that have been studied intensively in algorithmic graph theory. The unifying idea of graph searching games is that a number of searchers want to find a fugitive on an arena defined by a graph or hypergraph. Depending on the precise definition of the moves allowed for the searchers and the fugitive, and on the type of graph the game is played on, this yields a huge variety of graph searching games.
The objective of this chapter is to introduce and motivate the main concepts studied in graph searching and to demonstrate some of the central ideas developed in this area.
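As one concrete instance (our sketch, not the chapter's code): in the classical cops-and-robber game with one cop, where both players move along edges with full information, a finite graph is a win for the cop if and only if it is dismantlable, i.e., one can repeatedly delete a "corner" vertex whose closed neighbourhood is contained in that of a neighbour (the classical characterisation of Nowakowski and Winkler, and Quilliot).

```python
# Sketch (illustrative): deciding the one-cop game via dismantlability.
def is_cop_win(adj):
    """adj: dict vertex -> set of neighbours of an undirected graph."""
    verts = set(adj)
    closed = {v: adj[v] | {v} for v in verts}       # closed neighbourhoods
    while len(verts) > 1:
        corner = next((v for v in verts for u in adj[v] & verts
                       if u != v and closed[v] & verts <= closed[u] & verts),
                      None)
        if corner is None:
            return False      # no corner left: the robber can evade forever
        verts.remove(corner)  # a corner is never needed by the robber
    return True

path = {0: {1}, 1: {0, 2}, 2: {1}}                  # cop-win
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}   # robber wins on a 4-cycle
print(is_cop_win(path), is_cop_win(c4))             # True False
```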
Introduction
Graph searching games are a form of two-player games where one player, the Searcher or Cop, tries to catch a Fugitive or Robber. The study of graph searching games dates back to the dawn of mankind: running after one another or after an animal has been one of the earliest human activities, and surely our hunter-gatherer ancestors thought about ways of optimising their search strategies to maximise their success.
Game playing is a powerful metaphor that fits many situations where interaction between autonomous agents plays a central role. Numerous tasks in computer science, such as design, synthesis, verification, testing, query evaluation and planning, can be formulated in game-theoretic terms. Viewing them abstractly as games reveals the underlying algorithmic questions and helps to clarify relationships between problem domains. As an organisational principle, games offer a fresh and intuitive way of thinking through complex issues.
As a result, mathematical models of games play an increasingly important role in a number of scientific disciplines and, in particular, in many branches of computer science. One of the scientific communities studying and applying games in computer science has formed around the European Network ‘Games for Design and Verification’ (GAMES), which proposes a research and training programme for the design and verification of computing systems, using a methodology that is based on the interplay of finite and infinite games, mathematical logic and automata theory.
This network had initially been set up as a Marie Curie Research Training Network, funded by the European Union between 2002 and 2006. In its four years of existence this network built a strong European research community that did not exist before. Its flagship activity – the annual series of GAMES workshops – saw an ever-increasing number of participants from both within and outside Europe. The ESF Research Networking Programme GAMES, funded by the European Science Foundation ESF from 2008 to 2013, builds on the momentum of this first GAMES network, but it is scientifically broader and more ambitious, and it covers more countries and more research groups.
Minimizing a deterministic finite automaton (DFA) is a very important problem in the theory of automata and formal languages. Hopcroft's algorithm is the fastest known solution to this problem. In this paper we analyze the behavior of the algorithm on a family of binary automata, called tree-like automata, associated to binary labeled trees constructed from words. We prove that all executions of the algorithm on tree-like automata associated to trees constructed from standard words have running times with the same asymptotic growth rate. In particular, we provide lower and upper bounds for the running time of the algorithm, expressed in terms of combinatorial properties of the trees. We also consider tree-like automata associated to trees constructed from de Bruijn words, and we prove that a queue implementation of the waiting set gives a Θ(n log n) execution, while a stack implementation produces a linear execution. This result confirms the conjecture given in [A. Paun, M. Paun and A. Rodríguez-Patón, Theoret. Comput. Sci. 410 (2009) 2424–2430], formulated for a family of unary automata, and in addition gives a positive answer for the binary case.
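For orientation, here is a compact sketch (ours, not the paper's code) of Hopcroft's partition-refinement algorithm; the waiting set whose queue versus stack discipline the paper analyzes appears explicitly as W.

```python
# Sketch (illustrative): Hopcroft's DFA minimization by partition refinement.
def hopcroft(states, alphabet, delta, accepting):
    """delta[(s, c)] = successor of state s on letter c; returns state classes."""
    partition = {frozenset(accepting), frozenset(states - accepting)} - {frozenset()}
    W = list(partition)                      # the waiting set
    while W:
        splitter = W.pop()                   # stack discipline; W.pop(0) for a queue
        for c in alphabet:
            pre = {s for s in states if delta[(s, c)] in splitter}
            for block in list(partition):
                inter, diff = block & pre, block - pre
                if inter and diff:           # the splitter splits this block
                    partition -= {block}
                    partition |= {inter, diff}
                    if block in W:
                        W.remove(block); W += [inter, diff]
                    else:                    # Hopcroft's trick: keep the smaller half
                        W.append(min(inter, diff, key=len))
    return partition

# Example: states 1 and 2 are equivalent and are merged into one class.
states, alphabet, accepting = {0, 1, 2}, {"a"}, {1, 2}
delta = {(0, "a"): 1, (1, "a"): 2, (2, "a"): 2}
print(hopcroft(states, alphabet, delta, accepting))  # two classes: {0} and {1, 2}
```

The paper's point is that on tree-like automata the discipline chosen for W changes the running time from Θ(n log n) (queue) to linear (stack).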
Wang automata are devices for picture language recognition that we recently introduced; they characterize the class REC of recognizable picture languages. Wang automata are thus equivalent to tiling systems and online tessellation acceptors, and, like Wang systems, are based on labeled Wang tiles. The present work focuses on scanning strategies, proving that the ones Wang automata are based on are those following four kinds of movements: boustrophedonic, “L-like”, “U-like”, and spirals.
An ever-present, common-sense idea in language modelling research is that, for a word to be a valid phrase, it should comply with multiple constraints at once. A new language definition model is studied, based on agreement or consensus between similar strings. Considering a regular set of strings over a bipartite alphabet made of pairs of unmarked/marked symbols, a match relation is introduced in order to specify when such strings agree. A regular set over the bipartite alphabet can then be interpreted as specifying another language over the unmarked alphabet, called the consensual language. A word is in the consensual language if a set of corresponding matching strings is in the original language. The family thus defined includes the regular languages and also interesting non-semilinear ones. The word problem can be solved in NLOGSPACE, hence in polynomial time. The emptiness problem is undecidable. Closure properties are proved for intersection with regular sets and inverse alphabetical homomorphism. Several conditions for a consensual definition to yield a regular language are presented, and it is shown that the size of a consensual specification of a regular language can be in a logarithmic ratio with respect to a DFA. The family is incomparable with the context-free and tree-adjoining grammar families.
Quantum annealing, or quantum stochastic optimization, is a classical randomized algorithm that provides good heuristics for the solution of hard optimization problems. The algorithm, suggested by the behaviour of quantum systems, is an example of fruitful cross-fertilization between classical and quantum computer science. In this survey paper we illustrate how hard combinatorial problems are tackled by quantum computation and present some examples of the heuristics provided by quantum annealing. We also present preliminary results about the application of quantum dissipation (as an alternative to imaginary-time evolution) to the task of driving a quantum system toward its state of lowest energy.
We add sequential operations to the categorical algebra of weighted and Markov automata introduced in [L. de Francesco Albasini, N. Sabadini and R.F.C. Walters, arXiv:0909.4136]. The extra expressiveness of the algebra permits the description of hierarchical systems, and ones with evolving geometry. We make a comparison with the probabilistic automata of Lynch et al. [SIAM J. Comput. 37 (2007) 977–1013].