The cyclic shift of a language L, defined as SHIFT(L) = {vu | uv ∈ L}, is an operation known to preserve both regularity and context-freeness. Its descriptional complexity has been addressed in Maslov's pioneering paper on the state complexity of regular language operations [Soviet Math. Dokl. 11 (1970) 1373–1375], where a high lower bound for partial DFAs using a growing alphabet was given. We improve this result by using a fixed 4-letter alphabet, obtaining a lower bound $(n-1)! \cdot 2^{(n-1)(n-2)}$, which shows that the state complexity of cyclic shift is $2^{n^2 + n\log n - O(n)}$ for alphabets with at least 4 letters. For 2- and 3-letter alphabets, we prove $2^{\Theta(n^2)}$ state complexity. We also establish a tight $2n^2+1$ lower bound for the nondeterministic state complexity of this operation using a binary alphabet.
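The operation itself is easy to state; as an illustration only (it says nothing about state complexity), the following sketch applies the definition SHIFT(L) = {vu | uv ∈ L} to a finite sample language with hypothetical words:

```python
def cyclic_shift(language):
    """All rotations vu of every word uv in a finite language."""
    shifted = set()
    for w in language:
        for i in range(len(w) + 1):
            u, v = w[:i], w[i:]
            shifted.add(v + u)
    return shifted

# Example: the rotations of "abc" are "abc", "bca" and "cab".
print(sorted(cyclic_shift({"ab", "abc"})))  # ['ab', 'abc', 'ba', 'bca', 'cab']
```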
We propose a variation of Wythoff's game on three piles of tokens, in the sense that the losing positions can be derived from the Tribonacci word instead of the Fibonacci word for the two-pile game. Thanks to the corresponding exotic numeration system built on the Tribonacci sequence, deciding whether a game position is losing or not can be computed in polynomial time.
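The Tribonacci numeration system mentioned above can be sketched with the standard greedy expansion (the connection to the losing positions of the three-pile game is the subject of the paper and is not reproduced here):

```python
def tribonacci_basis(limit):
    """Tribonacci numbers 1, 2, 4, 7, 13, 24, ... up to just past limit;
    each term is the sum of the previous three."""
    basis = [1, 2, 4]
    while basis[-1] <= limit:
        basis.append(basis[-1] + basis[-2] + basis[-3])
    return basis

def tribonacci_repr(n):
    """Greedy Tribonacci expansion of n: binary digits, most significant
    first, never containing three consecutive 1s."""
    basis = tribonacci_basis(n)
    digits = []
    for b in reversed(basis):
        if b <= n:
            digits.append(1)
            n -= b
        elif digits:
            digits.append(0)
    return digits or [0]

print(tribonacci_repr(6))  # [1, 1, 0]: 6 = 4 + 2
print(tribonacci_repr(7))  # [1, 0, 0, 0]: 7 is itself a Tribonacci number
```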
We investigate the Laplacian eigenvalues of sparse random graphs $G_{n,p}$. We show that in the case that the expected degree $d = (n-1)p$ is bounded, the spectral gap of the normalized Laplacian is $o(1)$. Nonetheless, w.h.p. $G = G_{n,p}$ has a large subgraph core(G) such that the spectral gap of the normalized Laplacian of core(G) is as large as $1 - O(d^{-1/2})$. We derive similar results regarding the spectrum of the combinatorial Laplacian $L(G_{n,p})$. The present paper complements the work of Chung, Lu and Vu [8] on the Laplacian spectra of random graphs with given expected degree sequences. Applied to $G_{n,p}$, their results imply that in the ‘dense’ case $d \ge \ln^2 n$ the spectral gap of the normalized Laplacian is $1 - O(d^{-1/2})$ w.h.p.
A simple explicit construction is provided of a partition-valued fragmentation process whose distribution on partitions of $[n] = \{1,\dots,n\}$ at time $\theta \ge 0$ is governed by the Ewens sampling formula with parameter $\theta$. These partition-valued processes are exchangeable and consistent, as n varies. They can be derived by uniform sampling from a corresponding mass fragmentation process defined by cutting a unit interval at the points of a Poisson process with intensity $\theta x^{-1}dx$ on $\mathbb{R}_+$, arranged to be intensifying as $\theta$ increases.
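For intuition, a partition of [n] with the Ewens(θ) distribution can also be sampled by the Chinese restaurant process, a classical construction related to (though distinct from) the fragmentation construction above; the sketch below relies only on that standard fact:

```python
import random

def crp_partition(n, theta, seed=0):
    """Sample a partition of {0,...,n-1} via the Chinese restaurant
    process: element k joins an existing block of size s with probability
    s/(k+theta) and opens a new block with probability theta/(k+theta).
    The resulting random partition has the Ewens(theta) distribution."""
    rng = random.Random(seed)
    blocks = []
    for k in range(n):
        r = rng.random() * (k + theta)
        acc = 0.0
        for b in blocks:
            acc += len(b)
            if r < acc:
                b.append(k)
                break
        else:
            blocks.append([k])
    return blocks

print(crp_partition(10, 1.0))  # a list of disjoint blocks covering 0..9
```

Larger θ makes new blocks more likely, mirroring the intensifying Poisson cut points as θ increases.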
Szemerédi's regularity lemma for graphs has proved to be a powerful tool with many subsequent applications. The objective of this paper is to extend the techniques developed by Nagle, Skokan, and the authors and obtain a stronger and more ‘user-friendly’ regularity lemma for hypergraphs.
We continue the study of regular partitions of hypergraphs. In particular, we obtain corresponding counting lemmas for the regularity lemmas for hypergraphs from our paper ‘Regular Partitions of Hypergraphs: Regularity Lemmas’ (in this issue).
A widely studied model for generating binary sequences is to ‘evolve’ them on a tree according to a symmetric Markov process. We show that under this model distinguishing the true (model) tree from a false one is substantially ‘easier’ (in terms of the sequence length needed) than determining the true tree. The key tool is a new and near-tight Ramsey-type result for binary trees.
In 1972, Rosenfeld asked if every triangle-free graph could be embedded in the unit sphere $S^d$ in such a way that two vertices joined by an edge have distance more than $\sqrt{3}$ (i.e., distance more than 2π/3 on the sphere). In 1978, Larman [LAR] disproved this conjecture, constructing a triangle-free graph for which the minimum length of an edge could not exceed this bound. In addition, he conjectured that the right answer would be no better than for the class of all graphs. Larman's conjecture was independently proved by Rosenfeld [MR] and Rödl [VR]. In this last paper it was shown that no better bound can be found for graphs with arbitrarily large odd girth. We prove in this paper that this is still true for arbitrarily large girth. We then discuss the case of triangle-free graphs with linear minimum degree.
The vertex-nullity interlace polynomial of a graph, described by Arratia, Bollobás and Sorkin in [3] as evolving from questions of DNA sequencing, and extended to a two-variable interlace polynomial by the same authors in [5], evokes many open questions. These include relations between the interlace polynomial and the Tutte polynomial and the computational complexity of the vertex-nullity interlace polynomial. Here, using the medial graph of a planar graph, we relate the one-variable vertex-nullity interlace polynomial to the classical Tutte polynomial when x=y, and conclude that, like the Tutte polynomial, it is in general #P-hard to compute. We also show a relation between the two-variable interlace polynomial and the topological Tutte polynomial of Bollobás and Riordan in [13].
We define the γ invariant as the coefficient of $x^1$ in the vertex-nullity interlace polynomial, analogously to the β invariant, which is the coefficient of $x^1$ in the Tutte polynomial. We then turn to distance hereditary graphs, characterized by Bandelt and Mulder in [9] as being constructed by a sequence of adding pendant and twin vertices, and show that graphs in this class have γ invariant of $2^{n+1}$ when n true twins are added in their construction. We furthermore show that bipartite distance hereditary graphs are exactly the class of graphs with γ invariant 2, just as the series-parallel graphs are exactly the class of graphs with β invariant 1. In addition, we show that a bipartite distance hereditary graph arises precisely as the circle graph of an Euler circuit in the oriented medial graph of a series-parallel graph. From this we conclude that the vertex-nullity interlace polynomial is polynomial time to compute for bipartite distance hereditary graphs, just as the Tutte polynomial is polynomial time to compute for series-parallel graphs.
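To make the polynomial concrete, here is a brute-force evaluation sketch using the subset expansion $q(G;x) = \sum_{S\subseteq V}(x-1)^{n(G[S])}$, with $n(\cdot)$ the GF(2)-nullity of the induced adjacency matrix (an identity due to Aigner and van der Holst, assumed here; the papers above work with the pivot recursion instead):

```python
from itertools import combinations

def gf2_rank(rows):
    """Rank over GF(2) of a 0/1 matrix whose rows are given as bitmasks."""
    rank = 0
    rows = [r for r in rows if r]
    while rows:
        pivot = rows.pop()
        rank += 1
        low = pivot & -pivot                        # lowest set bit of pivot
        rows = [r ^ pivot if r & low else r for r in rows]
        rows = [r for r in rows if r]
    return rank

def interlace_at(adj, x):
    """Evaluate the vertex-nullity interlace polynomial at x via the
    subset expansion (exponential time -- tiny graphs only)."""
    n = len(adj)
    total = 0
    for size in range(n + 1):
        for S in combinations(range(n), size):
            rows = [sum(1 << jp for jp, j in enumerate(S) if adj[i][j])
                    for i in S]
            total += (x - 1) ** (size - gf2_rank(rows))
    return total

# q(K2) = 2x and q(edgeless E2) = x^2, matching the pivot recursion.
K2 = [[0, 1], [1, 0]]
E2 = [[0, 0], [0, 0]]
print(interlace_at(K2, 3), interlace_at(E2, 3))  # 6 9
```

Since q(K2) = 2x, the γ invariant (coefficient of x) of K2 is 2, consistent with K2 being bipartite distance hereditary.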
We consider the parallel approximability of two problems arising from high multiplicity scheduling, namely the unweighted model with variable processing requirements and the weighted model with identical processing requirements. These two problems are known to be modelled by a class of quadratic programs that are efficiently solvable in polynomial time. In the parallel setting, both problems are P-complete and hence cannot be efficiently solved in parallel unless P = NC. To deal with the parallel approximability of these problems, we first show a parallel additive approximation procedure for a subclass of multi-valued quadratic programming, called smooth multi-valued QP, which is defined by imposing certain restrictions on the coefficients of the instance. We use this procedure to obtain parallel approximations to dense instances of the two problems by observing that dense instances of these problems are instances of smooth multi-valued QP. The dense instances of the problems considered here are defined similarly as for other combinatorial problems in the literature. For such instances we can find in parallel a near-optimal schedule. The definition of smooth multi-valued QP, as well as the procedure for approximating it in parallel, are of interest independently of the application to the scheduling problems considered in this paper.
We study the problem of learning regular tree languages from text. We show that the framework of function distinguishability, as introduced by the author in [Theoret. Comput. Sci. 290 (2003) 1679–1711], can be generalized from the case of string languages towards tree languages. This provides a large source of identifiable classes of regular tree languages. Each of these classes can be characterized in various ways. Moreover, we present a generic inference algorithm with polynomial update time and prove its correctness. In this way, we generalize previous works of Angluin, Sakakibara and ourselves. Finally, we show that, in this way, all regular tree languages can be approximately identified.
Computing the image of a regular language by the transitive closure of a relation is a central question in regular model checking. In a recent paper Bouajjani et al. [IEEE Comput. Soc. (2001) 399–408] proved that the class of regular languages L – called APC – of the form $\bigcup_j L_{0,j}L_{1,j}L_{2,j}\cdots L_{k_j,j}$, where the union is finite and each $L_{i,j}$ is either a single symbol or a language of the form $B^*$ with B a subset of the alphabet, is closed under all semi-commutation relations R. Moreover, a recursive algorithm on the regular expressions was given to compute $R^*(L)$. This paper provides a new approach, based on automata, for the same problem. Our approach produces a simpler and more efficient algorithm which furthermore works for a larger class of regular languages closed under union, intersection, semi-commutation relations and conjugacy. The existence of this new class, PolC, answers the open question proposed in the paper of Bouajjani et al.
We establish some properties of Sturmian words and then classify the infinite words that have, for every positive integer n, exactly n+2 factors of length n. We also define the notion of k-by-k insertion on infinite words, and compute the complexity of the words obtained by applying this notion to Sturmian words. Finally, we study the balance and palindrome properties of a particular class of words of complexity n+2, which we call quasi-Sturmian words by insertion and which we characterize by means of their Parikh vectors.
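For comparison with the complexity n+2 studied here, the Sturmian baseline can be checked mechanically: the Fibonacci word (a standard Sturmian example, not taken from the paper) has exactly n+1 factors of each length n:

```python
def fibonacci_word(length):
    """Prefix of the Fibonacci word, fixed point of 0 -> 01, 1 -> 0."""
    w = "0"
    while len(w) < length:
        w = "".join("01" if c == "0" else "0" for c in w)
    return w[:length]

def factor_complexity(w, n):
    """Number of distinct factors (substrings) of length n in w."""
    return len({w[i:i + n] for i in range(len(w) - n + 1)})

w = fibonacci_word(2000)
print([factor_complexity(w, n) for n in range(1, 6)])  # [2, 3, 4, 5, 6]
```

A prefix of length 2000 suffices here because the Fibonacci word is linearly recurrent, so every short factor appears early.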
In this chapter we examine the representational and algorithmic aspects of a class of graph-theoretic models for multiplayer games. Known broadly as graphical games, these models specify restrictions on the direct payoff influences among the player population. In addition to a number of nice computational properties, these models have close connections to well-studied graphical models for probabilistic inference in machine learning and statistics.
Introduction
Representing multiplayer games with large player populations in the normal form is undesirable for both practical and conceptual reasons. On the practical side, the number of parameters that must be specified grows exponentially with the size of the population. On the conceptual side, the normal form may fail to capture structure that is present in the strategic interaction, and which can aid understanding of the game and computation of its equilibria. For this reason, there have been many proposals for parametric multiplayer game representations that are more succinct than the normal form, and attempt to model naturally arising structural properties. Examples include congestion and potential games and related models (Monderer and Shapley, 1996; Rosenthal, 1973).
Graphical games are a representation of multiplayer games meant to capture and exploit locality or sparsity of direct influences. They are most appropriate for large population games in which the payoffs of each player are determined by the actions of only a small subpopulation. As such, they form a natural counterpart to earlier parametric models.
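The succinctness gain can be quantified with a back-of-the-envelope count (binary actions assumed for illustration): the normal form stores a payoff for each player under every joint action, while a graphical game stores, per player, a table over that player's own action and its neighbors' actions only:

```python
def normal_form_params(n, actions=2):
    """Payoff entries in the normal form: one per player per joint action."""
    return n * actions ** n

def graphical_game_params(degrees, actions=2):
    """Entries in a graphical game: each player's table ranges over its own
    action and those of its graph neighbors."""
    return sum(actions ** (d + 1) for d in degrees)

# A 100-player cycle: every player has exactly two neighbors.
print(normal_form_params(100))            # 100 * 2**100, astronomically large
print(graphical_game_params([2] * 100))   # 100 * 2**3 = 800
```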
This chapter studies the inefficiency of equilibria in noncooperative routing games, in which self-interested players route traffic through a congested network. Our goals are threefold: to introduce the most important models and examples of routing games; to survey optimal bounds on the price of anarchy in these models; and to develop proof techniques that are useful for bounding the inefficiency of equilibria in a range of applications.
Introduction
A majority of the current literature on the inefficiency of equilibria concerns routing games. One reason for this popularity is that routing games shed light on an important practical problem: how to route traffic in a large communication network, such as the Internet, that has no central authority. The routing games studied in this chapter are relevant for networks with “source routing,” in which each end user chooses a full route for its traffic, and also for networks in which traffic is routed in a distributed, congestion-sensitive manner. Section 18.6 contains further details on these applications.
This chapter focuses on two different models of routing games, although the inefficiency of equilibria has been successfully quantified in a range of others (see Section 18.6). The first model, nonatomic selfish routing, is a natural generalization of Pigou's example (Example 17.1) to more complex networks. The modifier “nonatomic” refers to the assumption that there are a very large number of players, each controlling a negligible fraction of the overall traffic.
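Pigou's example can be worked through numerically (a sketch of the standard computation, not code from the text): one unit of nonatomic traffic chooses between a link of constant cost 1 and a link whose cost equals its own congestion x.

```python
def total_cost(x2):
    """Average travel time when a fraction x2 of the traffic uses the
    congestible link (cost x) and the rest uses the constant-cost link."""
    x1 = 1.0 - x2
    return x1 * 1.0 + x2 * x2

nash_cost = total_cost(1.0)  # at equilibrium, all traffic takes the x link
opt_cost = min(total_cost(i / 1000) for i in range(1001))  # grid search
print(nash_cost, opt_cost, nash_cost / opt_cost)  # 1.0 0.75 1.3333...
```

The optimal flow splits evenly between the links, so the price of anarchy in this example is 4/3.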
As the Second World War was coming to its end, John von Neumann, arguably the foremost mathematician of that time, was busy initiating two intellectual currents that would shape the rest of the twentieth century: game theory and algorithms. In 1944 (16 years after the minimax theorem) he published, with Oskar Morgenstern, his Theory of Games and Economic Behavior, thus founding not only game theory but also utility theory and microeconomics. Two years later he wrote his draft report on the EDVAC, inaugurating the era of the digital computer and its software and its algorithms. Von Neumann wrote in 1952 the first paper in which a polynomial algorithm was hailed as a meaningful advance. And, he was the recipient, shortly before his early death four years later, of Gödel's letter in which the P vs. NP question was first discussed.
Could von Neumann have anticipated that his twin creations would converge half a century later? He was certainly far ahead of his contemporaries in his conception of computation as something dynamic, ubiquitous, and enmeshed in society, almost organic – witness his self-reproducing automata, his fault-tolerant network design, and his prediction that computing technology will advance in lock-step with the economy (for which he had already postulated exponential growth in his 1937 Vienna Colloquium paper).
Combinatorial polynomial time algorithms are presented for finding equilibrium prices and allocations for the linear utilities case of the Fisher and Arrow–Debreu models using the primal-dual schema and an auction-based approach, respectively. An interesting feature of the first algorithm is that it finds an optimal solution to a nonlinear convex program, the Eisenberg–Gale program.
Resource allocation markets in Kelly's model are also discussed and a strongly polynomial combinatorial algorithm is presented for one of them.
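Kelly's model admits a closed form in the single-link case, which makes a compact illustration (a standard fact, not the algorithm of this chapter): maximizing $\sum_i w_i \log x_i$ subject to $\sum_i x_i = C$ allocates capacity in proportion to the weights.

```python
import math

def proportional_allocation(weights, capacity):
    """Optimum of sum_i w_i*log(x_i) subject to sum_i x_i = capacity:
    each user receives capacity in proportion to its weight (bid)."""
    total = sum(weights)
    return [w * capacity / total for w in weights]

def objective(weights, alloc):
    return sum(w * math.log(x) for w, x in zip(weights, alloc))

alloc = proportional_allocation([1, 2, 3], 6)
print(alloc)  # [1.0, 2.0, 3.0]
# Any other feasible split, e.g. the equal one, scores strictly lower:
print(objective([1, 2, 3], alloc) > objective([1, 2, 3], [2, 2, 2]))  # True
```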
Introduction
Thinkers and philosophers have pondered over the notions of markets and money through the ages. The credit for initiating formal mathematical modeling and study of these notions is generally attributed to nineteenth-century economist Leon Walras (1874). The fact that Western economies are capitalistic had a lot to do with the overwhelming importance given to this study within mathematical economics – essentially, our most critical decision-making is relegated to pricing mechanisms. They largely determine the relative prices of goods and services, ensure that the economy is efficient, in that goods and services are made available to entities that produce items that are most in demand, and ensure a stable operation of the economy.
A central tenet in pricing mechanisms is that prices be such that demand equals supply; that is, the economy should operate at equilibrium. It is not surprising therefore that perhaps the most celebrated theorem within general equilibrium theory, the Arrow–Debreu Theorem, establishes precisely the existence of such prices under a very general model of the economy.
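The equilibrium condition is easy to illustrate in the simplest possible setting, a hypothetical single-good economy with linear demand and supply (the numbers below are made up for illustration):

```python
def clearing_price(a, b, c, d):
    """Price at which demand D(p) = a - b*p meets supply S(p) = c + d*p:
    setting D(p) = S(p) gives p* = (a - c) / (b + d)."""
    return (a - c) / (b + d)

p = clearing_price(a=10, b=1, c=2, d=1)   # p* = 4.0
assert 10 - 1 * p == 2 + 1 * p            # demand equals supply at p*
print(p)  # 4.0
```

The Arrow–Debreu Theorem guarantees the existence of such prices in a far more general setting, with many goods and interacting agents.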
In combinatorial auctions, a large number of items are auctioned concurrently and bidders are allowed to express preferences on bundles of items. This is preferable to selling each item separately when there are dependencies between the different items. This problem has direct applications, may be viewed as a general abstraction of complex resource allocation, and is the paradigmatic problem on the interface of economics and computer science. We give a brief survey of this field, concentrating on theoretical treatment.
Introduction
A large part of computer science as well as a large part of economics may be viewed as addressing the “allocation problem”: how should we allocate “resources” among the different possible uses of these resources. An auction of a single item may be viewed as a simple abstraction of this question: we have a single indivisible resource, and two (or more) players desire using it – who should get it? Being such a simple and general abstraction explains the pivotal role of simple auctions in mechanism design theory.
From a similar point of view, “combinatorial auctions” abstract this issue when multiple resources are involved: how do I allocate a collection of interrelated resources? In general, the “interrelations” of the different resources may be combinatorially complex, and thus handling them requires effective handling of this complexity. It should thus come as no surprise that the field of “combinatorial auctions” – the subject of this chapter – is gaining a central place in the interface between computer science and economics.
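The allocation problem at the heart of a combinatorial auction can be stated as a tiny brute-force sketch (hypothetical bids; real winner determination is NP-hard and uses far more sophisticated methods):

```python
from itertools import combinations

def winner_determination(bids):
    """Exhaustive search for the value-maximizing set of pairwise-disjoint
    bundle bids; bids is a list of (frozenset_of_items, value) pairs.
    Exponential in the number of bids -- for illustration only."""
    best_value, best = 0, []
    for k in range(len(bids) + 1):
        for chosen in combinations(bids, k):
            used = set()
            feasible = True
            for bundle, _ in chosen:
                if used & bundle:          # bundle overlaps a sold item
                    feasible = False
                    break
                used |= bundle
            value = sum(v for _, v in chosen)
            if feasible and value > best_value:
                best_value, best = value, list(chosen)
    return best_value, best

bids = [(frozenset("ab"), 5), (frozenset("bc"), 4), (frozenset("c"), 3)]
print(winner_determination(bids)[0])  # 8: sell {a,b} for 5 and {c} for 3
```

Note that the second bid loses even though it outbids the third: the bundles {a,b} and {c} together extract more total value, which is exactly the dependency structure that selling items separately cannot capture.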