This book is devoted to five main principles of algorithm design: divide and conquer, greedy algorithms, thinning, dynamic programming, and exhaustive search. These principles are presented using Haskell, a purely functional language, leading to simpler explanations and shorter programs than would be obtained with imperative languages. Carefully selected examples, both new and standard, reveal the commonalities and highlight the differences between algorithms. The algorithm developments use equational reasoning where applicable, clarifying the applicability conditions and correctness arguments. Every chapter concludes with exercises (nearly 300 in total), each with complete answers, allowing the reader to consolidate their understanding and apply the techniques to a range of problems. The book serves students (both undergraduate and postgraduate), researchers, teachers, and professionals who want to know more about what goes into a good algorithm and how such algorithms can be expressed in purely functional terms.
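As a taste of the style the book advocates, here is a minimal divide-and-conquer sketch in Haskell (illustrative only, not taken from the book): mergesort splits its input, recursively sorts the halves, and merges the results.

```haskell
-- Divide and conquer: split, solve the halves recursively, combine.
msort :: Ord a => [a] -> [a]
msort []  = []
msort [x] = [x]
msort xs  = merge (msort ys) (msort zs)
  where
    (ys, zs) = splitAt (length xs `div` 2) xs   -- divide
    merge [] bs = bs                            -- combine two sorted lists
    merge as [] = as
    merge (a:as) (b:bs)
      | a <= b    = a : merge as (b:bs)
      | otherwise = b : merge (a:as) bs
```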
Let $\{D_M\}_{M\geq 0}$ be the $n$-vertex random directed graph process, where $D_0$ is the empty directed graph on $n$ vertices, and subsequent directed graphs in the sequence are obtained by the addition of a new directed edge uniformly at random. For each $\varepsilon > 0$, we show that, almost surely, any directed graph $D_M$ with minimum in- and out-degree at least 1 is not only Hamiltonian (as shown by Frieze), but remains Hamiltonian when edges are removed, as long as at most $1/2-\varepsilon$ of both the in- and out-edges incident to each vertex are removed. We say such a directed graph is $(1/2-\varepsilon)$-resiliently Hamiltonian. Furthermore, for each $\varepsilon > 0$, we show that, almost surely, each directed graph $D_M$ in the sequence is not $(1/2+\varepsilon)$-resiliently Hamiltonian.
This improves a result of Ferber, Nenadov, Noever, Peter and Škorić who showed, for each $\varepsilon > 0$, that the binomial random directed graph $D(n,p)$ is almost surely $(1/2-\varepsilon)$-resiliently Hamiltonian if $p=\omega(\log^8n/n)$.
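For illustration only (not from the paper), here is a small Haskell sketch of the random directed graph process, computing the hitting time at which the minimum in- and out-degree first reaches 1; the result above says that, almost surely, from this point on every digraph in the process is $(1/2-\varepsilon)$-resiliently Hamiltonian. The function names are ours.

```haskell
import System.Random (randomRIO)

-- One step of the process: add a uniformly random directed edge (u, v),
-- u /= v, not already present. (Assumes n >= 2 and a non-complete digraph,
-- so a fresh edge exists.)
addRandomEdge :: Int -> [(Int, Int)] -> IO [(Int, Int)]
addRandomEdge n es = do
  u <- randomRIO (1, n)
  v <- randomRIO (1, n)
  if u /= v && (u, v) `notElem` es
    then return ((u, v) : es)
    else addRandomEdge n es

-- Smallest M such that D_M has minimum in- and out-degree at least 1.
hittingTime :: Int -> IO Int
hittingTime n = go []
  where
    go es
      | all covered [1 .. n] = return (length es)
      | otherwise            = addRandomEdge n es >>= go
      where
        covered w = any ((== w) . fst) es   -- w has an out-edge
                 && any ((== w) . snd) es   -- w has an in-edge
```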
For fixed graphs $F_1,\ldots,F_r$, we prove an upper bound on the threshold function for the property that $G(n,p) \to (F_1,\ldots,F_r)$. This establishes the 1-statement of a conjecture of Kohayakawa and Kreuter.
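For context (our summary, not part of the abstract): assuming $m_2(F_1) \geq \cdots \geq m_2(F_r)$, the conjecture as usually stated predicts the threshold $n^{-1/m_2(F_1,F_2)}$, where the asymmetric 2-density is

```latex
m_2(F_1,F_2) = \max\left\{ \frac{e(H)}{v(H)-2+1/m_2(F_2)} : H \subseteq F_1,\ e(H) \geq 1 \right\},
\qquad
m_2(F) = \max\left\{ \frac{e(H)-1}{v(H)-2} : H \subseteq F,\ v(H) \geq 3 \right\}.
```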
A classical result of Erdős and, independently, of Bondy and Simonovits [3] says that the maximum number of edges in an $n$-vertex graph not containing $C_{2k}$, the cycle of length $2k$, is $O(n^{1+1/k})$. Simonovits established a corresponding supersaturation result for $C_{2k}$'s, showing that there exist positive constants $C, c$ depending only on $k$ such that every $n$-vertex graph $G$ with $e(G) \geq Cn^{1+1/k}$ contains at least $c(e(G)/v(G))^{2k}$ copies of $C_{2k}$, a count that is tight for the random graph (up to a multiplicative constant).
In this paper we extend Simonovits' result to a supersaturation result of r-uniform linear cycles of even length in r-uniform linear hypergraphs. Our proof is self-contained and includes the r = 2 case. As an auxiliary tool, we develop a reduction lemma from general host graphs to almost-regular host graphs that can be used for other supersaturation problems, and may therefore be of independent interest.
In this paper we propose a polynomial-time deterministic algorithm for approximately counting the k-colourings of the random graph G(n, d/n), for constant d>0. In particular, our algorithm computes in polynomial time a $(1\pm n^{-\Omega(1)})$-approximation of the so-called ‘free energy’ of the k-colourings of G(n, d/n), for $k\geq (1+\varepsilon) d$ with probability $1-o(1)$ over the graph instances.
Our algorithm uses spatial correlation decay to compute numerical estimates of marginals of the Gibbs distribution. Spatial correlation decay has been used before in various schemes for deterministic counting. So far, algorithms have exploited a certain kind of set-to-point correlation decay, e.g. the so-called Gibbs uniqueness. Here we deviate from this setting and exploit a point-to-point correlation decay. The spatial mixing requirement is that, for a pair of vertices, the correlation between their configurations becomes weaker as their distance increases.
Furthermore, our approach generalizes in that it allows us to compute the Gibbs marginals for small sets of nearby vertices. Also, we establish a connection between the fluctuations of the number of colourings of G(n, d/n) and the fluctuations of the number of short cycles and edges in the graph.
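For intuition (an illustrative sketch, not the paper's algorithm): on a tree, the Gibbs marginals of proper k-colourings can be computed exactly by a subtree recursion; the algorithm above approximates such marginals on G(n, d/n), with point-to-point correlation decay controlling the error. All names below are ours.

```haskell
import qualified Data.Map as M

type Graph = M.Map Int [Int]  -- adjacency lists of a tree on vertices 1..n

-- Number of proper k-colourings of the subtree rooted at v (with parent p,
-- sentinel 0 for the root) given that v receives colour c: multiply, over
-- the children of v, the number of extensions avoiding colour c.
subtreeCount :: Int -> Graph -> Int -> Int -> Int -> Integer
subtreeCount k g p v c =
  product [ sum [ subtreeCount k g v u c' | c' <- [1 .. k], c' /= c ]
          | u <- M.findWithDefault [] v g, u /= p ]

-- Exact Gibbs marginal (on a tree) that the root receives colour c.
rootMarginal :: Int -> Graph -> Int -> Int -> Rational
rootMarginal k g root c = fromIntegral (count c) / fromIntegral z
  where
    count = subtreeCount k g 0 root
    z     = sum [ count c' | c' <- [1 .. k] ]  -- partition function Z
```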
We study structural properties of graphs with bounded clique number and high minimum degree. In particular, we show that there exists a function L = L(r,ɛ) such that every $K_r$-free graph G on n vertices with minimum degree at least ((2r−5)/(2r−3)+ɛ)n is homomorphic to a $K_r$-free graph on at most L vertices. It is known that the required minimum degree condition is approximately best possible for this result.
For r = 3 this result was obtained by Łuczak (2006) and, more recently, Goddard and Lyle (2011) deduced the general case from Łuczak’s result. Łuczak’s proof was based on an application of Szemerédi’s regularity lemma and, as a consequence, it only gave rise to a tower-type bound on L(3, ɛ). The proof presented here replaces the application of the regularity lemma by a probabilistic argument, which yields a bound for L(r, ɛ) that is doubly exponential in poly(1/ɛ).
We study random unlabelled k-trees by combining the colouring approach by Gainer-Dewar and Gessel (2014) with the cycle-pointing method by Bodirsky, Fusy, Kang and Vigerske (2011). Our main applications are Gromov–Hausdorff–Prokhorov and Benjamini–Schramm limits that describe their asymptotic geometric shape on a global and local scale as the number of (k + 1)-cliques tends to infinity.
We prove an essentially sharp $\tilde \Omega (n/k)$ lower bound on the k-round distributional complexity of the k-step pointer chasing problem under the uniform distribution, when Bob speaks first. This is an improvement over Nisan and Wigderson’s $\tilde \Omega (n/{k^2})$ lower bound, and essentially matches the randomized lower bound proved by Klauck. The proof is information-theoretic, and a key part of it is using asymmetric triangular discrimination instead of total variation distance; this idea may be useful elsewhere.
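To fix ideas, here is the underlying function in a simplified formulation (both pointer maps on the same ground set; names are ours): Alice holds fA, Bob holds fB, and the output is the vertex reached after k alternating pointer steps starting from a fixed vertex, Alice's map first. Intuitively, with only k rounds the player to speak never holds the pointer to be followed next, which is what the lower bound exploits.

```haskell
-- k-step pointer chasing: follow the two pointer maps alternately,
-- Alice's map first, starting from a given vertex.
chase :: Int -> (Int -> Int) -> (Int -> Int) -> Int -> Int
chase k fA fB start = foldl (\x f -> f x) start (take k (cycle [fA, fB]))
```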
We give an efficient algorithm that, given a graph G and a partition $V_1,\ldots,V_m$ of its vertex set, finds either an independent transversal (an independent set $\{v_1,\ldots,v_m\}$ in G such that ${v_i} \in {V_i}$ for each i), or a subset ${\cal B}$ of vertex classes such that the subgraph of G induced by $\bigcup {\cal B}$ has a small dominating set. A non-algorithmic proof of this result has been known for a number of years and has been used to solve many other problems. Thus we are able to give algorithmic versions of many of these applications, a few of which we describe explicitly here.
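As a small grounded illustration (a verifier, not the algorithm of the paper; names are ours), checking the first kind of output is straightforward:

```haskell
import qualified Data.Set as S

-- Check that 'pick' chooses one vertex from each class and that no edge
-- of G joins two chosen vertices, i.e. that it is an independent transversal.
isIndependentTransversal
  :: [(Int, Int)]  -- edge list of G
  -> [[Int]]       -- vertex classes V_1, ..., V_m
  -> [Int]         -- one candidate vertex per class
  -> Bool
isIndependentTransversal edges classes pick =
     length pick == length classes
  && and (zipWith elem pick classes)                             -- transversal
  && all (\(u, v) -> not (S.member u s && S.member v s)) edges   -- independent
  where
    s = S.fromList pick
```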
The last two decades have seen a wave of exciting new developments in the theory of algorithmic randomness and its applications to other areas of mathematics. This volume surveys much of the recent work that has not been included in published volumes until now. It contains a range of articles on algorithmic randomness and its interactions with closely related topics such as computability theory and computational complexity, as well as wider applications in areas of mathematics including analysis, probability, and ergodic theory. In addition to being an indispensable reference for researchers in algorithmic randomness, the unified view of the theory presented here makes this an excellent entry point for graduate students and other newcomers to the field.
We present an overview of higher randomness and its recent developments. After an introduction, we provide in the second section some background on higher computability, presenting in particular $\Pi^1_1$ and $\Sigma^1_1$ sets from the viewpoint of the computability theorist. In the third section we give an overview of the different higher randomness classes: $\Delta^1_1$-randomness, $\Pi^1_1$-Martin-Löf randomness, higher weak-2 randomness, higher difference randomness, and $\Pi^1_1$-randomness. We then move on to study each of these classes, separating them and inspecting their respective lowness classes. We pay particular attention to $\Pi^1_1$-Martin-Löf randomness and $\Pi^1_1$-randomness: the former is the higher analogue of the most well-known and studied class in classical algorithmic randomness. We show in particular how to lift the main classical randomness theorems to the higher setting by adding continuity to higher reductions and relativisations. The latter presents, as we will see, many remarkable properties and has no analogue in classical randomness. Finally, in the eighth section we study randomness along a higher hierarchy of complexity of sets, motivated by the notion of higher weak-2 randomness. We show that this hierarchy eventually collapses.
In this introductory survey, we provide an overview of the major developments of algorithmic randomness with an eye towards the historical development of the discipline. First we give a brief introduction to computability theory and the underlying mathematical concepts that later appear in the survey. Next we selectively cover four broad periods in which the primary developments in algorithmic randomness occurred: (1) the mid-1960s to mid-1970s, in which the main definitions of algorithmic randomness were laid out and the basic properties of random sequences were established; (2) the 1980s through the 1990s, which featured intermittent and important work from a handful of researchers; (3) the 2000s, during which there was an explosion of results as the discipline matured into a fully-fledged subbranch of computability theory; and (4) the early 2010s, in which ties between algorithmic randomness and other subfields of mathematics were discovered. The aim of this survey is to provide a point of entry for newcomers to the field and a useful reference for practitioners.
The halting probability of a Turing machine was introduced by Chaitin, who also proved that it is an algorithmically random real number and named it Omega. Since his seminal work, many popular expositions have appeared, mainly focusing on the metamathematical or philosophical significance of this number (or arguing against it). At the same time, a rich mathematical theory exploring the properties of Chaitin's Omega has been brewing in various technical papers, quietly revealing the significance of this number for many aspects of contemporary algorithmic information theory. The purpose of this survey is to expose these developments and tell a story about Omega that outlines its multi-faceted mathematical properties and its roles in algorithmic randomness.
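For reference, the standard definition: for a universal prefix-free machine $U$, Omega is the probability that $U$ halts on a uniformly random binary input,

```latex
\Omega_U \;=\; \sum_{p \,:\, U(p)\downarrow} 2^{-|p|},
```

where $U(p)\downarrow$ means that $U$ halts on the program $p$.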
This is a survey of constructive and computable measure theory with an emphasis on the close connections with algorithmic randomness. We give a brief history of constructive measure theory from Brouwer to the present, emphasizing how Schnorr randomness is the randomness notion implicit in the work of Brouwer, Bishop, Demuth, and others. We survey a number of recent results showing that classical almost everywhere convergence theorems can be used to characterize many of the common randomness notions including Schnorr randomness, computable randomness, and Martin-Löf randomness. Last, we go into more detail about computable measure theory, showing how all the major approaches are basically equivalent (even though the definitions can vary greatly).
In this survey, we lay out the central results in the study of algorithmic randomness with respect to biased probability measures. The first part of the survey covers biased randomness with respect to computable measures. The central technique in this area is the transformation of random sequences via certain randomness-preserving Turing functionals, which can be used to induce non-uniform probability measures. The second part of the survey covers biased randomness with respect to non-computable measures, with an emphasis on the work of Reimann and Slaman on the topic, as well as the contributions of Miller and Day in developing Levin's notion of a neutral measure. We also discuss blind randomness as well as van Lambalgen's theorem for both computable and non-computable measures. As there is no currently-available source covering all of these topics, this survey fills a notable gap in the algorithmic randomness literature.
Ergodic theory is concerned with dynamical systems: collections of points together with a rule governing how the system changes over time. Much of the theory concerns the long-term behavior of typical points, that is, how points behave over time once anomalous behavior from a small number of exceptional points is ignored. Computability theory has a family of precise notions of randomness: a point is "algorithmically random" if no computable test can demonstrate that it is not random. These notions capture something essential about the informal notion of randomness: algorithmically random points are precisely the ones that have typical orbits in computable dynamical systems. For computable dynamical systems, with or without assumptions of ergodicity, the points outside the measure-zero exceptional sets of various theorems (such as Poincaré's Recurrence Theorem or the pointwise ergodic theorem) are precisely the Schnorr or Martin-Löf random points identified in algorithmic randomness.
We discuss the different contexts in which relativization occurs in randomness and the effect that the relativization chosen has on the results we can obtain. We study several characterizations of the K-trivials in terms of concepts ranging from cuppability to density, and we consider a uniform relativization for randomness that gives us more natural results for computable randomness, Schnorr randomness, and Kurtz randomness than the classical relativization does (the relativization for Martin-Löf randomness is unaffected by this change). We then evaluate the relativizations we have considered and suggest some avenues for further work.