We demonstrate a quasipolynomial-time deterministic approximation algorithm for the partition function of a Gibbs point process interacting via a stable potential. This result holds for all activities $\lambda$ for which the partition function satisfies a zero-free assumption in a neighbourhood of the interval $[0,\lambda ]$. As a corollary, for all finite-range stable potentials, we obtain a quasipolynomial-time deterministic algorithm for all $\lambda \lt 1/(e^{B + 1} \hat C_\phi )$, where $\hat C_\phi$ is a temperedness parameter and $B$ is the stability constant of $\phi$. In the special case of a repulsive potential such as the hard-sphere gas, we improve the range of activity by a factor of at least $e^2$ and obtain a quasipolynomial-time deterministic approximation algorithm for all $\lambda \lt e/\Delta _\phi$, where $\Delta _\phi$ is the potential-weighted connective constant of the potential $\phi$. Our algorithm approximates coefficients of the cluster expansion of the partition function and uses the interpolation method of Barvinok to extend this approximation throughout the zero-free region.
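To make the interpolation step concrete, here is a schematic of Barvinok's method under a zero-free assumption; the notation and the informal truncation bound are ours, not the paper's. If $Z(\lambda)$ is zero-free in an open neighbourhood of $[0,\lambda_0]$, then $\log Z$ is analytic there, and a truncated Taylor expansion already gives a multiplicative $\varepsilon$-approximation:
\[
\log Z(\lambda) \;\approx\; \sum_{k=1}^{m} \frac{(\log Z)^{(k)}(0)}{k!}\,\lambda^k,
\qquad m = O\!\left(\log (N/\varepsilon)\right),
\]
where $N$ is a measure of instance size. The low-order derivatives at $0$ are exactly what the cluster expansion provides, and computing them is what dominates the quasipolynomial running time.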
Using the dichotomy of structure and pseudorandomness as a central theme, this accessible text provides a modern introduction to extremal graph theory and additive combinatorics. Readers will explore central results in additive combinatorics, notably the cornerstone theorems of Roth, Szemerédi, Freiman, and Green–Tao, and will gain additional insights into these ideas through graph theoretic perspectives. Topics discussed include the Turán problem, Szemerédi's graph regularity method, pseudorandom graphs, graph limits, graph homomorphism inequalities, Fourier analysis in additive combinatorics, the structure of set addition, and the sum-product problem. Important combinatorial, graph theoretic, analytic, Fourier, algebraic, and geometric methods are highlighted. Students will appreciate the chapter summaries, many figures and exercises, and freely available lecture videos on MIT OpenCourseWare. Meant as an introduction for students and researchers studying combinatorics, theoretical computer science, analysis, probability, and number theory, the text assumes only basic familiarity with abstract algebra, analysis, and linear algebra.
We introduce a formula for translating any upper bound on the percolation threshold of a lattice $G$ into a lower bound on the exponential growth rate of lattice animals $a(G)$ and vice versa. We exploit this in both directions. We obtain the rigorous lower bound ${\dot{p}_c}({\mathbb{Z}}^3)\gt 0.2522$ for 3-dimensional site percolation. We also improve on the best known asymptotic bounds on $a({\mathbb{Z}}^d)$ as $d\to \infty$. Our formula remains valid if instead of lattice animals we enumerate certain subspecies called interfaces. Enumerating interfaces leads to functional duality formulas that are tightly connected to percolation and are not valid for lattice animals, as well as to strict inequalities for the percolation threshold.
Incidentally, we prove that the rate of the exponential decay of the cluster size distribution of Bernoulli percolation is a continuous function of $p\in (0,1)$.
In this note, we give a precise description of the limiting empirical spectral distribution for the non-backtracking matrices for an Erdős-Rényi graph $G(n,p)$ assuming $np/\log n$ tends to infinity. We show that derandomizing part of the non-backtracking random matrix simplifies the spectrum considerably, and then, we use Tao and Vu’s replacement principle and the Bauer-Fike theorem to show that the partly derandomized spectrum is, in fact, very close to the original spectrum.
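For readers unfamiliar with the object, the non-backtracking matrix is indexed by the directed edges of the graph. The following minimal sketch (our illustration, not taken from the paper) builds it for a small graph and prints its spectrum:

```python
import numpy as np

def non_backtracking_matrix(edges):
    """Build the non-backtracking matrix B of an undirected graph.

    B is indexed by the 2|E| directed edges ("darts");
    B[(u,v),(x,y)] = 1 exactly when the walk u -> v -> y does not
    backtrack, i.e. v == x and y != u.
    """
    darts = list(edges) + [(v, u) for u, v in edges]
    B = np.zeros((len(darts), len(darts)))
    for i, (u, v) in enumerate(darts):
        for j, (x, y) in enumerate(darts):
            if v == x and y != u:
                B[i, j] = 1.0
    return B

# Example: a 4-cycle, whose non-backtracking walks just circulate,
# so all eigenvalues of B lie on the unit circle.
B = non_backtracking_matrix([(0, 1), (1, 2), (2, 3), (3, 0)])
print(np.round(np.linalg.eigvals(B), 3))
```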
A graph is called $k$-critical if its chromatic number is $k$ but every proper subgraph has chromatic number less than $k$. An old and important problem in graph theory asks for the maximum number of edges in an $n$-vertex $k$-critical graph. This remains wide open for every integer $k\geq 4$. Using a structural characterisation of Greenwell and Lovász and an extremal result of Simonovits, Stiebitz proved in 1987 that for $k\geq 4$ and sufficiently large $n$, this maximum number is less than the number of edges in the $n$-vertex balanced complete $(k-2)$-partite graph. In this paper, we obtain the first improvement on this result in the past 35 years. Our proofs combine arguments from extremal graph theory with some structural analysis. A key lemma we use indicates a partial structure in dense $k$-critical graphs, which may be of independent interest.
In this paper we study a variation of the random $k$-SAT problem, called polarised random $k$-SAT, which contains both the classical random $k$-SAT model and the random version of monotone $k$-SAT, another well-known NP-complete version of SAT. In this model there is a polarisation parameter $p$: in half of the clauses each variable occurs negated with probability $p$ and pure otherwise, while in the other half the probabilities are interchanged. For $p=1/2$ we get the classical random $k$-SAT model, and at the other extreme we have the fully polarised model where $p=0$ or $p=1$. Here there are only two types of clauses: clauses where all $k$ variables occur pure, and clauses where all $k$ variables occur negated. That is, for $p=0$ and $p=1$, we get an instance of random monotone $k$-SAT.
We show that the threshold of satisfiability does not decrease as $p$ moves away from $\frac{1}{2}$ and thus that the satisfiability threshold for polarised random $k$-SAT with $p\neq \frac{1}{2}$ is an upper bound on the threshold for random $k$-SAT. Hence the satisfiability threshold for random monotone $k$-SAT is at least as large as for random $k$-SAT, and we conjecture that asymptotically, for a fixed $k$, the two thresholds coincide.
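A minimal sketch of how one might sample from the polarised model described above; the deterministic half-split and the helper's name are our illustrative choices, not the paper's:

```python
import random

def polarised_ksat(n, m, k, p, seed=None):
    """Sample a polarised random k-SAT instance (illustrative sketch).

    n variables, m clauses of width k, polarisation parameter p.
    In half of the clauses each variable is negated with probability p
    (and pure otherwise); in the other half the probabilities are
    interchanged.  p = 1/2 recovers classical random k-SAT; p = 0 or
    p = 1 gives random monotone k-SAT.
    """
    rng = random.Random(seed)
    clauses = []
    for j in range(m):
        variables = rng.sample(range(1, n + 1), k)
        neg_prob = p if j < m // 2 else 1 - p  # the two halves swap polarity
        clauses.append([-v if rng.random() < neg_prob else v
                        for v in variables])
    return clauses

# Fully polarised (p = 0): half the clauses are all-pure, half all-negated.
print(polarised_ksat(n=5, m=4, k=3, p=0, seed=1))
```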
Given $\alpha \gt 0$ and an integer $\ell \geq 5$, we prove that every sufficiently large $3$-uniform hypergraph $H$ on $n$ vertices in which every two vertices are contained in at least $\alpha n$ edges contains a copy of $C_\ell ^{-}$, a tight cycle on $\ell$ vertices minus one edge. This improves a previous result by Balogh, Clemen, and Lidický.
Community detection is one of the most important methodological fields of network science, and one which has attracted a significant amount of attention over the past decades. This area deals with the automated division of a network into fundamental building blocks, with the objective of providing a summary of its large-scale structure. Despite its importance and widespread adoption, there is a noticeable gap between what is arguably the state of the art and the methods that are actually used in practice in a variety of fields. This Element attempts to address this discrepancy by dividing existing methods according to whether they have a 'descriptive' or an 'inferential' goal. While descriptive methods find patterns in networks based on context-dependent notions of community structure, inferential methods articulate a precise generative model, and attempt to fit it to data. In this way, they are able to provide insights into formation mechanisms and separate structure from noise. This title is also available as open access on Cambridge Core.
We show that the size-Ramsey number of the $\sqrt{n} \times \sqrt{n}$ grid graph is $O(n^{5/4})$, improving a previous bound of $n^{3/2 + o(1)}$ by Clemens, Miralaei, Reding, Schacht, and Taraz.
This chapter addresses a basic integer encoding problem whose impact on the total memory footprint and speed performance of the underlying application is too easily underestimated or neglected. The problem consists of squeezing the space (in bits) required to store an increasing sequence of integers, and then supporting efficient query operations such as decompressing the sequence from the beginning or from some other position, checking whether an integer occurs in the sequence, or finding the smallest integer larger than the queried one. This problem occurs in several common applications, such as in the storage of the posting lists of search engines, or of the adjacency lists of trees and graphs, or of the encoding of sequences of offsets (pointers). The integer coders here discussed, analyzed, and illustrated with many running examples are Elias’ γ- and δ-codes, Rice’s code, PForDelta code, variable-byte code, (s, c)-dense codes, interpolative code, and, finally, the very elegant and powerful Elias–Fano code.
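As a small illustration of one of the coders just listed, here is a sketch of Elias' γ-code together with the standard gap-encoding of an increasing sequence; the example is ours, not taken from the chapter:

```python
def elias_gamma_encode(x: int) -> str:
    """Elias gamma-code of a positive integer x: the binary length of x
    minus one, written in unary as zeros, followed by x's binary digits.
    Uses 2*floor(log2 x) + 1 bits overall."""
    assert x >= 1
    binary = bin(x)[2:]                  # binary representation of x
    return "0" * (len(binary) - 1) + binary

def elias_gamma_decode(bits: str) -> int:
    """Decode a single gamma-coded integer from the front of a bit string."""
    zeros = 0
    while bits[zeros] == "0":            # unary prefix: (length - 1) zeros
        zeros += 1
    return int(bits[zeros:2 * zeros + 1], 2)

# Increasing sequences are typically stored gap-encoded:
# gamma-code the first value and then the consecutive differences.
seq = [3, 7, 12, 13]
gaps = [seq[0]] + [b - a for a, b in zip(seq, seq[1:])]
print([elias_gamma_encode(g) for g in gaps])  # ['011', '00100', '00101', '1']
```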
In this chapter, we introduce the complex number system – an extension of the well-known real numbers. Complex numbers arise naturally in many problems in mathematics and science and allow us to study polynomial equations that may not have real solutions (such as $x^2 + 5 = 0$). As we will see, many familiar algebraic properties remain valid in the complex number system. In particular, we show that the complex numbers form a field and that the quadratic formula and the triangle inequality can still be used in this new number system.
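As a worked instance of the last claim, applying the quadratic formula to the example above with $a = 1$, $b = 0$, $c = 5$:
\[
x^2 + 5 = 0
\quad\Longrightarrow\quad
x \;=\; \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
\;=\; \frac{0 \pm \sqrt{-20}}{2}
\;=\; \pm i\sqrt{5},
\]
so the two solutions are purely imaginary and exist only in the complex number system.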
This chapter addresses a problem related to lists, the basic data structure underlying the design of many algorithms that manage interconnected items. It starts with an easy-to-state but I/O-inefficient solution derived from the optimal one designed for the classic RAM model; it then discusses increasingly sophisticated solutions that are elegant and efficient in the two-level memory model, and are still simple enough to be implemented with a few lines of code. The treatment of this problem will also allow us to highlight a subtle relation between parallel computation and I/O-efficient computation, which can be deployed to derive efficient disk-aware algorithms from efficient parallel algorithms.
This chapter deals with a classic educational problem, called the subarray sum. The specialty of this problem is that it has a simple formulation, which admits many useful variations and applications, and it allows a sequence of algorithmic solutions of increasing sophistication and elegance, which achieve a significant reduction in time and I/O complexity. The ultimate result is a linear-time and linear-I/O algorithm, which will allow the reader to enter into the “game” of time and I/O complexity evaluations. The chapter concludes with a discussion of some interesting variations of this problem which arise from computational biology applications and admit no immediate algorithmic solutions, thus stressing the fact that “five minutes’ thinking” is not enough for designing efficient algorithms.
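One standard route to the linear bound for the maximum subarray-sum variant is Kadane's algorithm, sketched below; this is our illustration and not necessarily the chapter's development. Because it reads the array in a single sequential scan, it is also optimal in I/Os:

```python
def max_subarray_sum(a):
    """Maximum sum over all non-empty contiguous subarrays of a,
    computed in one linear scan (Kadane's algorithm).

    best_here is the largest sum of a subarray ending at the current
    element: it either extends the previous subarray or restarts."""
    best_here = best = a[0]
    for x in a[1:]:
        best_here = max(x, best_here + x)  # extend, or restart at x
        best = max(best, best_here)
    return best

print(max_subarray_sum([2, -5, 6, 1, -2, 4, -8]))  # 9, from [6, 1, -2, 4]
```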