This chapter provides an introduction to the study of extremal problems in graph theory, beginning with the classical theorem of Turán. We next turn to bipartite graphs, beginning with trees and paths, and then proving upper bounds for complete bipartite graphs and lower bounds for even cycles. In the process, we take the opportunity to introduce the reader to the Erdős–Rényi random graph G(n, p), which is the central topic of Chapter 5, and also to the fundamental techniques of rotation-extension, double-counting using convexity, and the alteration method, using the inequalities of Markov and Chebyshev. In the second half of the chapter we introduce the notions of supersaturation and stability, which both play key roles in modern research, and prove the Erdős–Stone theorem, often called the fundamental theorem of extremal graph theory, in the case χ(H) = 3.
In this final chapter we study in more detail the properties of the Erdős–Rényi random graph G(n, p). The first half of the chapter introduces the concept of a threshold, covers fundamental results such as the threshold for containing a fixed subgraph and for being connected, and gives Erdős’ classic probabilistic construction of graphs with high girth and chromatic number. The second half of the chapter, which is aimed at Masters students, covers some more advanced material, including the problem of finding spanning subgraphs in G(n, p), the threshold for Hamiltonicity, and the emergence of the giant component. In particular, the final two sections provide striking examples of the power of pseudorandomness.
Over the past few decades, graph theory has developed into one of the central areas of modern mathematics, with close (and growing) connections to areas of pure mathematics such as number theory, probability theory, algebra and geometry, as well as to applied areas such as the theory of networks, machine learning, statistical physics, and biology. It is a young and vibrant area, with several major breakthroughs having occurred in just the past few years. This book offers the reader a gentle introduction to the fundamental concepts and techniques of graph theory, covering classical topics such as matchings, colourings and connectivity, alongside the modern and vibrant areas of extremal graph theory, Ramsey theory, and random graphs. The focus throughout is on beautiful questions, ideas and proofs, and on illustrating simple but powerful techniques, such as the probabilistic method, that should be part of every young mathematician's toolkit.
This chapter introduces sub-Gaussian and sub-exponential distributions and develops basic concentration inequalities. We prove the Hoeffding, Chernoff, Bernstein, and Khintchine inequalities. Applications include robust mean estimation and analyzing degrees in random graphs. The exercises explore the Mills ratio, small ball probabilities, Le Cam’s two-point method, the expander mixing lemma for random graphs, stochastic dominance, Orlicz norms, and the Bennett inequality.
It is well known that almost all graphs are canonizable by a simple combinatorial routine known as colour refinement, also referred to as the 1-dimensional Weisfeiler–Leman algorithm. With high probability, this method assigns a unique label to each vertex of a random input graph and, hence, it is applicable only to asymmetric graphs. The strength of combinatorial refinement techniques becomes a subtle issue if the input graphs are highly symmetric. We prove that the combination of colour refinement and vertex individualization yields a canonical labelling for almost all circulant digraphs (i.e., Cayley digraphs of a cyclic group). This result provides the first evidence of good average-case performance of combinatorial refinement within the class of vertex-transitive graphs. Remarkably, we do not even need the full power of the colour refinement algorithm. We show that the canonical label of a vertex $v$ can be obtained just by counting walks of each length from $v$ to an individualized vertex. Our analysis also implies that almost all circulant graphs are compact in the sense of Tinhofer, that is, their polytopes of fractional automorphisms are integral. Finally, we show that a canonical Cayley representation can be constructed for almost all circulant graphs by the more powerful 2-dimensional Weisfeiler–Leman algorithm.
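As a concrete illustration of the walk-counting labelling described above, here is a minimal Python sketch (function names are ours, not from the paper): each vertex $v$ is labelled by the vector of counts of walks of length $1,\dots,n-1$ from $v$ to a fixed individualized vertex, read off from powers of the adjacency matrix.

```python
import numpy as np

def circulant_digraph(n, connection_set):
    """Adjacency matrix of the Cayley digraph of Z_n with the given connection set."""
    A = np.zeros((n, n), dtype=np.int64)
    for i in range(n):
        for s in connection_set:
            A[i, (i + s) % n] = 1
    return A

def walk_count_labels(A, u):
    """Label each vertex v by the vector of walk counts from v to the
    individualized vertex u, one entry per walk length 1..n-1."""
    n = A.shape[0]
    labels = {v: [] for v in range(n)}
    P = np.eye(n, dtype=np.int64)
    for _ in range(1, n):
        P = P @ A  # P[v, u] is now the number of walks of the current length
        for v in range(n):
            labels[v].append(int(P[v, u]))
    return {v: tuple(l) for v, l in labels.items()}
```

When all label vectors are distinct, sorting the vertices by label yields a canonical labelling; for instance, for the directed 5-cycle (connection set {1}) with u = 0, every vertex receives a different vector.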
We consider the random series–parallel graph introduced by Hambly and Jordan (2004 Adv. Appl. Probab. 36, 824–838), which is a hierarchical graph with a parameter $p\in [0, 1]$. The graph is built recursively: at each step, every edge in the graph is either replaced with probability p by a series of two edges, or with probability $1-p$ by two parallel edges, and the replacements are independent of each other and of everything up to then. At the nth step of the recursive procedure, the distance between the extremal points on the graph is denoted by $D_n (p)$. It is known that $D_n(p)$ possesses a phase transition at $p=p_c := \frac{1}{2}$; more precisely, $\frac{1}{n}\log {\mathbb{E}}[D_n(p)] \to \alpha(p)$ when $n \to \infty$, with $\alpha(p) >0$ for $p>p_c$ and $\alpha(p)=0$ for $p\le p_c$. We study the exponent $\alpha(p)$ in the slightly supercritical regime $p=p_c+\varepsilon$. Our main result says that as $\varepsilon\to 0^+$, $\alpha(p_c+\varepsilon)$ behaves like $\sqrt{\zeta(2) \, \varepsilon}$, where $\zeta(2) := \frac{\pi^2}{6}$.
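The hierarchical construction above yields a simple distributional recursion for $D_n(p)$: a series replacement adds the distances across two independent sub-copies, while a parallel replacement takes their minimum. A minimal simulation sketch (cost is exponential in n, so only for small n; names are ours):

```python
import random

def sample_distance(n, p, rng=random):
    """Sample D_n(p), the distance between the two extremal points after
    n steps of the series-parallel replacement procedure."""
    if n == 0:
        return 1  # the seed graph is a single edge
    left = sample_distance(n - 1, p, rng)
    right = sample_distance(n - 1, p, rng)
    if rng.random() < p:
        return left + right        # series: path lengths add
    return min(left, right)        # parallel: the shortest route wins
```

The two extreme cases are deterministic sanity checks: with p = 1 every step is a series replacement and the distance doubles each time, while with p = 0 the distance stays at 1.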
We consider the problem of sequential matching in a stochastic block model with several classes of nodes and generic compatibility constraints. When the probabilities of connections do not scale with the size of the graph, we show that under the Ncond condition, a simple max-weight-type policy allows us to attain an asymptotically perfect matching, while no sequential algorithm attains perfect matching otherwise. The proof relies on a specific Markovian representation of the dynamics, combined with Lyapunov techniques.
The walk matrix associated to an $n\times n$ integer matrix $\mathbf{X}$ and an integer vector $b$ is defined by ${\mathbf{W}} \,:\!=\, (b,{\mathbf{X}} b,\ldots, {\mathbf{X}}^{n-1}b)$. We study limiting laws for the cokernel of $\mathbf{W}$ in the scenario where $\mathbf{X}$ is a random matrix with independent entries and $b$ is deterministic. Our first main result provides a formula for the distribution of the $p^m$-torsion part of the cokernel, as a group, when $\mathbf{X}$ has independent entries from a specific distribution. The second main result relaxes the distributional assumption and concerns the ${\mathbb{Z}}[x]$-module structure.
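The walk matrix itself is straightforward to compute; a minimal numpy sketch (helper name ours), where the cokernel in question is the quotient group ${\mathbb{Z}}^n / {\mathbf{W}}{\mathbb{Z}}^n$, which is finite of order $|\det {\mathbf{W}}|$ whenever the determinant is nonzero:

```python
import numpy as np

def walk_matrix(X, b):
    """Return W = (b, Xb, ..., X^{n-1} b) with the Krylov iterates as columns."""
    X = np.asarray(X, dtype=np.int64)
    cols = [np.asarray(b, dtype=np.int64)]
    for _ in range(X.shape[0] - 1):
        cols.append(X @ cols[-1])  # next iterate X^k b
    return np.column_stack(cols)
```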
The motivation for this work arises from an open problem in spectral graph theory, which asks to show that random graphs are often determined up to isomorphism by their (generalised) spectrum. Indeed, sufficient conditions for generalised spectral determinacy can be stated in terms of the cokernel of a walk matrix. Extensions of our results could potentially be used to determine how often those conditions are satisfied. Some remaining challenges for such extensions are outlined in the paper.
The $d$-process generates a graph at random by starting with an empty graph with $n$ vertices, then adding edges one at a time uniformly at random among all pairs of vertices which have degrees at most $d-1$ and are not mutually joined. We show that, in the evolution of a random graph with $n$ vertices under the $d$-process with $d$ fixed, with high probability, for each $j \in \{0,1,\dots,d-2\}$, the minimum degree jumps from $j$ to $j+1$ when the number of steps left is on the order of $\ln (n)^{d-j-1}$. This answers a question of Ruciński and Wormald. More specifically, we show that, when the last vertex of degree $j$ disappears, the number of steps left divided by $\ln (n)^{d-j-1}$ converges in distribution to the exponential random variable of mean $\frac{j!}{2(d-1)!}$; furthermore, these $d-1$ distributions are independent.
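For small n, the d-process is easy to simulate directly from its definition; a naive sketch (quadratic work per step, names ours):

```python
import random

def d_process(n, d, rng=random):
    """Run the d-process on n vertices; return the final degree sequence
    and edge set. At each step, an edge is added uniformly at random among
    pairs of non-adjacent vertices that both have degree at most d-1."""
    degree = [0] * n
    edges = set()
    while True:
        available = [v for v in range(n) if degree[v] <= d - 1]
        candidates = [(u, v) for i, u in enumerate(available)
                      for v in available[i + 1:] if (u, v) not in edges]
        if not candidates:
            break  # no admissible pair remains; the process terminates
        u, v = rng.choice(candidates)
        edges.add((u, v))
        degree[u] += 1
        degree[v] += 1
    return degree, edges
```

For d = 1 on an even number of vertices the process always terminates in a perfect matching, which gives a deterministic sanity check.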
In this note, we give a precise description of the limiting empirical spectral distribution of the non-backtracking matrices of an Erdős–Rényi graph $G(n,p)$, assuming $np/\log n$ tends to infinity. We show that derandomizing part of the non-backtracking random matrix simplifies the spectrum considerably; we then use Tao and Vu’s replacement principle and the Bauer–Fike theorem to show that the partly derandomized spectrum is, in fact, very close to the original spectrum.
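For reference, the standard non-backtracking matrix is indexed by directed edges, with $B_{(u,v),(x,y)} = 1$ exactly when $v = x$ and $y \neq u$; a small construction sketch (names ours):

```python
import numpy as np

def non_backtracking_matrix(adj):
    """Non-backtracking matrix of an undirected graph given as an adjacency
    dict: rows/columns indexed by directed edges (u, v), with
    B[(u,v),(v,w)] = 1 iff w != u (the walk may not immediately reverse)."""
    darts = [(u, v) for u in adj for v in adj[u]]
    idx = {e: i for i, e in enumerate(darts)}
    B = np.zeros((len(darts), len(darts)), dtype=np.int64)
    for (u, v) in darts:
        for w in adj[v]:
            if w != u:
                B[idx[(u, v)], idx[(v, w)]] = 1
    return B
```

On the triangle, every directed edge has exactly one non-backtracking continuation, so B is a 6 × 6 permutation matrix.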
We introduce a broad class of multi-hooking networks, wherein multiple copies of a seed are hooked at each step at random locations, and the number of copies follows a predetermined building sequence of numbers. We analyze the degree profile in random multi-hooking networks by tracking two kinds of node degrees: the local average degree of a specific node over time and the global overall average degree in the graph. The former goes through phases, while the latter is invariant with respect to the type of building sequence and remains comparable to the average degree in the initial seed. We also discuss the expected number of nodes of the smallest degree. Additionally, we study distances in the network through the lens of the average total path length, the average depth of a node, the eccentricity of a node, and the diameter of the graph.
This paper studies the magnitude homology of graphs, focusing mainly on the relationship between its diagonality and the girth. The magnitude and magnitude homology are formulations, for finite metric spaces, of the Euler characteristic and the corresponding homology, first introduced by Leinster and Hepworth–Willerton. Several authors have studied them restricted to graphs with the path metric, and some properties analogous to those of ordinary homology theory have come to light. However, the full picture of their behaviour remains unknown, and they are expected to capture geometric properties of graphs. In this article, we show that the girth of a graph partially determines its magnitude homology: the larger the girth, the more homology groups near the diagonal vanish. Furthermore, applying this result to a typical random graph, we investigate how the diagonality of graphs varies statistically as the edge density increases. In particular, we show that a phase transition phenomenon occurs for the diagonality.
Many classic networks grow by hooking small components via vertices. We introduce a class of networks that grows by fusing the edges of a small graph to an edge chosen uniformly at random from the network. For this random edge-hooking network, we study the local degree profile, that is, the evolution of the average degree of a vertex over time. For a special subclass, we further determine the exact distribution and an asymptotic gamma-type distribution. We also study the “core,” which consists of the well-anchored edges that experience fusing. A central limit theorem emerges for the size of the core.
At the end, we look at an alternative model of randomness attained by preferential hooking, favoring edges that experience more fusing. Under preferential hooking, the core still follows a Gaussian law but with different parameters. Throughout, Pólya urns are systematically used as a method of proof.
The study of threshold functions has a long history in random graph theory. It is known that the thresholds for minimum degree k, k-connectivity, as well as k-robustness coincide for a binomial random graph. In this paper we consider an inhomogeneous random graph model, which is obtained by including each possible edge independently with an individual probability. Based on an intuitive concept of neighborhood density, we show two sufficient conditions guaranteeing k-connectivity and k-robustness, respectively, which are asymptotically equivalent. Our framework sheds some light on extending uniform threshold values in homogeneous random graphs to threshold landscapes in inhomogeneous random graphs.
We consider the near-critical Erdős–Rényi random graph G(n, p) and provide a new probabilistic proof of the fact that, when p is of the form $p=p(n)=1/n+\lambda/n^{4/3}$ and A is large,
where $\mathcal{C}_{\max}$ is the largest connected component of the graph. Our result allows A and $\lambda$ to depend on n. While this result is already known, our proof relies only on conceptual and adaptable tools such as ballot theorems, whereas the existing proof relies on a combinatorial formula specific to Erdős–Rényi graphs, together with analytic estimates.
Reaction networks are commonly used within the mathematical biology and mathematical chemistry communities to model the dynamics of interacting species. These models differ from the typical graphs found in random graph theory since their vertices are constructed from elementary building blocks, i.e. the species. We consider these networks in an Erdős–Rényi framework and, under suitable assumptions, derive a threshold function for the network to have a deficiency of zero, which is a property of great interest in the reaction network community. Specifically, if the number of species is denoted by n and the edge probability by $p_n$, then we prove that the probability of a random binary network being deficiency zero converges to 1 if $p_n\ll r(n)$ as $n \to \infty$, and converges to 0 if $p_n \gg r(n)$ as $n \to \infty$, where $r(n)=\frac{1}{n^3}$.
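The deficiency mentioned above is the standard quantity $\delta = m - \ell - s$, where m is the number of complexes, $\ell$ the number of linkage classes, and s the rank of the stoichiometric subspace. A minimal sketch for computing it (the representation of complexes as coefficient tuples and all names are ours):

```python
import numpy as np

def deficiency(reactions):
    """Deficiency of a reaction network. `reactions` is a list of
    (source, product) pairs, each complex a tuple of species coefficients.
    Returns m - l - s (complexes - linkage classes - stoichiometric rank)."""
    complexes = sorted({c for r in reactions for c in r})
    index = {c: i for i, c in enumerate(complexes)}
    # linkage classes = connected components of the undirected complex graph
    parent = list(range(len(complexes)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for src, prod in reactions:
        parent[find(index[src])] = find(index[prod])
    num_classes = len({find(i) for i in range(len(complexes))})
    # rank of the span of the reaction vectors (product minus source)
    vectors = np.array([np.subtract(prod, src) for src, prod in reactions])
    rank = np.linalg.matrix_rank(vectors) if len(vectors) else 0
    return len(complexes) - num_classes - rank
```

For example, the reversible pair A ⇌ B has deficiency 0, while the Lotka reactions A → 2A, A + B → 2B, B → 0 have deficiency 1.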
This paper studies estimation of stochastic block models with Rissanen’s minimum description length (MDL) principle in the dense graph asymptotics. We focus on the problem of model specification, i.e., identification of the number of blocks. Refinements of the true partition always decrease the code part corresponding to the edge placement, so the corresponding increase of the code part specifying the model must outweigh that gain in order to yield a minimum at the true partition. The balance between these effects turns out to be delicate. We show that the MDL principle identifies the true partition among models whose relative block sizes are bounded away from zero. The results are extended to models with Poisson-distributed edge weights.
A hooking network is built by stringing together components randomly chosen from a set of building blocks (graphs with hooks). The vertices are endowed with “affinities” which dictate the attachment mechanism. We study the distance from the master hook to a node in the network chosen according to its affinity after many steps of growth. Such a distance is commonly called the depth of the chosen node. We present an exact average result and a rather general central limit theorem for the depth. The affinity model covers a wide range of attachment mechanisms, such as uniform attachment and preferential attachment, among others. Naturally, the limiting normal distribution is parametrized by the structure of the building blocks and their probabilities. We also take the point of view of a visitor uninformed about the affinity mechanism by which the network is built. To explore the network, such a visitor chooses the nodes uniformly at random. We show that the distance distribution under such a uniform choice is similar to the one under random choice according to affinities.
We revisit an old topic in algorithms, the deterministic walk on a finite graph which always moves toward the nearest unvisited vertex until every vertex is visited. There is an elementary connection between this cover time and ball-covering (metric entropy) measures. For some familiar models of random graphs, this connection allows the order of magnitude of the cover time to be deduced from first passage percolation estimates. Establishing sharper results seems a challenging problem.
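The deterministic walk described above is easy to state in code; a minimal sketch for unweighted connected graphs (the names and the tie-breaking rule are ours):

```python
from collections import deque

def bfs_distances(adj, source):
    """Shortest-path distances from `source` in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def nearest_unvisited_cover_time(adj, start=0):
    """Total distance travelled by the walk that always moves to the
    nearest unvisited vertex, until every vertex has been visited."""
    visited = {start}
    current, travelled = start, 0
    while len(visited) < len(adj):
        dist = bfs_distances(adj, current)
        # closest unvisited vertex, ties broken by vertex label
        target = min((v for v in adj if v not in visited),
                     key=lambda v: (dist[v], v))
        travelled += dist[target]
        visited.add(target)
        current = target
    return travelled
```

On a path the walk sweeps once from end to end, whereas on a star it must repeatedly return through the centre, which already illustrates how graph geometry drives the cover time.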
Spatial random graphs capture several important properties of real-world networks. We prove quenched results for the continuous-space version of scale-free percolation introduced in [14]. This is an undirected inhomogeneous random graph whose vertices are given by a Poisson point process in $\mathbb{R}^d$. Each vertex is equipped with a random weight, and the probability that two vertices are connected by an edge depends on their weights and on their distance. Under suitable conditions on the parameters of the model, we show that, for almost all realizations of the point process, the degree distributions of all the nodes of the graph follow a power law with the same tail at infinity. We also show that the averaged clustering coefficient of the graph is self-averaging. In particular, it is almost surely equal to the annealed clustering coefficient of one point, which is a strictly positive quantity.