In this chapter we draw motivation from real-world networks and formulate random graph models for them. We focus on some of the models that have received the most attention in the literature, namely, Erdős–Rényi random graphs, inhomogeneous random graphs, configuration models, and preferential attachment models. We follow Volume 1, both for the motivation and for the introduction of the random graph models involved. Furthermore, we add some convenient additional results, such as degree-truncation for configuration models and switching techniques for uniform random graphs with prescribed degrees. We also discuss preliminaries used in the book, for example those concerning power-law distributions.
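As a minimal illustration of the simplest of these models (a sketch, not taken from the text; the function name and parameter values are our own), the Erdős–Rényi graph G(n, p) can be sampled by flipping an independent coin for each vertex pair:

```python
import random

def erdos_renyi(n, p, seed=0):
    """Sample G(n, p): each of the n*(n-1)/2 possible edges is
    present independently with probability p."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

# The mean degree concentrates around (n - 1) * p for large n;
# here (n - 1) * p is roughly 4.
n, p = 2000, 0.002
edges = erdos_renyi(n, p)
mean_degree = 2 * len(edges) / n
```

The choice p = c/n keeps the mean degree of order a constant c, the sparse regime studied throughout the book.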
In this chapter we investigate the small-world structure in rank-1 and general inhomogeneous random graphs. For this, we develop path-counting techniques that are interesting in their own right.
In this chapter we discuss local convergence, which describes the intuitive notion that a finite graph, seen from the perspective of a typical vertex, looks like a certain limiting graph. Local convergence plays a profound role in random graph theory. We give general definitions of local convergence in several probabilistic senses. We then show that local convergence in its various forms is equivalent to the appropriate convergence of subgraph counts. We continue by discussing several implications of local convergence, concerning local neighborhoods, clustering, assortativity, and PageRank. We further investigate the relation between local convergence and the size of the giant, making the statement that the giant is “almost local” precise.
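The object whose law converges under local convergence is the rooted r-neighborhood of a uniformly chosen vertex. A minimal sketch of extracting that neighborhood by breadth-first search (our own illustration, with a 6-cycle as the example graph):

```python
import random
from collections import deque

def r_ball(adj, root, r):
    """Breadth-first search: return a dict mapping each vertex within
    graph distance r of `root` to its distance, in the adjacency-list
    graph `adj`."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        v = queue.popleft()
        if dist[v] == r:
            continue  # do not explore beyond radius r
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

# A 6-cycle: every radius-2 ball contains 5 of the 6 vertices
# (the root, two neighbors, and two vertices at distance 2).
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
root = random.choice(range(6))
ball = r_ball(adj, root, 2)
```

Local convergence asks whether the distribution of such rooted balls, over a uniform root, converges as the graph size grows.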
In this chapter we investigate the local limit of the configuration model, identify when it has a giant component, and find its size and degree structure. We give two proofs, one based on a “the giant is almost local” argument, and another based on a continuous-time exploration of the connected components in the configuration model. Further results include its connectivity transition.
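The configuration model itself is easy to sample: each vertex contributes as many half-edges as its degree, and a uniform pairing of the half-edges gives the edges. A minimal sketch (our own illustration; note the result may contain self-loops and multiple edges):

```python
import random

def configuration_model(degrees, seed=0):
    """Pair half-edges uniformly at random: vertex v contributes
    degrees[v] half-edges; a uniform perfect matching of the
    half-edges gives the (multi)graph edges."""
    half_edges = [v for v, d in enumerate(degrees) for _ in range(d)]
    assert len(half_edges) % 2 == 0, "total degree must be even"
    rng = random.Random(seed)
    rng.shuffle(half_edges)
    return [(half_edges[i], half_edges[i + 1])
            for i in range(0, len(half_edges), 2)]

degrees = [3, 2, 2, 2, 1]  # total degree 10, so 5 edges
edges = configuration_model(degrees)

# By construction, the realized degrees match the prescribed ones
# (self-loops counted twice).
realized = [0] * len(degrees)
for u, v in edges:
    realized[u] += 1
    realized[v] += 1
```

Whatever the shuffle, the prescribed degree sequence is realized exactly, which is what makes the model convenient for matching real-world degree data.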
In this chapter we introduce the general setting of inhomogeneous random graphs, which generalize the Erdős–Rényi and generalized random graphs. In inhomogeneous random graphs, the status of edges is independent with unequal edge-occupation probabilities. While these edge probabilities are moderated by vertex weights in generalized random graphs, in the general setting they are described in terms of a kernel. The main results in this chapter concern the degree structure, the multi-type branching process local limits, and the phase transition in these inhomogeneous random graphs. We also discuss various examples, and indicate that they can have rather different structures.
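As a sketch of the kernel formulation (our own illustration; the function names, the type space [0, 1], and the normalization by n are illustrative assumptions), each vertex carries a type, and the edge probability is a kernel evaluated at the two endpoint types, scaled to keep the graph sparse:

```python
import random

def inhomogeneous_graph(xs, kernel, seed=0):
    """Vertices carry types xs[i]; edge {i, j} is present
    independently with probability min(kernel(xs[i], xs[j]) / n, 1)."""
    n = len(xs)
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < min(kernel(xs[i], xs[j]) / n, 1.0)]

# A rank-1 (product-form) kernel recovers a generalized random graph.
n = 500
rng = random.Random(1)
xs = [rng.random() for _ in range(n)]
edges = inhomogeneous_graph(xs, lambda x, y: 8 * x * y)
mean_degree = 2 * len(edges) / n
```

For this product kernel with uniform types, the mean degree is close to 2, since a vertex of type x has expected degree about 8x times the average type 1/2.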
In this chapter we discuss some related random graph models that have been studied in the literature. We explain their relevance, as well as some of the properties in them. We discuss directed random graphs, random graphs with local and global community structures, as well as spatial random graphs.
In mathematics, it simply is not true that “you can’t prove a negative.” Many revolutionary impossibility theorems reveal profound properties of logic, computation, fairness, and the universe and form the mathematical background of new technologies and Nobel prizes. But to fully appreciate these theorems and their impact on mathematics and beyond, you must understand their proofs.
This book is the first to present complete proofs of these theorems for a broad, lay audience. It fully develops the simplest rigorous proofs found in the literature, reworked to contain less jargon and notation, and more background, intuition, examples, explanations, and exercises. Amazingly, all of the proofs in this book involve only arithmetic and basic logic – and are elementary, starting only from first principles and definitions.
Very little background knowledge is required, and no specialized mathematical training – all you need is the discipline to follow logical arguments and a pen in your hand.
In this chapter, we look at the moments of a random variable. Specifically, we demonstrate that moments capture useful information about the tail of a random variable while often being simpler to compute or at least bound. Several well-known inequalities quantify this intuition. Although they are straightforward to derive, such inequalities are surprisingly powerful. Through a range of applications, we illustrate the utility of controlling the tail of a random variable, typically by allowing one to dismiss certain “bad events” as rare. We begin by recalling the classical Markov and Chebyshev inequalities. Then we discuss three of the most fundamental tools in discrete probability and probabilistic combinatorics. First, we derive the complementary first and second moment methods, and give several standard applications, especially to threshold phenomena in random graphs and percolation. Then we develop the Chernoff–Cramér method, which relies on the “exponential moment” and is the building block for large deviations bounds. Two key applications in data science are briefly introduced: sparse recovery and empirical risk minimization.
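A quick numerical check of the two classical inequalities (our own sketch, using a binomial variable as the example): Markov's inequality bounds P(X ≥ a) by E[X]/a, and Chebyshev's inequality bounds the deviation tail by Var[X]/t²; both should dominate the empirical tail.

```python
import random

rng = random.Random(0)
# X = number of heads in 20 fair coin flips; E[X] = 10, Var[X] = 5.
samples = [sum(rng.random() < 0.5 for _ in range(20))
           for _ in range(100_000)]

mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)

a = 15
tail = sum(x >= a for x in samples) / len(samples)

markov_bound = mean / a                   # P(X >= a) <= E[X] / a
chebyshev_bound = var / (a - mean) ** 2   # P(|X - E[X]| >= t) <= Var/t^2
```

The true tail P(X ≥ 15) is about 0.02, so Chebyshev (about 0.2) is much tighter than Markov (about 0.67) here, while the Chernoff bound, using the full exponential moment, would be tighter still.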
In this chapter, we move on to coupling, another probabilistic technique with a wide range of applications (far beyond discrete stochastic processes). The idea behind the coupling method is deceptively simple: to compare two probability measures, it is sometimes useful to construct a joint probability space with the corresponding marginals. We begin by defining coupling formally and deriving its connection to the total variation distance through the coupling inequality. We illustrate the basic idea on a classical Poisson approximation result, which we apply to the degree sequence of an Erdős–Rényi graph. Then we introduce the concept of stochastic domination and some related correlation inequalities. We develop a key application in percolation theory. Coupling of Markov chains is the next topic, where it serves as a powerful tool to derive mixing time bounds. Finally, we end with the Chen–Stein method for Poisson approximation, a technique that applies in particular in some natural settings with dependent variables.
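The coupling inequality states that for any coupling (X, Y) with the given marginals, P(X ≠ Y) is at least the total variation distance, and the maximal coupling attains it. A small sketch for discrete distributions (our own illustration):

```python
def tv_distance(p, q):
    """Total variation distance between two pmfs given as dicts."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0))
                     for x in support)

def maximal_coupling_mismatch(p, q):
    """Mass that ANY coupling (X, Y) with X ~ p, Y ~ q must place on
    {X != Y}.  The maximal coupling puts the overlapping mass
    min(p(x), q(x)) on the diagonal {X = Y = x}, so the mismatch
    probability is 1 minus the overlap -- exactly the TV distance."""
    support = set(p) | set(q)
    overlap = sum(min(p.get(x, 0.0), q.get(x, 0.0)) for x in support)
    return 1.0 - overlap

p = {0: 0.5, 1: 0.5}
q = {0: 0.25, 1: 0.25, 2: 0.5}
```

Here both quantities equal 0.5, showing the coupling inequality is tight: the best possible coupling makes X and Y agree with exactly the overlapping probability mass.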
In this chapter, we develop spectral techniques. We highlight some applications to Markov chain mixing and network analysis. The main tools are the spectral theorem and the variational characterization of eigenvalues, which we review together with some related results. We also give a brief introduction to spectral graph theory and detail an application to community recovery. Then we apply the spectral theorem to reversible Markov chains. In particular we define the spectral gap and establish its close relationship to the mixing time. We also show that the spectral gap can be bounded using certain isoperimetric properties of the underlying network. We prove Cheeger’s inequality, which quantifies this relationship, and introduce expander graphs, an important family of graphs with good “expansion.” Applications to mixing times are also discussed. One specific technique is the “canonical paths method,” which bounds the spectral gap by formalizing a notion of congestion in the network.
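The link between the spectral gap and mixing can be seen explicitly in the two-state chain, where the eigenvalues are available in closed form (a sketch of our own, with illustrative parameter values):

```python
def two_state_chain(a, b):
    """Transition matrix of the reversible two-state chain that jumps
    0 -> 1 with prob. a and 1 -> 0 with prob. b.
    Its eigenvalues are 1 and 1 - a - b, so the spectral gap is a + b."""
    return [[1 - a, a], [b, 1 - b]]

def tv_to_stationary(P, mu, t):
    """TV distance between mu * P^t and the stationary distribution,
    computed by repeated row-vector multiplication."""
    a, b = P[0][1], P[1][0]
    pi = (b / (a + b), a / (a + b))
    for _ in range(t):
        mu = (mu[0] * P[0][0] + mu[1] * P[1][0],
              mu[0] * P[0][1] + mu[1] * P[1][1])
    return 0.5 * (abs(mu[0] - pi[0]) + abs(mu[1] - pi[1]))

a, b = 0.3, 0.2
P = two_state_chain(a, b)
lam2 = 1 - a - b   # second eigenvalue; spectral gap = a + b = 0.5

# Starting from state 0, the distance to stationarity decays
# geometrically at rate |lam2|: d(t) = (1 - pi(0)) * lam2 ** t.
d1 = tv_to_stationary(P, (1.0, 0.0), 1)
d5 = tv_to_stationary(P, (1.0, 0.0), 5)
```

The larger the spectral gap, the smaller |1 − a − b| and the faster the geometric decay, which is the mechanism behind spectral-gap bounds on mixing times in general reversible chains.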