In the Erdős–Rényi random graph Gn,p, each pair of vertices is connected by an edge with probability p. We describe the emergence of the giant component when pn ≈ 1, and identify the density of this component as the survival probability of a Poisson branching process. The Hoeffding inequality may be used to show that, for constant p, the chromatic number of Gn,p is asymptotic to ½ n/log_π n, where π = 1/(1 – p).
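To see the branching-process heuristic in numbers: the survival probability ρ of a Poisson(λ) branching process is the largest root of ρ = 1 – e^(–λρ), and for λ > 1 the giant component of Gn,λ/n occupies a fraction of the vertices close to ρ. The short Python sketch below (function and parameter names are illustrative, not taken from the text) computes ρ by fixed-point iteration.

    import math

    def survival_probability(lam, iterations=200):
        # Survival probability of a Poisson(lam) branching process:
        # the largest root of rho = 1 - exp(-lam * rho).
        rho = 1.0                      # start at 1; the iteration decreases monotonically
        for _ in range(iterations):
            rho = 1.0 - math.exp(-lam * rho)
        return rho

    # For lam = 2 this gives roughly 0.7968, the predicted fraction of
    # vertices in the giant component of G_{n, 2/n} for large n.
    print(survival_probability(2.0))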
Erdős–Rényi graphs
Let V = {1, 2, …, n}, and let (Xi,j : 1 ≤ i < j ≤ n) be independent Bernoulli random variables with parameter p. For each pair i < j, we place an edge 〈i, j〉 between vertices i and j if and only if Xi,j = 1. The resulting random graph is named after Erdős and Rényi, and it is commonly denoted Gn,p. The density p of edges may vary with n, for example, p = λ/n with λ ∈ (0, ∞), and one commonly considers the structure of Gn,p in the limit as n → ∞.
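The construction just described translates directly into a simulation. The following Python sketch (again with illustrative names; it is not code from the text) samples the Bernoulli variables Xi,j to build Gn,p, then measures the largest component by breadth-first search, so that for p = λ/n it can be compared with the branching-process prediction above.

    import random
    from collections import deque

    def sample_gnp(n, p, rng=random):
        # Flip an independent Bernoulli(p) coin for each pair i < j,
        # placing the edge <i, j> exactly when the coin shows 1.
        adj = [[] for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                if rng.random() < p:
                    adj[i].append(j)
                    adj[j].append(i)
        return adj

    def largest_component_size(adj):
        # Breadth-first search over the sampled graph.
        seen = [False] * len(adj)
        best = 0
        for start in range(len(adj)):
            if seen[start]:
                continue
            seen[start] = True
            queue, size = deque([start]), 1
            while queue:
                v = queue.popleft()
                for w in adj[v]:
                    if not seen[w]:
                        seen[w] = True
                        size += 1
                        queue.append(w)
            best = max(best, size)
        return best

    n, lam = 2000, 2.0
    graph = sample_gnp(n, lam / n)
    print(largest_component_size(graph) / n)   # typically close to 0.797 for lam = 2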
The original motivation for studying Gn,p was to understand the properties of ‘typical’ graphs. This is in contrast to the study of ‘extremal’ graphs, although it may be noted that random graphs have on occasion manifested properties more extreme than graphs obtained by more constructive means.
Random graphs have proved an important tool in the study of the ‘typical’ runtime of algorithms.
The contact, voter, and exclusion models are Markov processes in continuous time with state space {0, 1}^V for some countable set V. In the voter model, each element of V may be in either of two states, and its state flips at a rate that is a weighted average of the states of the other elements. Its analysis hinges on the recurrence or transience of an associated Markov chain. When V = ℤ^2 and the model is generated by simple random walk, the only invariant measures are the two point masses on the (two) states representing unanimity. The picture is more complicated when d ≥ 3. In the exclusion model, a set of particles moves about V according to a ‘symmetric’ Markov chain, subject to exclusion. When V = ℤ^d and the Markov chain is translation-invariant, the product measures are invariant for this process, and furthermore these are exactly the extremal invariant measures. The chapter closes with a brief account of the stochastic Ising model.
Introductory remarks
There are many beautiful problems of physical type that may be modelled as Markov processes on the compact state space {0, 1}^V for some countable set V. Amongst the most studied to date by probabilists are the contact, voter, and exclusion models, and the stochastic Ising model.
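As a concrete toy version of the voter model dynamics, the Python sketch below (the finite ring ℤ/Lℤ and all parameter choices are illustrative assumptions, not taken from the text) lets each site carry a rate-1 exponential clock; when a clock rings, that site adopts the opinion of a uniformly chosen nearest neighbour, which is the simple-random-walk case described above.

    import random

    def voter_model_ring(L=200, steps=100_000, rng=random):
        # With independent rate-1 clocks, the site whose clock rings next is
        # uniform on the ring; it then copies a uniformly chosen neighbour.
        state = [rng.randint(0, 1) for _ in range(L)]
        for _ in range(steps):
            x = rng.randrange(L)
            neighbour = (x + rng.choice((-1, 1))) % L
            state[x] = state[neighbour]
        return state

    final = voter_model_ring()
    print(sum(final), "sites in state 1 out of", len(final))
    # On the finite ring consensus is eventually reached, mirroring the fact that
    # on Z^2 the only invariant measures are the two unanimous point masses.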
The subcritical and supercritical phases of percolation are characterized respectively by the absence and presence of an infinite open cluster. Connection probabilities decay exponentially when p < pc, and there is a unique infinite cluster when p > pc. There is a power-law singularity at the point of phase transition. It is shown that pc = ½ for bond percolation on the square lattice. The Russo–Seymour–Welsh (RSW) method is described for site percolation on the triangular lattice, and this leads to a statement and proof of Cardy's formula.
Subcritical phase
In language borrowed from the theory of branching processes, a percolation process is termed subcritical if p < pc, and supercritical if p > pc.
In the subcritical phase, all open clusters are (almost surely) finite. The chance of a long-range connection is small, and it approaches zero as the distance between the endpoints diverges. The process is considered to be ‘disordered’, and the probabilities of long-range connectivities tend to zero exponentially in the distance. Exponential decay may be proved by elementary means for sufficiently small p, as in the proof of Theorem 3.2, for example. It is quite another matter to prove exponential decay for all p < pc, and this was achieved for percolation independently by Aizenman and Barsky, and by Menshikov, around 1986.
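The subcritical decay can be made visible by Monte Carlo: estimate the probability that the origin is joined by an open path to the boundary of the box [−n, n]² in bond percolation on the square lattice, for p below ½. The Python sketch below (the box sizes, the value p = 0.35, and the sample count are arbitrary illustrative choices) shows the rapid fall-off in n.

    import random
    from collections import deque

    def reaches_boundary(n, p, rng=random):
        # One sample of bond percolation near the origin: edges are opened lazily
        # with probability p, and a breadth-first search looks for an open path
        # from the origin to the boundary of the box [-n, n]^2.
        open_bond = {}
        def is_open(v, w):
            e = (v, w) if v <= w else (w, v)
            if e not in open_bond:
                open_bond[e] = rng.random() < p
            return open_bond[e]
        seen = {(0, 0)}
        queue = deque([(0, 0)])
        while queue:
            x, y = queue.popleft()
            if max(abs(x), abs(y)) == n:
                return True
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                w = (x + dx, y + dy)
                if w not in seen and is_open((x, y), w):
                    seen.add(w)
                    queue.append(w)
        return False

    p, trials = 0.35, 2000
    for n in (4, 8, 16, 32):
        hits = sum(reaches_boundary(n, p) for _ in range(trials))
        print(n, hits / trials)    # the estimate falls off roughly geometrically in n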
Random walk – the stochastic process formed by successive summation of independent, identically distributed random variables – is one of the most basic and well-studied topics in probability theory. For random walks on the integer lattice ℤ^d, the main reference is the classic book by Spitzer (1976). This text considers only a subset of such walks, namely those corresponding to increment distributions with zero mean and finite variance. In this case, one can summarize the main result very quickly: the central limit theorem implies that under appropriate rescaling the limiting distribution is normal, and the functional central limit theorem implies that the distribution of the corresponding path-valued process (after standard rescaling of time and space) approaches that of Brownian motion.
Researchers who work with perturbations of random walks, or with particle systems and other models that use random walks as a basic ingredient, often need more precise information on random walk behavior than that provided by the central limit theorems. In particular, it is important to understand the size of the error resulting from the approximation of random walk by Brownian motion. For this reason, there is a need for more detailed analysis. This book is an introduction to the theory of random walks, with an emphasis on error estimates. Although the ‘mean zero, finite variance’ assumption is both necessary and sufficient for normal convergence, one typically needs to make stronger assumptions on the increments of the walk in order to obtain good bounds on the error terms.
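As a crude illustration of both points (normal convergence, and the size of the error), the Python sketch below (the ±1 step distribution, the sample sizes, and the evaluation points are illustrative choices, not the book's) compares the distribution function of Sn/√n with the standard normal distribution function for increasing n.

    import bisect
    import math
    import random

    def normal_cdf(x):
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def max_cdf_error(n, samples=20_000, rng=random):
        # Empirical distribution function of S_n / sqrt(n) for a walk with
        # +/-1 steps (mean zero, variance one), compared with the normal one.
        endpoints = sorted(
            sum(rng.choice((-1, 1)) for _ in range(n)) / math.sqrt(n)
            for _ in range(samples)
        )
        grid = (-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0)
        return max(
            abs(bisect.bisect_right(endpoints, x) / samples - normal_cdf(x))
            for x in grid
        )

    for n in (16, 64, 256):
        print(n, max_cdf_error(n))
    # The discrepancy decays on the order of 1/sqrt(n) (up to Monte Carlo noise of
    # order 1/sqrt(samples)); sharper error estimates require assumptions on the
    # increments beyond mean zero and finite variance.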