Until now, we have considered only “static” (in terms of the number of vertices) models of real-world networks. More often, however, networks are constructed by some random “dynamic” process that adds vertices one at a time, together with new edges connecting each new vertex to the already existing network. Modelling such networks is quite challenging and calls for specific models of random graphs possessing properties observed in real-world networks. One such property is that the degree sequence often exhibits a tail that decays polynomially, as opposed to classical random graphs, whose tails decay (super)exponentially. Capturing this property led to the development of so-called preferential attachment models. After presenting the basic properties of the preferential attachment model, we conclude the first section with a brief discussion of its application to studying the spread of infection through a network, known as bootstrap percolation. The last section of this chapter is devoted to a generalization of the preferential attachment model, called spatial preferential attachment.
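As an informal illustration of the dynamic process described above (not the book's formal definition), here is a minimal sketch of a preferential attachment graph in the Barabási–Albert style: each arriving vertex attaches m edges to existing vertices chosen with probability proportional to their current degree. All names and parameters here are our own choices.

```python
import random

def preferential_attachment(n, m=1, seed=0):
    """Grow a graph on n vertices: each new vertex attaches m edges to
    existing vertices chosen with probability proportional to their
    current degree, via the standard 'repeated endpoints' list trick."""
    rng = random.Random(seed)
    targets = [0, 1]          # each vertex appears once per unit of degree
    edges = [(0, 1)]          # start from a single edge
    for v in range(2, n):
        chosen = set()
        while len(chosen) < min(m, v):
            chosen.add(rng.choice(targets))   # degree-biased sampling
        for u in chosen:
            edges.append((u, v))
            targets.extend([u, v])
    return edges

edges = preferential_attachment(1000, m=2, seed=42)
deg = {}
for u, v in edges:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1
# the earliest vertices tend to accumulate the largest degrees (heavy tail)
```

Plotting the empirical degree distribution of such a sample on a log–log scale shows the polynomially decaying tail mentioned above, in contrast to the binomial degrees of classical random graphs.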
A graph is an intersection graph if we can assign to each vertex a set from some family S so that two vertices are adjacent exactly when their respective sets intersect. Depending on the choice of the family S, often reflecting some geometric configuration, one can consider, for example, interval graphs, defined as intersection graphs of intervals on the real line, unit disk graphs, defined as intersection graphs of unit disks in the plane, and so on. In this chapter, we discuss the properties of random intersection graphs, where the family S is generated in a random manner; in particular, we cover binomial random intersection graphs and random geometric graphs.
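The binomial construction can be sketched in a few lines. In this hypothetical parameterisation, each of n vertices independently includes each of m "features" with probability p, and two vertices are joined exactly when their feature sets intersect:

```python
import random
from itertools import combinations

def random_intersection_graph(n, m, p, seed=0):
    """Binomial random intersection graph G(n, m, p): each vertex draws a
    random subset of m features (each feature kept with probability p);
    two vertices are adjacent iff their feature sets intersect."""
    rng = random.Random(seed)
    feats = [frozenset(f for f in range(m) if rng.random() < p)
             for _ in range(n)]
    edges = {(u, v) for u, v in combinations(range(n), 2)
             if feats[u] & feats[v]}
    return feats, edges

feats, edges = random_intersection_graph(50, 20, 0.1, seed=1)
```

Note that, unlike in the classical binomial random graph, edges here are not independent: two vertices sharing a feature with a third are more likely to be adjacent to each other.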
In this chapter, we consider a generalization of the classic random graph in which the probability of the edge {i,j} need not be the same for all pairs {i,j}. We call this the generalized binomial graph. Our main result on this model concerns the probability that it is connected. After this, we move on to a special case of this model, namely the expected degree model introduced by Chung and Lu, in which edge probabilities are proportional to the weights of their endpoints. In this model, we prove results about the sizes of the largest components. The final section introduces a tool, the configuration model, for generating a close approximation to a random graph with a fixed degree sequence.
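A minimal sketch of the Chung–Lu expected degree model, under the usual convention that edge {i,j} appears independently with probability min(w_i w_j / W, 1), where W is the sum of all weights (the weight vector below is an arbitrary example of ours):

```python
import random
from itertools import combinations

def chung_lu(weights, seed=0):
    """Chung-Lu expected degree model: edge {i, j} appears independently
    with probability min(w_i * w_j / W, 1), where W = sum of weights.
    The expected degree of vertex i is then close to w_i."""
    rng = random.Random(seed)
    W = sum(weights)
    edges = [(i, j) for i, j in combinations(range(len(weights)), 2)
             if rng.random() < min(weights[i] * weights[j] / W, 1.0)]
    return edges

# a few "heavy" vertices among many light ones (illustrative weights)
w = [5.0] * 10 + [1.0] * 90
edges = chung_lu(w, seed=3)
```

Choosing weights with a polynomially decaying tail makes the resulting degree sequence heavy-tailed, which is what connects this model to real-world networks.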
There are many cases in which we put weights on the edges of a graph or digraph and ask for the minimum or maximum weight object. The optimization questions that arise in this way are the backbone of combinatorial optimization. When the weights are random variables, we can ask about properties of the optimum value, which will itself be a random variable. In this chapter, we consider three of the most basic optimization problems: minimum weight spanning trees, shortest paths, and minimum weight matchings.
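As a concrete instance of "random weights, random optimum", here is a sketch of Kruskal's algorithm run on the complete graph with i.i.d. Uniform(0,1) edge weights. A well-known result of Frieze says that the total MST weight converges to ζ(3) ≈ 1.202 as n grows; the simulation below merely illustrates that the optimum concentrates near a constant.

```python
import random
from itertools import combinations

def random_mst_weight(n, seed=0):
    """Total weight of the minimum spanning tree of K_n with independent
    Uniform(0,1) edge weights, computed by Kruskal's algorithm."""
    rng = random.Random(seed)
    edges = sorted((rng.random(), u, v)
                   for u, v in combinations(range(n), 2))
    parent = list(range(n))
    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    total, used = 0.0, 0
    for wt, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += wt
            used += 1
            if used == n - 1:
                break
    return total

w = random_mst_weight(200, seed=7)   # typically close to zeta(3) ~ 1.202
```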
In this chapter, we describe the main goal of the book, its organization, course outline, and suggestions for instructions and self-study. The textbook material is aimed for a one-semester undergraduate/graduate course for mathematics and computer science students. The course might also be recommended for students of physics, interested in networks and the evolution of large systems, as well as engineering students, specializing in telecommunication. Our textbook aims to give a gentle introduction to the mathematical foundations of random graphs and to build a platform to understand the nature of real-life networks. The text is divided into three parts and presents the basic elements of the theory of random graphs and networks. To help the reader navigate through the text, we have decided to start with describing in the preliminary part (Part I) the main technical tools used throughout the text. Part II of the text is devoted to the classic Erdős–Rényi–Gilbert uniform and binomial random graphs. Part III concentrates on generalizations of the Erdős–Rényi–Gilbert models of random graphs whose features better reflect some characteristic properties of real-world networks.
Whether a graph is connected, i.e., whether there is a path between any two of its vertices, is of particular importance. Therefore, in this chapter, we first establish the threshold for the connectivity of a random graph. We then view this property in terms of the graph process and show that w.h.p. the random graph becomes connected at precisely the moment the last isolated vertex joins the giant component. This “hitting time” result is the precursor to several similar results. After this, we deal with k-connectivity, a parameter that measures the strength of connectivity of a graph. We show that the threshold for this property coincides with the threshold for the minimum degree to be at least k.
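The sharpness of the connectivity threshold at p = log(n)/n is easy to observe empirically. The sketch below (our own illustrative experiment, not from the text) samples G(n, p) at probabilities well above and well below the threshold and checks connectivity by depth-first search:

```python
import math
import random

def gnp(n, p, rng):
    """Sample the binomial random graph G(n, p) as an edge list."""
    return [(u, v) for u in range(n)
            for v in range(u + 1, n) if rng.random() < p]

def is_connected(n, edges):
    """Depth-first search connectivity check from vertex 0."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

rng = random.Random(5)
n = 200
# p well above log(n)/n: almost every sample is connected
above = sum(is_connected(n, gnp(n, 3 * math.log(n) / n, rng))
            for _ in range(20))
# p well below log(n)/n: isolated vertices keep the graph disconnected
below = sum(is_connected(n, gnp(n, 0.2 * math.log(n) / n, rng))
            for _ in range(20))
```

Below the threshold, the obstruction is almost always an isolated vertex, matching the hitting-time result stated above.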
The previous chapter dealt with the existence of small subgraphs of a fixed size. In this chapter, we concern ourselves with the existence of large subgraphs, most notably perfect matchings and Hamilton cycles. Having dealt with perfect matchings, we turn our attention to long paths in sparse random graphs, i.e., those in which we expect a linear number of edges. We next study one of the most celebrated and difficult problems of random graphs: the existence of a Hamilton cycle in a random graph. In the last section of this chapter, we consider the general problem of the existence of arbitrary spanning subgraphs in a random graph.
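For perfect matchings, the bipartite setting is the easiest to experiment with: in a random bipartite graph with both parts of size n and independent edges with probability p, a perfect matching appears w.h.p. once p is above log(n)/n. The sketch below (our own illustration, using Kuhn's augmenting-path algorithm rather than anything from the text) computes a maximum matching in one such sample:

```python
import math
import random

def max_matching(n, adj):
    """Maximum matching in a bipartite graph with parts of size n, where
    adj[u] lists right-neighbours of left vertex u (Kuhn's algorithm:
    repeatedly search for an augmenting path from each left vertex)."""
    match = [-1] * n            # match[r] = left vertex matched to r
    def augment(u, seen):
        for r in adj[u]:
            if r not in seen:
                seen.add(r)
                if match[r] == -1 or augment(match[r], seen):
                    match[r] = u
                    return True
        return False
    return sum(augment(u, set()) for u in range(n))

rng = random.Random(6)
n = 100
p = 3 * math.log(n) / n        # comfortably above the threshold log(n)/n
adj = [[r for r in range(n) if rng.random() < p] for r_u in range(n)]
m = max_matching(n, adj)       # almost always a perfect matching here
```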
In this chapter, we mainly explore how the typical component structure evolves as the number of edges m increases. The statements below should be qualified with the caveat “w.h.p.” The evolution of Erdős–Rényi–Gilbert type random graphs has clearly distinguishable phases. The first phase, at the beginning of the evolution, is a period when the random graph is a collection of small components, most of which are trees. The random graph then passes through a phase transition, when a giant component, of order comparable with that of the whole graph, starts to emerge.
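The two phases are easy to see in simulation. In the sketch below (an illustrative experiment of ours), we sample a uniform random graph with n vertices and m edges and measure the largest component with union-find, once in the subcritical regime m = n/4 and once in the supercritical regime m = n:

```python
import random

def largest_component(n, edges):
    """Size of the largest component, via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    sizes = {}
    for v in range(n):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

def gnm(n, m, rng):
    """Uniform random graph with n vertices and m distinct random edges."""
    edges = set()
    while len(edges) < m:
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v:
            edges.add((min(u, v), max(u, v)))
    return list(edges)

rng = random.Random(11)
n = 2000
sub = largest_component(n, gnm(n, n // 4, rng))  # subcritical: all small
sup = largest_component(n, gnm(n, n, rng))       # supercritical: a giant
```

In the first sample every component is logarithmically small; in the second, a single component contains a constant fraction of all vertices.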
Large real-world networks, although globally sparse in terms of the number of edges, have their nodes/vertices connected by relatively short paths. In addition, such networks are locally dense, i.e., vertices lying in a small neighborhood of a given vertex are connected by many edges. This observation is called the “small-world” phenomenon, and it has generated many attempts, both theoretical and experimental, to build and study appropriate models of small-world networks. The first attempt to explain this phenomenon and to build a more realistic model was made by Watts and Strogatz in 1998, followed by an alternative approach published by Kleinberg in 2000. The current chapter is devoted to the presentation of both models.
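A minimal sketch of the Watts–Strogatz construction, under the common formulation in which one starts from a ring lattice and rewires each edge independently with probability beta (the duplicate-avoidance details below are our own simplification):

```python
import random

def watts_strogatz(n, k, beta, seed=0):
    """Watts-Strogatz small-world graph: a ring lattice in which each
    vertex is joined to its k nearest neighbours on each side, with every
    edge rewired with probability beta to a uniformly random endpoint."""
    rng = random.Random(seed)
    ring = set()
    for v in range(n):
        for j in range(1, k + 1):
            u = (v + j) % n
            ring.add((min(u, v), max(u, v)))
    edges = set()
    for (u, v) in ring:
        if rng.random() < beta:
            # choose a fresh endpoint, avoiding loops and duplicate edges
            w = rng.randrange(n)
            while (w == u or (min(u, w), max(u, w)) in ring
                   or (min(u, w), max(u, w)) in edges):
                w = rng.randrange(n)
            edges.add((min(u, w), max(u, w)))
        else:
            edges.add((u, v))
    return edges

g = watts_strogatz(100, 2, 0.1, seed=2)
```

Even a small rewiring probability creates the long-range "shortcuts" that collapse typical distances while leaving the local clustering of the lattice largely intact.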
In this chapter, we look first at the diameter of random graphs, i.e., the maximum over pairs of vertices of the shortest-path distance between them. Then we look at the size of the largest independent set and the related value of the chromatic number. One interesting feature of these parameters is that they are often highly concentrated.
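For the independent set, a simple greedy experiment already shows the phenomenon. In G(n, 1/2) the greedy algorithm below typically finds about log2(n) vertices, roughly half of the maximum, which is about 2·log2(n) w.h.p. The lazy edge sampling is a standard deferred-decisions trick (each pair is examined at most once, so the simulation is faithful to G(n, p)); the code itself is our own sketch.

```python
import random

def greedy_independent_set(n, p, seed=0):
    """Greedy independent set in G(n, p): scan vertices in order and keep
    a vertex iff none of its (lazily sampled) edges go to an already-kept
    vertex.  Each vertex pair is examined at most once."""
    rng = random.Random(seed)
    kept = []
    for v in range(n):
        if all(rng.random() >= p for _ in kept):
            kept.append(v)
    return kept

s = greedy_independent_set(1024, 0.5, seed=4)  # typically ~log2(1024) = 10
```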
Networks surround us, from social networks to protein–protein interaction networks within the cells of our bodies. The theory of random graphs provides a necessary framework for understanding their structure and development. This text provides an accessible introduction to this rapidly expanding subject. It covers all the basic features of random graphs – component structure, matchings and Hamilton cycles, connectivity and chromatic number – before discussing models of real-world networks, including intersection graphs, preferential attachment graphs and small-world models. Based on the authors' own teaching experience, it can be used as a textbook for a one-semester course on random graphs and networks at advanced undergraduate or graduate level. The text includes numerous exercises, with a particular focus on developing students' skills in asymptotic analysis. More challenging problems are accompanied by hints or suggestions for further reading.
Given partially ordered sets (posets) $(P, \leq _P\!)$ and $(P^{\prime}, \leq _{P^{\prime}}\!)$, we say that $P^{\prime}$ contains a copy of $P$ if for some injective function $f\,:\, P\rightarrow P^{\prime}$ and for any $X, Y\in P$, $X\leq _P Y$ if and only if $f(X)\leq _{P^{\prime}} f(Y)$. For any posets $P$ and $Q$, the poset Ramsey number $R(P,Q)$ is the least positive integer $N$ such that no matter how the elements of an $N$-dimensional Boolean lattice are coloured in blue and red, there is either a copy of $P$ with all blue elements or a copy of $Q$ with all red elements. We focus on a poset Ramsey number $R(P, Q_n)$ for a fixed poset $P$ and an $n$-dimensional Boolean lattice $Q_n$, as $n$ grows large. We show a sharp jump in behaviour of this number as a function of $n$ depending on whether or not $P$ contains a copy of either a poset $V$, that is a poset on elements $A, B, C$ such that $B\gt C$, $A\gt C$, and $A$ and $B$ incomparable, or a poset $\Lambda$, its symmetric counterpart. Specifically, we prove that if $P$ contains a copy of $V$ or $\Lambda$ then $R(P, Q_n) \geq n +\frac{1}{15} \frac{n}{\log n}$. Otherwise $R(P, Q_n) \leq n + c(P)$ for a constant $c(P)$. This gives the first non-marginal improvement of a lower bound on poset Ramsey numbers and as a consequence gives $R(Q_2, Q_n) = n + \Theta \left(\frac{n}{\log n}\right)$.
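For very small posets, the containment relation defined above can be checked by brute force over all injections. The sketch below (our own illustration; names and encoding are ours) represents a poset as a pair (elements, relation) and verifies, for instance, that the Boolean lattice $Q_2$ contains a copy of the poset $V$:

```python
from itertools import permutations

def contains_copy(P, Q):
    """Brute force: does poset Q contain a copy of P?  A poset is a pair
    (elements, relation), the relation a set of pairs (x, y) meaning
    x <= y (transitive pairs listed explicitly, reflexive ones omitted).
    An embedding f must satisfy: x <=_P y  iff  f(x) <=_Q f(y)."""
    elems_P, rel_P = P
    elems_Q, rel_Q = Q
    def le(rel, x, y):
        return x == y or (x, y) in rel
    for image in permutations(elems_Q, len(elems_P)):
        f = dict(zip(elems_P, image))
        if all(le(rel_P, x, y) == le(rel_Q, f[x], f[y])
               for x in elems_P for y in elems_P):
            return True
    return False

# V: elements A, B, C with B > C, A > C, and A, B incomparable
V = ({'A', 'B', 'C'}, {('C', 'A'), ('C', 'B')})
# Q_2: the Boolean lattice of subsets of {1, 2}, ordered by inclusion
Q2_elems = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})]
Q2 = (Q2_elems, {(x, y) for x in Q2_elems for y in Q2_elems if x < y})
```

Here C can map to the empty set and A, B to the two incomparable singletons, so $Q_2$ indeed contains a copy of $V$ (which is why the theorem's lower bound applies to $R(Q_2, Q_n)$).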
Random walks on graphs are an essential primitive for many randomised algorithms and stochastic processes. It is natural to ask how much can be gained by running $k$ random walks independently and in parallel. Although the cover time of multiple walks has been investigated for many natural networks, the problem of finding a general characterisation of multiple cover times for worst-case start vertices (posed by Alon, Avin, Koucký, Kozma, Lotker and Tuttle in 2008) remains open. First, we improve and tighten various bounds on the stationary cover time, when $k$ random walks start from vertices sampled from the stationary distribution. For example, we prove an unconditional lower bound of $\Omega((n/k) \log n)$ on the stationary cover time, holding for any $n$-vertex graph $G$ and any $1 \leq k = o(n\log n)$. Secondly, we establish the stationary cover times of multiple walks on several fundamental networks up to constant factors. Thirdly, we present a framework characterising worst-case cover times in terms of stationary cover times and a novel, relaxed notion of mixing time for multiple walks called the partial mixing time. Roughly speaking, the partial mixing time only requires a specific portion of all random walks to be mixed. Using these new concepts, we establish (or recover) the worst-case cover times for many networks, including expanders, preferential attachment graphs, grids, binary trees and hypercubes.
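The speed-up from multiple walks is easy to observe on a concrete network. The sketch below (an illustrative experiment of ours, not taken from the paper) measures the number of parallel steps until $k$ independent walks on a cycle have jointly visited every vertex, comparing one walker with four spread-out walkers:

```python
import random

def cover_time(adj, starts, rng):
    """Parallel steps until k independent walks, one from each start
    vertex, have jointly visited every vertex of the graph."""
    walkers = list(starts)
    visited = set(walkers)
    steps = 0
    n = len(adj)
    while len(visited) < n:
        steps += 1
        walkers = [rng.choice(adj[w]) for w in walkers]  # all move at once
        visited.update(walkers)
    return steps

# the cycle on n vertices: each vertex adjacent to its two neighbours
n = 60
adj = [[(v - 1) % n, (v + 1) % n] for v in range(n)]
rng = random.Random(9)
one = sum(cover_time(adj, [0], rng) for _ in range(10)) / 10
four = sum(cover_time(adj, [0, 15, 30, 45], rng) for _ in range(10)) / 10
```

On the cycle a single walk needs on the order of $n^2$ steps, while four evenly spaced walks cover much faster; how such gains behave for worst-case start vertices in general graphs is exactly the question the framework above addresses.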
For a random binary noncoalescing feedback shift register of width $n$, with all $2^{2^{n-1}}$ possible feedback functions $f$ equally likely, the process of long cycle lengths, scaled by dividing by $N=2^n$, converges in distribution to the same Poisson–Dirichlet limit as holds for random permutations in $\mathcal{S}_N$, with all $N!$ possible permutations equally likely. Such behaviour was conjectured by Golomb, Welch and Goldstein in 1959.