
The largest subcritical component in inhomogeneous random graphs of preferential attachment type

Published online by Cambridge University Press:  04 March 2026

Peter Mörters*
Affiliation:
Department of Mathematics, University of Cologne, Weyertal 86-90, 50931 Köln, Germany
Nick Schleicher
Affiliation:
Department of Mathematics, University of Cologne, Weyertal 86-90, 50931 Köln, Germany
Corresponding author: Peter Mörters; Email: moerters@math.uni-koeln.de

Abstract

We identify the size of the largest connected component in a subcritical inhomogeneous random graph with a kernel of preferential attachment type. The component is polynomial in the graph size with an explicitly given exponent, which is strictly larger than the exponent for the largest degree in the graph. This is in stark contrast to the behaviour of inhomogeneous random graphs with a kernel of rank one. Our proof uses local approximation by branching random walks going well beyond the weak local limit and novel results on subcritical killed branching random walks.

Information

Type
Paper
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press

1. Introduction and main results

Preferential attachment models give a credible explanation of how typical features of networks, like scale-free degree distributions and small diameter, arise naturally from the basic construction principle of reinforcement. This makes them a popular model for scale-free random graphs. Unfortunately, the mathematical analysis of preferential attachment networks is much more challenging than that of many other scale-free network models, for example the configuration model. In particular, the problem of the size of the largest subcritical connected component, solved for the configuration model by Janson [Reference Janson7], is open for all model variants of preferential attachment. The purpose of the present paper is to solve this problem for a simplified class of network models of preferential attachment type. We believe that our model, which is an inhomogeneous random graph with a suitably chosen kernel, has sufficiently many features in common with the most studied models of preferential attachment networks to serve as a solvable model in this universality class. Since inhomogeneous random graphs are interesting models in their own right, see [Reference Bollobás, Janson and Riordan3], their analysis is also of independent interest.

The class of inhomogeneous random graphs is parametrised by a symmetric kernel

\begin{equation*}\kappa \colon (0,1] \times (0,1] \rightarrow (0, \infty )\end{equation*}

and constructed such that, for each $n \in \mathbb{N}$ , the graph $\mathscr{G}_n$ has vertex set $V_n = \{1, \ldots , n\}$ and edge set $E_n$ containing each unordered pair of distinct vertices $\{i,j\}\subset V_n$ independently with probability

\begin{equation*}p_{ij}^{(n)}= \frac 1n \kappa \left ( \frac {i}{n}, \frac {j}{n} \right ) \wedge 1.\end{equation*}

Our idea is now to choose the kernel $\kappa$ in such a way that the inhomogeneous random graphs mimic the behaviour of preferential attachment models. In preferential attachment models vertices arrive one by one and attach themselves to earlier vertices with a probability proportional to their degree. Typically degrees grow polynomially so that, for some $\gamma \ge 0$, the degree of vertex $i$ at time $j\gt i$ is of order $(j/i)^\gamma$. For the expected degree of vertex $j$ at its arrival time to remain bounded away from zero and infinity we need $\gamma \lt 1$ and the proportionality factor in the connection probability to be of order $\big(\sum_{i=1}^{j-1} (j/i)^\gamma \big)^{-1}\approx 1/j$. Hence in the preferential attachment models, for vertices with indices $i\lt j$, the connection probability is $p_{ij}^{(n)} \approx i^{-\gamma }j^{\gamma -1}$. To get the same connection probabilities in the inhomogeneous random graph we choose the kernel

\begin{equation*}\kappa (x, y) = \beta (x \vee y)^{\gamma - 1} (x \wedge y)^{-\gamma },\end{equation*}

where the parameter $0 \leq \gamma \lt 1$ controls the strength of the preferential attachment, and $\beta \gt 0$ is an edge density parameter. Note that $\kappa$ is homogeneous of index $-1$ and therefore the resulting edge probability $p_{ij}^{(n)}$ is independent of the graph size $n$. We refer to this model as the inhomogeneous random graph of preferential attachment type.
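The construction can be sampled directly. The following sketch (our illustration, not part of the paper) uses the illustrative subcritical parameters $\beta =0.05$ and $\gamma =0.3$, for which $\beta \lt \beta _c=\frac 14-\frac \gamma 2=0.1$; by the homogeneity of $\kappa$ the edge probability does not depend on $n$.

```python
import random

def edge_prob(i, j, beta=0.05, gamma=0.3):
    # p_{ij} = beta * (i ^ j)^{-gamma} * (i v j)^{gamma-1}, capped at 1;
    # by homogeneity of kappa this is independent of the graph size n
    lo, hi = min(i, j), max(i, j)
    return min(beta * lo ** (-gamma) * hi ** (gamma - 1), 1.0)

def sample_graph(n, beta=0.05, gamma=0.3, seed=0):
    # include each unordered pair {i, j} independently with probability p_{ij}
    rng = random.Random(seed)
    return [(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)
            if rng.random() < edge_prob(i, j, beta, gamma)]
```

For instance, `sample_graph(1000)` draws one realisation of $\mathscr{G}_{1000}$ as an edge list.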

It is easy to see that the inhomogeneous random graph of preferential attachment type has an asymptotic degree distribution which, for $\gamma \gt 0$ , is heavy-tailed with power-law exponent $\tau =1+\frac 1\gamma$ . The analysis of a preferential attachment model in [Reference Dereich and Mörters5] can be simplified, see [Reference Mörters11] for details, and shows that the size $S_n^{\mathrm{max}}$ of the largest connected component in $\mathscr{G}_n$ satisfies

\begin{equation*}\lim _{n\to \infty } \frac {S_n^{\mathrm{max}}}{n} = \theta (\beta ) \left \{ \begin{array}{ll} \gt 0 & \text{if } \beta \gt \beta _c\ \,:\!=\, \bigg(\dfrac {1}{4}-\dfrac {\gamma }{2}\bigg) \vee 0,\\ =0 & \text{otherwise.}\\ \end{array} \right . \end{equation*}

In this paper, we are interested in the subcritical regime, i.e. we always assume that $\gamma \lt \frac 12$ and $0\lt \beta \lt \beta _c$ . In this case, all component sizes are of smaller order than $n$ . Our first result identifies the component sizes of vertices in a moving observation window. We say that vertex $o_n\in V_n$ is typical if $\frac {o_n}n \to u$ for some $u\gt 0$ and the behaviour of early typical vertices refers to features of $\mathscr{G}_n$ rooted in vertex $o_n$ that hold asymptotically as $u\downarrow 0$ .

Theorem 1 (Early typical vertices). Let $S_n(i)$ be the size of the connected component of vertex $i\in V_n$ in the inhomogeneous random graph of preferential attachment type in the subcritical regime. If $o_n\in V_n$ is such that $\frac {o_n}n\to u\in (0,1]$ , then

\begin{equation*} \lim _{u \downarrow 0} \lim _{n \rightarrow \infty } \mathbb{P}\left ( S_n(o_n) \geq u^{-\rho _-}x \right )= \mathbb{P} \left ( Y\geq x \right)\!,\end{equation*}

for all $x\gt 0$ , where

\begin{equation*}\rho _{\pm } = \frac{1}{2} \pm \sqrt {(\gamma -\tfrac {1}{2})^2+\beta (2\gamma -1)},\end{equation*}

and $Y$ is a positive random variable satisfying

\begin{equation*}\mathbb{P}\left ( Y \geq x \right ) = x^{-(\rho _+/\rho _-) + o(1)} \text{ as $x\to \infty $.}\end{equation*}

Our second theorem identifies the size of the components of untypically early vertices. Here a vertex $o_n\in V_n$ is called untypically early if $\frac {o_n}n \to 0$.

Theorem 2 (Untypically early vertices). Let $o_n\in V_n$ be such that

\begin{equation*} {o_n\to \infty } \text{ and } { \frac {o_n}{n} \to 0}.\end{equation*}

Denote by $S_n(o_n)$ the size of the component of $o_n$ in $\mathscr{G}_n$. Then for any $\epsilon \gt 0$ we have

\begin{equation*} {\lim _{n \rightarrow \infty } \mathbb{P}\left ( S_n(o_n) \geq (n/o_n)^{\rho _-{-}\epsilon } \right )= 1}.\end{equation*}

The idea behind this result is to exploit a self-similarity feature of graphs of preferential attachment type and leverage Theorem 1. Loosely speaking, we find for fixed small $u\gt 0$ a positive integer $k$ with $o_n\approx u^kn$. Then $o_n$ is early typical in the graph $\mathscr{G}_{u^{k-1}n}$ and by Theorem 1 we get a connected component with size of order $u^{-\rho _-}$. Many vertices in this component are themselves early typical in $\mathscr{G}_{u^{k-2}n}$ and we can use Theorem 1 again, getting a component with size of order $u^{-2\rho _-}$. Continuing the procedure altogether $k$ times we build a component of size $u^{-k\rho _-}\approx (n/o_n)^{\rho _-}$ in $\mathscr{G}_{n}$.

Theorem 2 gives a lower bound on the size of the largest component. As it describes the size of the components of the most powerful vertices in $\mathscr{G}_n$ it is plausible that this result also gives the right order of the largest component. Our main result confirms this. It is the first result in the mathematical literature identifying the size of the largest subcritical component up to polynomial order for a random graph model of preferential attachment type.

Theorem 3 (Largest subcritical component). Denoting by $S_n^{\mathrm{max}}$ the size of the largest component in $\mathscr{G}_n$ we have

\begin{equation*}\lim _{n \rightarrow \infty } \frac {\log S_n^{\mathrm{max}}}{\log n} = \rho _{-},\end{equation*}

in probability, where

\begin{equation*}\rho _{-} = \frac{1}{2} - \sqrt {(\tfrac {1}{2}-\gamma )^2-\beta (1-2\gamma )} \gt \gamma .\end{equation*}

Remark 4. Observe that the size of the largest component in a finite random graph is bounded from below by the maximum over all degrees. In scale-free graphs this is of polynomial order in the graph size. It is shown in [Reference Janson7] that this lower bound is sharp for configuration models and inhomogeneous random graphs with a kernel of rank one. In our model the largest degree is $n^{\gamma +o(1)}$ , whereas the largest component has size $n^{\rho _-+o(1)}$ and is therefore much larger. A lower bound on the largest component larger than the maximal degree has also been found for a different preferential attachment model in [Reference Ray14], see also [Reference Banerjee, Bhamidi, van der Hofstad and Ray4] for recent further developments. As this effect is due to the self-similar nature of the graphs of preferential attachment type we conjecture that it is a universal feature of preferential attachment graphs that if the largest degree is $n^{\gamma (\beta )+o(1)}$ the largest subcritical component is of size $n^{\rho (\beta )+o(1)}$ for some $\rho (\beta )\gt \gamma (\beta )$ with $\rho (\beta )\to \frac 12$ as $\beta \uparrow \beta _c$ .

The remainder of the paper is organised as follows. We do not give the full proof of Theorem 1 here, as the argument is described in the extended abstract [Reference Mörters, Schleicher, Mailler and Wild12]. We do, however, give a completely self-contained proof of Theorem 2 in Section 2, which therefore includes most of the arguments needed for the proof of Theorem 1. Note that Theorem 2 also establishes the lower bound in Theorem 3; in Section 3 we complete the proof of Theorem 3 by providing a matching upper bound.

2. Proof of Theorem 2

For the proof of Theorem 2 we embed a Galton-Watson tree into our graph. To explain the idea, fix small parameters $0\lt u,b \lt 1$. Let $m=u^{k-1}n$ for some positive integer $k$ (in the following, to avoid cluttering notation, we do not make the rounding of $m$ to an integer explicit). We explore the neighbourhood of a vertex $o_n$ with $bum \leq o_n\leq um$ in the graph $\mathscr{G}_{m}$. We will see below that this exploration can be coupled to a branching random walk killed upon leaving a bounded interval such that with high probability the number of particles near the right interval boundary exceeds the number $S_m(o_n)$ of vertices in the connected component of $o_n\in \mathscr{G}_m$ with index at least $bm$. These vertices will be the offspring of the vertex $o_n$ in our Galton-Watson tree. Before describing this coupling in detail we give a lower bound on the number of particles in the killed branching random walk. This result, formulated as Proposition 5, may be of independent interest.

We denote by $\mathscr{V}$ the tree of Ulam-Harris labels, i.e. all finite sequences of natural numbers including the empty sequence $\varnothing$, which denotes the root. Given a label $v=(v_1,\ldots , v_n)\in \mathscr{V}\setminus \{\varnothing \}$ we denote by $|v|=n$ its length, corresponding to the generation of vertex $v$ in the tree and by $\overline{v}=(v_1,\ldots , v_{n-1})$ the parent of $v$ in the tree. We attach to every label $v\in \mathscr{V}$ an independent sample $P_v$ of a point process with infinitely many points $P_v(1)\leq P_v(2) \leq P_v(3) \leq \ldots$ in increasing order on the real line, in our case the Poisson process with intensity measure

\begin{equation*}\pi (dx)=\beta (e^{\gamma x} {\unicode{x1D7D9}}_{x\gt 0} + e^{(1-\gamma ) x} {\unicode{x1D7D9}}_{x\lt 0}) \, dx.\end{equation*}

We denote by

\begin{equation*}\mathscr{T}(x)= ( V(v) \colon v \in \mathscr{V})\end{equation*}

the branching random walk started in $x\in \mathbb{R}$ , which is characterised by the position

\begin{equation*}V(v)= x + \sum _{i=1}^{|v|} P_{(v_1,\ldots , v_{i-1})}(v_i)\end{equation*}

of the particle with label $v\in \mathscr{V}$ . When started in $\log u$ we denote the underlying probability and expectation by $\mathbb{P}_u, \mathbb{E}_u$ and denote the branching random walk by $\mathscr{T}$ . The Laplace transform of the branching random walk is given by

\begin{align*} \psi (t)= \mathbb{E}_1\bigg[\sum _{|v|=1}e^{-tV(v)}\bigg] = \frac {\beta }{t-\gamma }+\frac {\beta }{1-\gamma -t} \quad \text{ if $\gamma \lt t\lt 1-\gamma $,} \end{align*}

and $\psi (t)=\infty$ otherwise. The domain of $\psi$ is nonempty if $\gamma \lt \frac 12$ and there exists $t\gt 0$ with $\psi (t)\lt 1$ if and only if $0\lt \beta \lt \frac {1}{4}-\frac {\gamma }{2}$ , i.e. in the subcritical regime for the inhomogeneous random graph. Under this assumption there exist $\rho _- \lt \rho _+$ with $\psi (\rho _-)=\psi (\rho _+)=1$ . We can calculate both values explicitly,

\begin{equation*} \rho _{\pm } = \frac{1}{2} \pm \sqrt{\big(\gamma -\tfrac {1}{2}\big)^2+\beta (2\gamma -1)}. \end{equation*}
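For completeness, the quadratic behind these values: multiplying $\psi (\rho )=1$ by $(\rho -\gamma )(1-\gamma -\rho )$ gives

```latex
\begin{align*}
\beta(1-\gamma-\rho)+\beta(\rho-\gamma)=(\rho-\gamma)(1-\gamma-\rho)
\quad\Longleftrightarrow\quad
\rho^{2}-\rho+\gamma(1-\gamma)+\beta(1-2\gamma)=0,
\end{align*}
```

whose roots are $\rho _\pm =\frac 12\pm \sqrt {\frac 14-\gamma (1-\gamma )-\beta (1-2\gamma )}$; since $\frac 14-\gamma (1-\gamma )=(\frac 12-\gamma )^2$ this recovers the formula above, and the discriminant is positive precisely when $\beta \lt \frac {1}{4}-\frac {\gamma }{2}$, the subcriticality condition.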

For $0\leq a\lt d$ denote by $\mathscr{T}_{a,d}(x)$ the killed branching random walk starting with a particle in location $x$ , where all particles located outside the interval $(\!\log a, \log d]$ are killed together with their descendants. Again we omit the starting point from the notation if it is clear from the context. Note that $v=(v_1,\ldots ,v_n)\in \mathscr{T}_{a,d}$ means that, for all $0\leq i \leq n$ ,

\begin{equation*} \log a \lt V(v_1,\ldots ,v_i) \leq \log d.\end{equation*}

Of particular interest is $\mathscr{T}_{0,1}$ where particles with positions on the positive half-line are killed. The conditions $\gamma \lt \frac {1}{2}$ and $\beta \lt \frac {1}{4}-\frac {\gamma }{2}$ are together necessary and sufficient for $\mathscr{T}_{0,1}(x)$ started at $x\leq 0$ to suffer extinction in finite time almost surely, see [[Reference Shi15], Theorem 1.3].

For $0\leq a \leq b\lt 1$ denote by $I(a,b)$ the total number of surviving particles of $\mathscr{T}_{a,1}$ located in $(\log b,0]$. We prove a limit theorem for $I(0,b)$ under $\mathbb{P}_u$ when $u \downarrow 0$.

Proposition 5. For every fixed $0\leq b\lt 1$ the random variable $I(0,b)$ satisfies

\begin{equation*}\lim _{u \downarrow 0} \mathbb{P}_u \left ( I(0,b) \geq xu^{-\rho _-} \right ) = \mathbb{P} \left(Y\geq x \right)\!,\end{equation*}

where $Y$ is a positive random variable satisfying

(1) \begin{equation} \mathbb{P}\left ( Y \geq x \right ) = x^{-(\rho _+/\rho _-) + o(1)} \text{ as $x\to \infty $.} \end{equation}

Proposition 5 will be proved in Section 2.1.
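Proposition 5 can be explored numerically. The following sketch (our code, not part of the proof; function names and the subcritical parameters $\beta =0.05$, $\gamma =0.3$ are illustrative) explores the killed branching random walk $\mathscr{T}_{a,1}$ depth-first and counts the surviving particles in $(\log b,0]$.

```python
import math
import random

def poisson(rng, lam):
    # Knuth's multiplicative method; adequate for the small means used here
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def offspring(beta, gamma, hi, rng):
    # Points of the Poisson process with intensity
    # beta * (e^{gamma x} 1_{x>0} + e^{(1-gamma) x} 1_{x<0}) dx,
    # restricted to (-inf, hi]; points beyond hi would be killed anyway.
    pts = [math.log(rng.random()) / (1 - gamma)           # inverse CDF on x < 0
           for _ in range(poisson(rng, beta / (1 - gamma)))]
    if hi > 0:
        mass = beta * math.expm1(gamma * hi) / gamma      # total mass on (0, hi]
        pts += [math.log1p(rng.random() * math.expm1(gamma * hi)) / gamma
                for _ in range(poisson(rng, mass))]
    return pts

def I_ab(x0, beta, gamma, log_a, log_b, rng):
    # Number of particles of the killed BRW T_{a,1}(x0) located in (log_b, 0].
    count = 1 if log_b < x0 <= 0 else 0
    stack = [x0]
    while stack:
        x = stack.pop()
        for d in offspring(beta, gamma, -x, rng):  # children above 0 are killed
            y = x + d
            if log_a < y <= 0:
                stack.append(y)
                if y > log_b:
                    count += 1
    return count
```

For example, `I_ab(math.log(0.1), 0.05, 0.3, float('-inf'), math.log(0.5), random.Random(3))` simulates $I(0,b)$ under $\mathbb{P}_u$ with $u=0.1$ and $b=\frac 12$; subcriticality guarantees that the exploration terminates almost surely.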

This proposition will play a crucial role when we construct the simultaneous coupling of the neighbourhoods of vertices in $\mathscr{G}_m$ . We use the projection

(2) \begin{align} \pi _m \colon & (\!-\infty ,0] \to \{1,\ldots ,m\}, \end{align}

defined by

\begin{equation*}-\sum _{k=\pi _m(x)}^m \frac 1k \lt x \leq -\sum _{k=\pi _m(x)+1}^m \frac 1k\end{equation*}

to map locations on the negative half-line to vertex numbers in $\mathscr{G}_m$ . Its partial inverse is

\begin{align*} \phi _m \colon & \{1, \ldots , m\} \rightarrow (-\infty ,0], \quad i \mapsto - \sum _{j=i+1}^{m} \frac {1}{j} \; . \end{align*}
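These maps admit a direct transcription. The sketch below (our code, not the authors') checks the partial-inverse relation $\pi _m(\phi _m(i))=i$ numerically; it tests slightly inside each interval to avoid floating-point effects at the interval boundaries.

```python
def phi(i, m):
    # phi_m(i) = -sum_{j=i+1}^{m} 1/j, mapping vertex i into (-inf, 0]
    return -sum(1.0 / j for j in range(i + 1, m + 1))

def pi(x, m):
    # The unique vertex k with -sum_{j=k}^m 1/j < x <= -sum_{j=k+1}^m 1/j,
    # found by walking down from m; s maintains -sum_{j=k+1}^m 1/j.
    k, s = m, 0.0
    while k > 1 and x <= s - 1.0 / k:
        s -= 1.0 / k
        k -= 1
    return k
```

The interval mapped to vertex $i$ has length $1/i$, so $\pi _m$ compresses the negative half-line logarithmically onto $\{1,\ldots ,m\}$.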

For any set $\mathcal{U}\subset \{1,\ldots , m\}$ we denote by $\mathscr{F}_{\mathcal{U}}$ the $\sigma$ -algebra generated by the restriction of the random graph $\mathscr{G}_m$ to the vertex set $\mathcal{U}$ . Let $\gamma \lt \rho \lt \rho _-$ .

Proposition 6. For every $0\lt b\lt 1$ there exist $\varepsilon \gt 0$, $a\gt 1$ and $0\lt u_0\lt b$ with the property that for every $0\lt u\lt u_0$ there exists $m(u)$ such that, for all $m\ge m(u)$, any set $\mathcal{U}'\subset \{1,\ldots , um\}$ with $|\mathcal{U}'|\leq a m^{\rho }$ and any family of $d\leq m^{\rho }$ vertices in $\mathcal{U}'$ with

\begin{equation*}bum\lt u_1 \lt \cdots \lt u_d \leq um, \end{equation*}

there exist

  • a set $\mathcal{U}'\subset \mathcal{U}\subset \{1,\ldots , m\}$ with $|\mathcal{U}|\leq a(m/u)^{\rho }$ ,

  • conditionally given $\mathscr{F}_{\mathcal{U}'}$ independent random variables $X_1, X_2 , \ldots , X_{d}$ with

    \begin{equation*} X_i = \begin{cases} \lceil \varepsilon u^{-\rho }\rceil , & \text{with probability } \varepsilon \gt 0 ,\\ 0, & \text{with probability } 1-\varepsilon ,\\ \end{cases} \end{equation*}
  • pairwise disjoint subsets $\mathcal{X}_1, \ldots , \mathcal{X}_d\subset \mathcal{U} \cap \{bm,\ldots ,m\}$ with $|\mathcal{X}_i|=X_i$ such that $\mathcal{X}_i$ is contained in the connected component of $u_i$ in $\mathcal{U}$ .

Proposition 6 will be proved in Section 2.2 using Proposition 5.

We now complete the proof of Theorem 2 using Proposition 6. Take $o_n\in \mathscr{G}_n$ so that

\begin{equation*} {o_n\to \infty } \text{ and } { \frac {o_n}{n} \to 0}.\end{equation*}

We fix $\delta \gt 0$ , $b=\frac 12$ , then $\varepsilon \gt 0$ from Proposition 6 and $0\lt u\lt u_0$ so that $\frac {2\log \varepsilon }{\log u}\lt \frac {\delta }2$ and also that $\varepsilon ^2\gt u^{\rho }$ . Let

\begin{equation*}k=\frac {\log (o_n/n)}{\log u}-1.\end{equation*}

Then $o_n=u^{k+1}n$ and we set $m\,:\!=\,u^{k}n$ . Take $n$ large enough such that $m \geq m(u)$ as defined in Proposition 6. This is possible since $m=o_n/u \rightarrow \infty$ as $n\to \infty$ .

In the first step we use Proposition 6 with $d=1$ and $u_1=o_n$. We obtain $X_1$ vertices with index $\ge bm$ in the connected component of $o_n$ in $\mathscr{G}_m$. These vertices constitute the children of the root and therefore the first generation of the embedded Galton-Watson tree. Their indices lie in the interval $(bu^{k}n, u^{k}n]$. In the second step we take these vertices and the set $\mathcal{U}$ from the first step as input into Proposition 6, which we now use with a new, larger $m\,:\!=\,u^{k-1}n$, see Figure 1 for an illustration. Note that $d\leq m^{\rho }$ and the conditions of Proposition 6 are satisfied, so that we get a second generation consisting of disjoint subsets $\mathcal{X}_1, \ldots , \mathcal{X}_d$ of the connected component of $o_n$ in $\mathscr{G}_m$. These are the offspring of the $d$ children of the root. We continue this procedure for altogether $k$ steps until, in the last step, we reach $m=n$. The number of vertices thus created in the component of $o_n\in \mathscr{G}_n$ is the total size of the first $k$ generations of a Galton-Watson tree with offspring variable $X_i$.

Figure 1. Illustration of Proposition 6: The vertices $u_1, \ldots , u_4$ are successively explored, the exploration of $u_1$ is depicted. The exploration yields particles in the entire interval $[bum,m]$ but only the red particles located in $[bm,m]$ are included in $\mathcal{X}_1$ . A logarithmic scale is used on the abscissa.

As the mean offspring number is

\begin{equation*}\mathbb{E}[X_i] =\varepsilon \lceil \varepsilon u^{-\rho }\rceil \gt 1,\end{equation*}

the Galton-Watson tree is supercritical and survives forever with positive probability. As

\begin{equation*}k \sim - \frac {\log n}{\log u} \to \infty ,\end{equation*}

on survival the number of vertices in the $k$ th generation is a positive multiple of

\begin{align*} (\varepsilon \lceil \varepsilon u^{-\rho }\rceil )^k&= \exp \Big( { -(1+o(1)) \frac {\log n}{\log u}\big(2\log \varepsilon -\rho \log u\big)}\Big) \\ &= \exp \Big({-(1+o(1)) \log n \Big(\frac {2\log \varepsilon }{\log u} - \rho \Big)}\Big) \geq n^{\rho -\delta }, \end{align*}

for all large $n$ . In particular we have

\begin{equation*}S_n(o_n)\ge cn^{\rho -\delta } \quad \text{ for all $n$ with positive probability.}\end{equation*}

To get the result with high probability we need to modify the first step of the construction and start the Galton-Watson tree not with one but with a large but fixed number $d$ of vertices. We fix $0\lt b\lt 1$ to be determined later and now let

\begin{equation*}k=\frac {\log (o_n/bn)}{\log u}-1\end{equation*}

and note that $o_n=bum$ when we again set $m\,:\!=\,u^{k}n$. The difference between the degree of $o_n$ at times $um$ and $bum$ is the sum of $(1-b)um$ independent Bernoulli random variables with parameter bounded from below by $\beta (um)^{\gamma -1}(bum)^{-\gamma }$. As $n\to \infty$ this random variable converges to a Poisson random variable with parameter $\beta (1-b)b^{-\gamma }$. We can therefore make the probability that this random variable is larger than $d$ arbitrarily close to one by picking a sufficiently small $b$ in our applications of Proposition 6. On this event we can now start the construction with $d$ vertices which are all children of the original $o_n$ and get $d$ independent supercritical Galton-Watson trees with the given offspring distribution. Let $q \in (0,1)$ denote the extinction probability of such a Galton-Watson tree. The probability that at least one of the $d$ trees survives is $1-q^d$, which can be made arbitrarily close to one by choice of $d$. On this event we get the requested lower bound on $S_n(o_n)$. This completes the proof.
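The final step can be checked numerically. For the two-point offspring law of Proposition 6 the generating function is $G(s)=(1-\varepsilon )+\varepsilon s^{N}$ with $N=\lceil \varepsilon u^{-\rho }\rceil$, so the extinction probability $q$ is the smallest fixed point of $G$. A sketch with the illustrative values $\varepsilon =0.3$ and $N=10$:

```python
def extinction_prob(eps, N, iters=200):
    # Smallest fixed point of G(s) = (1 - eps) + eps * s**N,
    # obtained by iterating G from 0 (monotone convergence from below)
    q = 0.0
    for _ in range(iters):
        q = (1 - eps) + eps * q ** N
    return q
```

Since the mean offspring number $\varepsilon N=3$ exceeds one we get $q\lt 1$, and starting from $d$ independent copies boosts the survival probability to $1-q^d$, which tends to one as $d$ grows.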

2.1 Proof of Proposition 5

The idea of the proof is to exploit that, as $\psi (\rho _-)=1$ , the process given by

\begin{equation*}W_n\,:\!=\,\sum _{|v|=n}e^{-\rho _-V(v)}\end{equation*}

is a martingale. Since $W_n$ is nonnegative it converges to some limit $W$ , which we show to be strictly positive. We then look at this martingale from the point of view of a stopping line, as discussed in [Reference Kyprianou8]. Theorem 9 in [Reference Kyprianou8] implies convergence as $t\to \infty$ of $(e^{-\rho _-t} Z_t')$ to $W$ , where

(3) \begin{equation} Z_t'\,:\!=\, \sum _{v \in \mathscr{V}} {\unicode{x1D7D9}}_{\{V(v)\lt t\}} \sum _{y \colon \overline{y}=v} e^{-\rho _-(V(y)-t)} {\unicode{x1D7D9}}_{\{V(y)\geq t\}}. \end{equation}

Observe that conditional on the $v$ with $V(v)\lt t$ the inner sums are independent with a distribution depending continuously on $V(v)-t$. A result of Nerman [[Reference Nerman13], Theorem 3.1] therefore gives that the inner sum can be replaced by ${\unicode{x1D7D9}}_{\{t-V(v)\lt -\log b\}}$ and we still get convergence to a constant multiple of $W$.

We start the detailed proof by verifying that the limiting $W$ is strictly positive and satisfies the tail property of (1). By Biggins’ theorem for branching random walks, see e.g. [Reference Biggins2, Reference Lyons, Athreya and Jagers10], the martingale limit $W$ is strictly positive if and only if the following two conditions hold,

  1. (i) $\psi (\rho _-)-\frac {\rho _-\psi '(\rho _-)}{\psi (\rho _-)}\gt 0 \, ,$

  2. (ii) $\mathbb{E}_1[W_1 \log W_1]\lt \infty .$

The first one holds as $\psi (\rho _-)=1$ and $\psi '(\rho _-)\lt 0$ . For the second condition it suffices to prove the following lemma.

Lemma 7. For $1\lt p\lt \frac {1-\gamma }{\rho _-}$ we have $\mathbb{E}_1 \big [W_1^p\big ] \lt \infty .$

Proof. We define

\begin{equation*}f(x,\Pi )= e^{-\rho _- V(x)} \Big(\sum _{y\in \Pi }e^{-\rho _- V(y)}\Big)^{p-1}\, .\end{equation*}

Then $\mathbb{E}_1[W_1^p]=\mathbb{E}[\!\int f(x,\Pi ) \, \Pi (dx)]$ and by Mecke’s equation [[Reference Last and Penrose9], Theorem 4.1] we get

\begin{align*} \mathbb{E}_1[W_1^p]&= \int \mathbb{E}[f(x,\Pi +\delta _x)] \, \pi (dx) = \int e^{-\rho _-x}\mathbb{E}\Big [\big (e^{-\rho _-x}+\int e^{-\rho _- t} \, \Pi (dt) \big )^{p-1}\Big ] \pi (dx)\\ &\leq 2^{p-1} \Big (\int e^{-p \rho _-x} \pi (dx) + \mathbb{E}_1\big [ W_1^{p-1}\big ] \, \psi (\rho _-) \Big )\, . \end{align*}

The left summand is equal to $\psi (p\rho _-)$ which is finite for $1\lt p\lt \frac {1-\gamma }{\rho _-}$ . The right summand is finite if $1\lt p\leq 2$ because in this case, by Jensen’s inequality, the expectation is bounded by one. If $p\gt 2$ we iterate the argument, using the same bound but now with $1\lt p-1\lt \frac {1-\gamma }{\rho _-}$ . In each iteration the exponent is reduced by one until it is no larger than two.

Biggins’ theorem ensures not only that $W\gt 0$ on survival of $\mathscr{T}\,$ but also that $W_n\to W$ in $L^1$ . By the next lemma we can improve this to convergence in $L^p$ for $p\lt \rho _+/\rho _-$ .

Lemma 8. For $1\lt p\lt \rho _+/\rho _-$ we have that $\displaystyle \sup _{n\in \mathbb{N}} \mathbb{E}_1 \big [W_n^p\big ] \lt \infty$ and $W_n\to W$ in $L^p$ .

Proof. By Proposition 2.1 in [Reference Iksanov, Liang and Liu6] we get that $(W_n)$ converges in $L^p$ and that $\mathbb{E}_1[W_n^p]$ is bounded if

\begin{equation*} \mathbb{E}_1 \big [W_1^p\big ] \lt \infty \text{ and } \psi (p\rho _-) \lt \psi (\rho _-)^p.\end{equation*}

The first condition is verified under the weaker condition $1\lt p\lt \frac {1-\gamma }{\rho _-}$ in Lemma 7. As $\psi (\rho _-)=1$ the second condition becomes $\psi (p\rho _-) \lt 1$, which, by convexity of $\psi$ and $\psi (\rho _-)=\psi (\rho _+)=1$, holds if $\rho _-\lt p\rho _-\lt \rho _+$, i.e. if $p\lt \rho _+/\rho _-$.

The tail behaviour of $W$ (and later of $Y$ ) claimed in Proposition 5 now follows directly from Lemma 8 by Markov’s inequality.

As in [[Reference Mörters, Schleicher, Mailler and Wild12], Proposition 8] our next aim is to rewrite $\mathscr{T}_{0,1/u}$ started at the origin in terms of a sum over characteristics of the individuals in the population at time $t=-\log u$ of a general (Crump-Mode-Jagers) branching process. In a general branching process the location of all offspring is to the right of the parent and locations are interpreted as birth-times of offspring particles.

Figure 2. Branching particles are marked in blue. The positions on $[0,\infty )$ of the frozen particles, which are marked in red, yield the point process $\xi$ .

To this end we divide the offspring of a particle $v=(v_1,\ldots , v_n)$ at location $V(v)$ into branching particles to its left, and frozen particles to its right. The offspring of the branching particles is again divided into branching particles to the left of $V(v)$ and frozen particles to its right, until (after a finite number of steps) the offspring of all branching particles has been divided into branching and frozen particles. The frozen particles are all located to the right of $V(v)$ , they constitute the offspring process of $v$ in the general branching process. Their relative positions form a point process

\begin{equation*}\xi _v=\sum _{{w\in \mathscr{V}, |w|\gt n}\atop {(w_1,\ldots , w_n) = (v_1,\ldots , v_n)}} \delta _{V(w)-V(v)} {\unicode{x1D7D9}}_{\{V(w)\gt V(v) , V(w_1,\ldots ,w_i)\leq V(v) \forall n\leq i\lt |w|\}},\end{equation*}

and they are all copies of the point process $\xi$ depicted in Figure 2. The branching particles form a set $\mathscr{B}_v$ and their locations are all to the left of $V(v)$ .

To construct the general branching process we start with the root located at the origin, considered initially to be frozen, take the point process $\xi _\varnothing$ of frozen particles as birth times of the children of the root and apply the same procedure to every child $v$ of the root. The processes $\xi _v$ and cardinalities $|\mathscr{B}_v|$ are independent and identically distributed over all the frozen particles $v$ . The total number of particles in $\mathscr{T}_{0,1/u}$ equals

\begin{equation*}\sum _{{v \text{ frozen}}\atop {V(v) \leq t}} (1+ |\mathscr{B}_v|),\end{equation*}

where $t=-\log u$ . To obtain convergence of this quantity (properly scaled) we need to find the Malthusian parameter $\alpha \gt 0$ associated to $\xi$ , defined by

\begin{equation*}\mathbb{E} \int _0^\infty e^{-\alpha t} \xi (dt)=1. \end{equation*}

We now show that $\rho _-$ is the Malthusian parameter associated to $\xi$ . To this end we construct a martingale $(M_n)$ as follows: We start with a particle at the origin and $M_0=1$ . In every step, we replace the leftmost particle by its offspring chosen with displacements according to a Poisson process of intensity $\pi$ and leave all other particles alive. Particles in $(0,\infty )$ never branch and remain alive but frozen. If the leftmost particle is in $(0,\infty )$ the process stops and the positions of the frozen particles make up $\xi$ . The random variable $M_n$ is obtained as the sum of all particles $x$ alive after the $n$ th step weighted with $e^{-\rho _- V(x)}$ . Because $\psi (\rho _-)=1$ the process $(M_n)$ is indeed a martingale, and it clearly converges almost surely to

\begin{equation*}M_\infty =\int _0^\infty \mathrm{e}^{-\rho _- t} \xi (\mathrm{d}{t}).\end{equation*}

Now take $\alpha \gt \rho _-$ with $\psi (\alpha )\lt 1$ . Then $M_n$ is dominated by

\begin{align*} M_n \leq \sum _{u \text{ branching}} e^{- \alpha V(u)} + \sum _{u \text{ frozen}}e^{-\rho _-V(u)}. \end{align*}

The right-hand side is integrable, as the sum over frozen particles born from a single particle $x$ in position $V(x)\lt 0$ has expectation at most $e^{-\alpha V(x)}$ and the expected sum over these bounds for all branching particles is itself bounded by $\frac 1{1-\psi (\alpha )}$ . By dominated convergence, we get that $\mathbb{E}[M_\infty ]=1$ and hence $\rho _-$ is the Malthusian parameter.

Theorem 3.1 in Nerman [Reference Nerman13] yields convergence of $(e^{-\rho _- t} Z_t^\phi )$ to a positive random variable $m_\phi Z$ for

\begin{align*} Z_t^\phi \,:\!=\,\sum _{v\colon V(v) \leq t}\phi _v(t-V(v)), \end{align*}

where the sum is over the particles of the general branching process born before time $t$ and the characteristics $\phi _v$ are independent, identically distributed copies of a random function $\phi \colon [0,\infty ) \to [0,\infty )$ satisfying mild technical conditions. Moreover, $Z$ is a positive random variable independent of $\phi$ and $m_\phi$ a positive constant depending on $\phi$ . The conditions of [Reference Nerman13] are satisfied for the process $(Z_t')$ in (3) by [[Reference Nerman13], Corollary 2.5], whence $Z$ is a constant multiple of $W$ , but also when the processes $(\phi (s) \colon s\ge 0)$ are bounded by an integrable random variable and $\mathbb{E}\phi \colon [0,\infty ) \to [0,\infty )$ is continuous.

We now look at the total number $I(0,b)$ of surviving particles of $\mathscr{T}_{0,1}(\log u)$ located in $(\log b,0]$ . We shift all particle positions by $t = -\log u$ . Then the killed branching random walk $ \mathscr{T}_{0,1}(\log u)$ becomes a killed branching random walk $\mathscr{T}_{0,1/u}(0)$ and $I(0,b)$ the number of surviving particles in $(t+\log b,t]$ .

We have $I(0,b)=Z_t^\phi$ for the general branching process with offspring law $\xi$ at the time $t = -\log u$ and for the characteristic

\begin{equation*}\phi _v(s)= \sum _{w \in \mathscr{B}_v} {\unicode{x1D7D9}}_{\{s+\log b\lt V_v(w) \leq s\}},\end{equation*}

where $V_v(w)$ is the relative position of the branching particle $w$ to $v$ . Then $\phi _v(t-V(v))$ is the number of branching particles descending from $v$ (including $v$ itself) located in the interval $(t+\log b,t]$ . This process is dominated by $1+|\mathscr{B}_v|$ . To check that $|\mathscr{B}_\varnothing |$ is integrable, fix $\alpha \gt 0$ with $\psi (\alpha )\lt 1$ . Then we have for $v\in \mathscr{B}_\varnothing$ that $e^{-\alpha V(v)} \geq 1$ and

\begin{equation*}\mathbb{E} \sum _{{v\in \mathscr{B}_\varnothing }\atop {|v|=n}} e^{-\alpha V(v)} \leq \psi (\alpha )^n.\end{equation*}

Hence $\mathbb{E}[|\mathscr{B}_v|]\leq \sum _n \psi (\alpha )^n \leq \frac {1}{1-\psi (\alpha )} \lt \infty .$ As $\mathbb{E} \phi _v$ is clearly continuous the conditions of [[Reference Nerman13], Theorem 3.1] on the characteristics are satisfied.

Altogether this yields that

\begin{equation*}\lim _{u\downarrow 0} u^{\rho _-}I(0,b)= \lim _{t\uparrow \infty } e^{-\rho _- t} Z_t^\phi = Y \quad \text{ in distribution,}\end{equation*}

where the limit $Y$ is a positive constant multiple of the positive martingale limit $W$ .
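The killed branching random walk itself is easy to simulate. The following sketch is an illustration only, not part of the argument: we assume an intensity measure $\pi (dx)=\beta (e^{\gamma x} {\unicode{x1D7D9}}_{x\gt 0} + e^{(1-\gamma ) x} {\unicode{x1D7D9}}_{x\lt 0})\, dx$ of the same form as $\tilde \pi$ in Section 3, killing on the positive half-axis, and illustrative parameter values; all function names are ours. Since only displacements keeping a child in $(-\infty,0]$ are generated, the simulated tree is exactly the killed process.

```python
import math
import random

def poisson(lam, rng):
    """Poisson sample via Knuth's product method (fine for small means)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def offspring(v, beta, gamma, rng):
    """Surviving children of a particle at position v <= 0.

    Displacements x follow a Poisson process with intensity
    beta*e^{gamma x} on (0, -v]   (children placed in (v, 0]),
    beta*e^{(1-gamma) x} on (-inf, 0);
    children placed to the right of 0 are killed, so we never generate them.
    """
    a = -v
    kids = []
    for _ in range(poisson(beta / (1 - gamma), rng)):   # leftward displacements
        x = math.log(rng.random()) / (1 - gamma)        # inverse CDF of e^{(1-gamma)x}, x < 0
        kids.append(v + x)
    mass = beta * (math.exp(gamma * a) - 1) / gamma     # rightward displacements staying <= 0
    for _ in range(poisson(mass, rng)):
        x = math.log(1 + rng.random() * (math.exp(gamma * a) - 1)) / gamma
        kids.append(v + x)
    return kids

def total_progeny(v0, beta=0.1, gamma=0.25, cap=10**5, rng=None):
    """Total number of particles of the killed branching random walk."""
    rng = rng or random.Random(1)
    alive, total = [v0], 1
    while alive and total < cap:                        # subcritical: dies out a.s.
        nxt = []
        for v in alive:
            nxt.extend(offspring(v, beta, gamma, rng))
        total += len(nxt)
        alive = nxt
    return total

rng = random.Random(2024)
X = -math.log(rng.random())        # standard exponential starting height
print(total_progeny(-X, rng=rng))  # finite almost surely when beta < 1/4 - gamma/2
```

Collecting the particle positions instead of only counting them would allow Monte Carlo estimates of quantities such as $I(0,b)$ in the same way.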

2.2 Proof of Proposition 6

Under the assumption $0\lt \beta \lt \frac {1}{4}-\frac {\gamma }{2}$ the leftmost particle of $\mathscr{T}$ drifts to the right, i.e. $\lim _{n\to \infty } \inf _{|v|=n} V(v)=\infty$ , see [[Reference Shi15], Lemma 3.1]. Hence $\inf _{v\in \mathscr{V}} V(v)$ is a finite random variable and it is easy to see that in our case its support is the entire negative half-line. Hence, given $0\lt b\lt 1$ , we can pick $\varepsilon \gt 0$ such that

\begin{equation*}\mathbb{P}_1\big( \inf _{v\in \mathscr{V}} V(v) \gt \log b\big) \geq \varepsilon .\end{equation*}

Additionally we require that, for $Y$ as in Proposition 5, $\varepsilon \gt 0$ satisfies $\mathbb{P} \left ( Y\geq \varepsilon \right ) \geq 5\varepsilon .$ This implies that

\begin{equation*}\liminf _{u \downarrow 0} \inf _{u'\in [ub,u]} \mathbb{P}_{u'} \left ( I(ub,b) \geq \varepsilon u^{-\rho _-} \right ) \geq \liminf _{u \downarrow 0} \mathbb{P}_u \left ( I(0,b) \geq \varepsilon u^{-\rho _-} \right ) - \varepsilon \geq 4 \varepsilon \end{equation*}

and, for suitably large $a\gt 1$ ,

\begin{equation*}\limsup _{u \downarrow 0} \sup _{u'\in [ub,u]} \mathbb{P}_{u'} ( I(ub,0) \geq a u^{-\rho _-}) \leq \limsup _{u \downarrow 0} \mathbb{P}_{ub} ( I(0,0) \geq a u^{-\rho _-}) \leq \frac{\varepsilon}{2}.\end{equation*}

We pick $0\lt u_0\lt b \wedge 2^{-1/\rho _-}$ such that $\inf _{u'\in [ub,u]} \mathbb{P}_{u'} ( I(ub,b) \geq \varepsilon u^{-\rho _-}) \geq 3\varepsilon$ and also $a\gt 1$ such that $\sup _{u'\in [ub,u]} \mathbb{P}_{u'} ( I(ub,0) \geq \frac {a}2 u^{-\rho _-}) \leq \varepsilon$ , for all $0\lt u\lt u_0$ . The exploration algorithm below uses $\varepsilon , a, \rho _-$ and $u_0$ as derived above from the parameter $\pi$ .

We present, for parameters $(\pi , u, m)$ with $m\in \mathbb{N}$ , an exploration algorithm with input

  • a graph $\mathscr{U}'\subset \{1,\ldots , um\}$ with at most $a m^{\rho _-}$ vertices,

  • distinct vertices $u_1 \lt \ldots \lt u_d$ in $\mathscr{U}'$ with $bum\lt u_i\leq um$ and $d\leq m^{\rho _-}$ .

The output of the algorithm consists of

  • a family of pairwise disjoint sets $\mathcal{Y}_1, \ldots , \mathcal{Y}_d \subset \{bm,\ldots ,m\}$ ,

  • a graph $\mathscr{U} \subset \{1,\ldots , m\}$ with at most $a (\frac {m}u)^{\rho _-}$ vertices such that $\mathscr{U}'$ is an embedded subgraph and the sets $\mathcal{Y}_i$ are contained in the connected component of $u_i$ in $\mathscr{U}$ .

By construction the output sets $\mathcal{Y}_1, \ldots , \mathcal{Y}_d$ are pairwise disjoint and $u_i$ is connected to $\mathcal{Y}_i$ by edges in $\mathscr{U}$ . Also, for every $i\in \{1,\ldots ,d\}$ the algorithm adds at most $\frac {a}2u^{-\rho _-}+1$ vertices to the graph $\mathscr{U}$ , so that its output $\mathscr{U}$ satisfies

\begin{align*} |\mathscr{U}| & \leq |\mathscr{U}'|+ d\bigg(\frac{a}{2}u^{-\rho _-}+1\bigg) \leq a (m/u)^{\rho _-} \bigg ( u^{\rho _-} +\frac{1}{2} + \frac{1}{a} u^{\rho _-}\bigg ) \leq a (m/u)^{\rho _-}, \end{align*}

for all $0\lt u\lt u_0$ by choice of $u_0$ . In the following we show how the algorithm can be used to construct a suitably large subgraph of $\mathscr{G}_m$ .
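For the last inequality in the display above the following elementary check, which we spell out for clarity, may help: the bracket satisfies

\begin{equation*} u^{\rho _-}+\frac 12+\frac 1a u^{\rho _-} \leq 1 \quad \Longleftrightarrow \quad \Big(1+\frac 1a\Big)u^{\rho _-}\leq \frac 12, \end{equation*}

which holds for all $0\lt u\lt u_0$ because $u_0^{\rho _-}\lt \frac 12$ by the choice $u_0\lt 2^{-1/\rho _-}$ , once $a$ is large enough that $(1+\frac 1a)u_0^{\rho _-}\leq \frac 12$ .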

We run the algorithm with parameter $(\tilde \pi ,u,m)$ for an intensity measure with a slightly decreased density parameter $0\lt \tilde \beta \lt \beta$ , $0\lt u\lt u_0$ and some large $m$ . This leads to a slightly smaller value of $\rho _-$ , which is referred to as $\rho$ in the statement of Proposition 6. The next lemma shows that the probability of each edge inserted by the algorithm is bounded from above by the corresponding edge probability in $\mathscr{G}_m$ .

Lemma 9. There exists $m(u)\in \mathbb{N}$ such that, for all $m\ge m(u)$ , for all $m \ge s,r \geq bum$ with $s \not =r$ the probability that a particle $v$ in location $V(v)$ with $\pi _m(V(v))=r$ has an offspring $y$ with location $V(y)$ satisfying $\pi _m(V(y))=s$ is at most

\begin{equation*}\beta (r\wedge s)^{-\gamma } (r\vee s)^{\gamma -1}.\end{equation*}

Proof. For a fixed particle $v$ in location $V(v)$ with $\pi _m(V(v))=r$ the probability that it has an offspring $y$ with location $V(y)$ satisfying $\pi _m(V(y))=s$ equals

(4) \begin{equation} 1-\exp \Bigg(\!-\tilde \pi \Bigg( -\sum _{k=s}^m \frac 1k -V(v), -\sum _{k=s+1}^m \frac 1k-V(v)\Bigg]\Bigg). \end{equation}

As $\pi _m(V(v))=r$ we have

\begin{equation*}-\sum _{k=r}^m \frac 1k \lt V(v) \leq -\sum _{k=r+1}^m \frac 1k.\end{equation*}

The probability in (4) is therefore largest when $V(v)=-\sum _{k=r}^m \frac 1k$ . It therefore remains to show that, for $bum\leq s\lt r$ , we have

(5) \begin{equation} 1-\exp \Bigg (-\tilde \pi \Bigg( -\sum _{k=s}^{r-1} \frac 1k, -\sum _{k=s+1}^{r-1}\frac 1k\Bigg]\Bigg ) \leq \beta s^{-\gamma }r^{\gamma -1}, \end{equation}

and, for $bum\leq r\lt s$ , we have

(6) \begin{equation} 1-\exp \Bigg (-\tilde \pi \Bigg(\sum _{k=r}^{s-1} \frac 1k, \sum _{k=r}^{s} \frac 1k\Bigg]\Bigg) \leq \beta s^{\gamma -1}r^{-\gamma }. \end{equation}

For (5) we find that, for some constant $C\gt 0$ , if $m\geq m(u)$ for a suitable $m(u)\in \mathbb{N}$ ,

\begin{align*} \tilde \pi \Bigg( -\sum _{k=s}^{r-1} \frac 1k, -\sum _{k=s+1}^{r-1}\frac 1k\Bigg] & = \frac {\tilde \beta }{1-\gamma }\, \exp \Bigg({-(1-\gamma )\sum _{k=s}^{r-1} \frac 1k} \Bigg)(e^{\frac {1-\gamma }{s+1}}-1)\\ & \leq \bigg (\frac {\tilde \beta }{s+1}+ \frac {C}{(bum)^2}\bigg )\, \exp \big (-(1-\gamma ) (\log (\tfrac {r-1}{s-1}) - \tfrac {C}{bum}) \big )\\[2mm] & \leq \beta s^{-\gamma }r^{\gamma -1}. \end{align*}

Hence, using that $1-e^{-x}\le x$ , we get (5).

For (6) we find that, for some constant $C\gt 0$ , if $m\geq m(u)$ for a suitable $m(u)\in \mathbb{N}$ ,

\begin{align*} \tilde \pi \Bigg( \sum _{k=r}^{s-1} \frac 1k, \sum _{k=r}^{s}\frac 1k\Bigg] & = \frac {\tilde \beta }{\gamma }\exp \bigg({\gamma \sum _{k=r}^{s-1} \frac 1k} \bigg)\big(e^{\frac {\gamma }{s}}-1\big)\\ & \leq \Bigg(\frac {\tilde \beta }{s}+\frac {C}{(bum)^2}\Bigg)\, \exp \Big(\gamma \Big(\log \Big(\frac {s-1}{r-1}\Big) + \tfrac {C}{bum}\Big) \Big)\\[2mm] & \leq \beta s^{\gamma -1}r^{-\gamma }. \end{align*}

Hence, using that $1-e^{-x}\le x$ , we get (6).

Let $E_i$ be the event that the exploration of $u_i$ was successful. This is the case if $\mathcal{Y}_i\not =\emptyset$ or, equivalently, $|\mathcal{Y}_i|\geq \varepsilon u^{-\rho _-}$ . Let $\mathscr{U}_i$ be the graph in Algorithm 1 at the time when the exploration of $u_i$ is completed and $(\mathscr{F}_{\mathscr{U}_i} \colon i=0,\ldots ,d)$ the natural filtration associated with this process. Note that

\begin{equation*}\mathscr{U}'\,=:\,\mathscr{U}_0 \subset \mathscr{U}_1 \subset \ldots \subset \mathscr{U}_d=\mathscr{U},\end{equation*}

and that $E_i\in \mathscr{F}_{\mathscr{U}_i}$ for all $i\in \{1,\ldots ,d\}$ .

Lemma 10. For $0\lt u \lt u_0$ there exists $m(u)\in \mathbb{N}$ such that, for all $m \geq m(u)$ , almost surely,

\begin{equation*}\mathbb{P}(E_{i+1} \mid \mathscr{F}_{\mathscr{U}_i}) \ge \varepsilon .\end{equation*}

Algorithm 1 Branching Random Walk Exploration (π,u,m)

Proof. Let $i \in \{0,\dots ,d-1\}$ . Conditionally on $\mathscr{F}_{\mathscr{U}_i}$ we know the graph $\mathscr{U}_{i}$ and the algorithm explores the branching random walk $\mathscr{T}_{ub,1}(\phi _m(u_{i+1}))$ . We have to control the probability that the algorithm stops without $E_{i+1}$ occurring. This can happen on three different occasions:

  • Line 6: For an explored particle $v$ we have that $\pi _m(V(v)) \in \mathscr{U}_i$ .

    Since $\pi _m(V(v)) \in (bum,m]$ we can use Lemma 9, and find $m(u)\in \mathbb{N}$ such that for all $m \geq m(u)$ we can upper bound the probability that $V(v)$ is in a region that gets projected to a fixed vertex $j \in \mathscr{U}_i$ by

    \begin{equation*} \beta \left ( (\pi _m(V(v)) \wedge j)^{-\gamma } (\pi _m(V(v)) \vee j)^{\gamma -1} \right ) \leq \frac {\beta }{bum}.\end{equation*}
    There are at most $|\mathscr{U}_i|\leq a u^{-\rho _-}m^{\rho _-}$ such vertices. Therefore we get
    \begin{align*} \mathbb{P}( \pi _m(V(v)) \in \mathscr{U}_i) \leq \frac {\beta |\mathscr{U}_i|}{bum} \leq a\beta b^{-1} u^{-\rho _-{-}1}m^{\rho _-{-}1} \, . \end{align*}
    Due to the condition in line 11, there are at most $\frac {a}{2}u^{-\rho _-}+1$ exploration steps where we have to account for this error before the algorithm stops. Hence we can bound the probability in the complete exploration of the tree (the complete for-loop) by
    \begin{align*} \Big (\frac{a}{2}u^{-\rho _-}+1\Big )u^{-\rho _- -1} a \beta b^{-1} m^{\rho _-{-}1}. \end{align*}
    Increase $m(u)$ if necessary so that for $m \geq m(u)$ this probability is bounded by $\varepsilon$ .
  • Line 11: During the exploration we find more than $\frac {a}{2}u^{-\rho _-}$ vertices.

    By choice of $a$ and $u_0$ we have

    \begin{equation*}\mathbb{P}\Big(|\mathcal{B}_i| \geq \frac{a}{2}u^{-\rho _-}\Big)= \mathbb{P}_{u_{i+1}} \Big( I(ub,0) \geq \frac{a}2 u^{-\rho _-}\Big) \leq \varepsilon \, .\end{equation*}
  • Line 21: We do not find at least $\varepsilon u^{-\rho _-}$ vertices that we can add to $\mathcal{Y}_i$ . This probability is bounded by $\mathbb{P}_{u_{i+1}} ( I(ub,b) \lt \varepsilon u^{-\rho _-}) \leq 1- 3\varepsilon$ .

Taking a union bound we get $\mathbb{P}(E_{i+1}^{\mathrm c} \mid \mathscr{F}_{\mathscr{U}_i})\leq 1-\varepsilon$ , as requested.

To complete the construction we have to remove the possible dependence between the sizes of the sets $\mathcal{Y}_1, \ldots , \mathcal{Y}_{d}$ by means of the following decoupling lemma.

Lemma 11. Let $\mathcal{Y}_1, \ldots , \mathcal{Y}_{d}$ be random sets such that, almost surely,

\begin{equation*}\mathbb{P}(|\mathcal{Y}_{i+1}|\ge k \mid \mathscr{F}_{\mathscr{U}_i}) \ge \epsilon ,\end{equation*}

then there exist random sets ${\mathcal{X}_i} \subset \mathcal{Y}_i$ with $X_i\,:\!=\,|\mathcal{X}_i|$ elements such that $X_1,\ldots ,X_d$ are independent with $\mathbb{P}(X_{i}= k) = \epsilon \text{ and } \mathbb{P}(X_{i}=0) = 1- \epsilon .$

Proof. Let $U_1,\ldots , U_d$ be independent and uniformly distributed on $(0,1)$ . Let $\mathscr{E}_1$ be the event that $\mathcal{Y}_1$ has at least $k$ elements and $U_1\leq \frac \epsilon {\mathbb{P}(|\mathcal{Y}_1|\ge k)}$ so that $\mathbb{P}(\mathscr{E}_1)=\epsilon$ . On the event $\mathscr{E}_1$ we draw $k$ elements from $\mathcal{Y}_1$ without replacement and put $\mathcal{X}_1$ to be the set of elements thus drawn. On $\mathscr{E}_0=\mathscr{E}_1^c$ we set ${\mathcal{X}_1}=\emptyset$ . Then $X_1$ has the desired distribution.

Now let $\mathscr{E}_{i1}$ be the event $\mathscr{E}_i$ intersected with the event that $\mathcal{Y}_2$ has at least $k$ elements and $U_2\leq \frac \epsilon {\mathbb{P}(|\mathcal{Y}_2|\ge k \mid \mathscr{E}_i)}$ . Then

\begin{equation*}\mathbb{P}(\mathscr{E}_{i1})= \mathbb{P}(\mathscr{E}_i) \mathbb{P}\big ( |\mathcal{Y}_2|\ge k, U_2\leq \tfrac \epsilon {\mathbb{P}(|\mathcal{Y}_2|\ge k \mid \mathscr{E}_i)}\mid \mathscr{E}_i\big ) = \epsilon \mathbb{P}(\mathscr{E}_i).\end{equation*}

On the event $\mathscr{E}_{i1}$ we draw $k$ elements from $\mathcal{Y}_2$ without replacement and put $\mathcal{X}_2$ to be the set of elements thus drawn. On $\mathscr{E}_{i0}=\mathscr{E}_i \setminus \mathscr{E}_{i1}$ we set ${\mathcal{X}_2}=\emptyset$ . Then

\begin{align*} \mathbb{P}(X_1=k, X_2=k) = \mathbb{P}( \mathscr{E}_{11}) = \epsilon ^2, \qquad & \mathbb{P}(X_1=0, X_2=k) = \mathbb{P}( \mathscr{E}_{01}) = (1-\epsilon )\epsilon , \\ \mathbb{P}(X_1=k, X_2=0) = \mathbb{P}( \mathscr{E}_{10}) = \epsilon (1-\epsilon ), \qquad & \mathbb{P}(X_1=0, X_2=0) = \mathbb{P}( \mathscr{E}_{00}) = (1-\epsilon )^2. \end{align*}

This implies that $X_2$ has the desired distribution and $X_1$ and $X_2$ are independent. We continue with this method until $\mathcal{X}_1 , \ldots , \mathcal{X}_{d}$ are constructed.
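The thinning underlying the decoupling lemma can be illustrated numerically. In the toy sketch below (our illustration; names and parameter values are not from the paper) the set sizes are i.i.d. geometric for simplicity, so the success probability $p=\mathbb{P}(|\mathcal{Y}|\geq k)$ is known in closed form, and acceptance requires both $|\mathcal{Y}|\geq k$ and an auxiliary uniform $U\leq \epsilon /p$ ; the acceptance probability is then exactly $\epsilon$ .

```python
import random

def thinned_indicator(y_size, u, k, eps, p):
    """Return k iff the set is large enough AND the auxiliary uniform u
    falls below eps/p; the overall success probability is then exactly
    p * (eps/p) = eps, regardless of the law of the set size."""
    return k if (y_size >= k and u <= eps / p) else 0

rng = random.Random(42)
k, eps = 3, 0.3
q = 0.4                           # toy set sizes: geometric on {1, 2, ...}
p = (1 - q) ** (k - 1)            # P(size >= k) = 0.36 >= eps, as required
n = 200_000
hits = 0
for _ in range(n):
    size = 1
    while rng.random() > q:       # grow the set with probability 1 - q
        size += 1
    if thinned_indicator(size, rng.random(), k, eps, p) == k:
        hits += 1
print(hits / n)                   # concentrates near eps = 0.3
```

In the proof above the same acceptance rule is applied with the conditional probability given $\mathscr{E}_i$ in place of $p$ , which is what makes the resulting indicators independent.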

Proof of Proposition 6. To prove Proposition 6 we run Algorithm 1 with parameters $(\tilde \pi ,u,m)$ for the intensity measure

\begin{equation*}\tilde \pi (dx)=\tilde \beta (e^{\gamma x} {\unicode{x1D7D9}}_{x\gt 0} + e^{(1-\gamma ) x} {\unicode{x1D7D9}}_{x\lt 0}) \, dx,\end{equation*}

with a slightly decreased density parameter $0\lt \tilde \beta \lt \beta$ , $0\lt u\lt u_0$ and $m\ge m(u)$ . The input consists of the vertices $u_1,\ldots , u_d\in \mathcal{U}'$ and a graph $\mathscr{U}'$ distributed like the restriction of $\mathscr{G}_m$ to its vertex set $\mathcal{U}'$ . Lemma 9 ensures that the algorithm inserts an edge into $\mathscr{U}$ with a probability no larger than the edge probabilities in $\mathscr{G}_m$ . Hence the output graph $\mathscr{U}$ is stochastically dominated by the restriction of $\mathscr{G}_m$ to its vertex set, denoted $\mathcal{U}$ . We can therefore add vertices to $\mathscr{U}$ so that it is distributed like the restriction of $\mathscr{G}_m$ to $\mathcal{U}$ .

By Lemma 10 the set $\mathcal{U}$ contains disjoint subsets $\mathcal{Y}_1, \ldots , \mathcal{Y}_d$ with

\begin{equation*}\mathbb{P}(|\mathcal{Y}_{i+1}|\geq \varepsilon u^{-\rho _-} \mid \mathscr{F}_{\mathscr{U}_i}) \ge \varepsilon ,\end{equation*}

and Lemma 11 gives the existence of random sets $\mathcal{X}_i \subset \mathcal{Y}_i$ with size $X_i=|\mathcal{X}_i|$ such that

\begin{equation*} X_i = {\begin{cases} \lceil \varepsilon u^{-\rho _-} \rceil , & \text{with probability } \varepsilon \gt 0, \\ 0, & \text{with probability } 1 - \varepsilon , \end{cases}} \end{equation*}

and $X_1,\ldots , X_d$ independent. By construction the connected component of $u_i$ in $\mathscr{U}$ contains $\mathcal{X}_i$ with $\mathcal{X}_i \cap \mathcal{X}_j = \emptyset$ for all $i \neq j$ . The construction can be completed by embedding $\mathscr{U}$ into $\mathscr{G}_m$ by adding vertices and independent edges.

3. Proof of Theorem 3

The lower bound follows directly from Theorem 2. We need to provide a matching upper bound. Let $Z_k=| \{v \colon |\mathscr{C}_n(v)|\ge k\} |$ where $\mathscr{C}_n(v)$ is the connected component of $v$ in $\mathscr{G}_n$ . Then we have

(7) \begin{equation} \mathbb{P} \big (| S^{\text{max}}_n| \ge k\big ) = \mathbb{P} \big (Z_k \ge k\big ) \leq \frac 1k \mathbb{E} Z_k = \frac {n}k \mathbb{P} \big (| \mathscr{C}_n(O_n)| \ge k\big ), \end{equation}

where $O_n\in \{1,\ldots ,n\}$ is uniformly chosen. We complete the argument in two steps. First, we dominate the component $\mathscr{C}_n(O_n)$ by a branching random walk $\mathscr{T}_{0,1}(-X)$ started from a particle placed at the random position $-X$ and killed upon leaving the negative half-axis. For this purpose we slightly increase the edge density parameter $\beta$ in the definition of $\mathscr{T}_{0,1}(-X)$ . The domination then holds unless the branching random walk visits a point located near (or to the left of) $-\log n$ . The following proposition will be proved in Section 3.1.

Proposition 12. For any $\beta \lt \tilde \beta \lt \beta _c$ and $\epsilon \gt 0$ there is a coupling of $\mathscr{G}_n$ and a killed branching random walk $\mathscr{T}_{0,1}({-X})$ with intensity measure

\begin{equation*}\tilde \pi (dx)=\tilde \beta (e^{\gamma x} {\unicode{x1D7D9}}_{x\gt 0} + e^{(1-\gamma ) x} {\unicode{x1D7D9}}_{x\lt 0}) \, dx\end{equation*}

and standard exponential $X$ such that, for sufficiently large $n$ ,

\begin{equation*}\mathbb{P}\big ( | \mathscr{C}_n(O_n)| \gt |\mathscr{T}_{0,1}(-X)|\big ) \leq \mathbb{P}\big ( \exists x \in \mathscr{T}_{0,1}(-X) \text{ with } V(x) \leq -(1-\epsilon )\log n \big ).\end{equation*}

The second step is to show that the probability that the killed branching random walk has a particle $x$ with location $V(x) \leq -(1-\epsilon )\log n$ or that its total progeny contains substantially more than $n^{\rho _-}$ points is sufficiently small.

Proposition 13. For all $\epsilon \gt 0$ and sufficiently large $n$ we have

  (a) $\displaystyle \mathbb{P}\big ( \exists x \in \mathscr{T}_{0,1}(-X) \text{ with } V(x) \leq -(1-\epsilon )\log n \big ) \leq n^{-\rho _++\epsilon }.$

  (b) $\displaystyle \mathbb{P} \big ( |\mathscr{T}_{0,1}(-X)| \ge n^{\rho _-+\epsilon } \big ) \leq n^{-\rho _++\epsilon }.$

For convenience we prove Proposition 13 in Section 3.2 for the original intensity measure $\pi$ , but of course the result can be applied to $\tilde \pi$ with a density parameter $\beta \lt \tilde \beta \lt \beta _c$ so close to $\beta$ that the difference between the resulting $\rho _\pm$ and the original values is as small as required.

Combining Propositions 12 and 13 we get, for all $\epsilon \gt 0$ and sufficiently large $n$ , that

\begin{align*} \mathbb{P} \big (|\mathscr{C}_n(O_n)| \ge n^{\rho _-+\epsilon } \big ) &\leq \mathbb{P} \big (|\mathscr{C}_n(O_n)| \gt |\mathscr{T}_{0,1}(-X)| \big ) + \mathbb{P} \big ( |\mathscr{T}_{0,1}(-X)| \ge n^{\rho _-+\epsilon } \big )\\ & \leq 2n^{-\rho _++\epsilon }. \end{align*}

Finally combining this with (7) we infer that

\begin{align*} \mathbb{P} \big (| S^{\text{max}}_n| \ge n^{\rho _-+2\epsilon }\big ) & \leq n^{1-\rho _-{-}2\epsilon } \mathbb{P} \big ( | \mathscr{C}_n(O_n)| \ge n^{\rho _-+2\epsilon }\big ) \\ & {\leq n^{1-\rho _-{-}2\epsilon } \mathbb{P} \big ( | \mathscr{C}_n(O_n)| \ge n^{\rho _-+\epsilon }\big )} \\ & \leq 2n^{1-(\rho _-+\rho _+)-\epsilon }= 2n^{-\epsilon } \to 0, \end{align*}

as required.
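The final identity uses $\rho _-+\rho _+=1$ . This can be checked directly: assuming, as is standard for these models, that $\psi (t)=\int e^{-tx}\,\pi (dx)$ and that $\rho _-\lt \rho _+$ are the two roots of $\psi (t)=1$ , the intensity measure $\pi (dx)=\beta (e^{\gamma x} {\unicode{x1D7D9}}_{x\gt 0} + e^{(1-\gamma ) x} {\unicode{x1D7D9}}_{x\lt 0})\, dx$ gives

\begin{equation*}\psi (t)=\beta \Big(\frac {1}{t-\gamma }+\frac {1}{1-\gamma -t}\Big), \qquad \gamma \lt t\lt 1-\gamma ,\end{equation*}

which satisfies $\psi (1-t)=\psi (t)$ , so the symmetry forces $\rho _+=1-\rho _-$ .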

3.1 Proof of Proposition 12

For the coupling we sample a killed branching random walk $\mathscr{T}_{0,1}(-X)$ started in $-X$ with intensity measure $\tilde \pi$ which has a slightly increased (but still subcritical) density parameter $\tilde \beta \gt \beta$ . All particles on the positive halfline and their descendants are killed. We use the projection $\pi _n$ defined in (2) to project all particle locations on the negative half-line onto vertices in $\{1,\ldots ,n\}$ and retain all edges as in the genealogical tree of $\mathscr{T}_{0,1}(-X)$ . We denote the resulting multigraph by $\mathscr{K}_n$ . To prove that this coupling has the properties claimed in Proposition 12 we use the following lemma.

Lemma 14. There exists $n_0\in \mathbb{N}$ such that, for all sufficiently large $n$ ,

  (i) for all $n \ge m,r \geq n_0$ with $m \not =r$ the probability that a particle $x$ in location $V(x)$ with $\pi _n(V(x))=r$ has an offspring $y$ with location $V(y)$ satisfying $\pi _n(V(y))=m$ is at least

    \begin{equation*}\beta (r\wedge m)^{-\gamma } (r\vee m)^{\gamma -1}.\end{equation*}
  (ii) for all $n \ge r \gt n_0$ the probability that a particle $x$ in location $V(x)$ with $\pi _n(V(x))=r$ has at least one offspring $y$ with location $V(y)$ satisfying $\pi _n(V(y))\leq n_0$ is at least

    \begin{equation*}1-\prod _{m=1}^{n_0} (1-\beta m^{-\gamma }r^{\gamma -1}).\end{equation*}

When the killed branching random walk is sampled we first check whether it has a particle $y$ with location $V(y)$ satisfying $\pi _n(V(y))\leq n_0$ . If this is the case then, for sufficiently large $n$ , we have $V(y) \leq -(1-\epsilon )\log n$ . If this is not the case, then all particle locations of $\mathscr{T}_{0,1}(-X)$ are projected onto vertices with index at least $n_0$ . Using Lemma 14 to compare the edge probabilities, we see that in this case $\mathscr{G}_n$ is dominated by $\mathscr{K}_n$ , which proves Proposition 12.

Proof of Lemma 14 (i). The probability of the event that a fixed particle $x$ in location $V(x)$ with $\pi _n(V(x))=r$ has an offspring $y$ with location $V(y)$ satisfying $\pi _n(V(y))=m$ equals

(8) \begin{equation} 1-\exp \Bigg (-\tilde \pi \Bigg( -\sum _{k=m}^n \frac 1k -V(x), -\sum _{k=m+1}^n \frac 1k-V(x)\Bigg]\Bigg). \end{equation}

As $\pi _n(V(x))=r$ we have

\begin{equation*}-\sum _{k=r}^n \frac 1k \lt V(x) \leq -\sum _{k=r+1}^n \frac 1k.\end{equation*}

The probability in (8) is therefore smallest when $V(x)=-\sum _{k=r+1}^n \frac 1k$ . It therefore remains to show that there exists $n_0\in \mathbb{N}$ such that, for $n_0\leq m\lt r$ , we have

(9) \begin{equation} 1-\exp \Bigg(-\tilde \pi \Bigg( -\sum _{k=m}^{r} \frac 1k, -\sum _{k=m+1}^{r}\frac 1k\Bigg]\Bigg) \geq \beta m^{-\gamma }r^{\gamma -1}, \end{equation}

and, for $n_0\leq r\lt m$ , we have

(10) \begin{equation} 1-\exp \Bigg(-\tilde \pi \Bigg(\sum _{k=r+1}^{m-1} \frac 1k, \sum _{k=r+1}^{m} \frac 1k\Bigg]\Bigg) \geq \beta m^{\gamma -1}r^{-\gamma }. \end{equation}

For (9) fix $\beta \lt \beta '\lt \tilde \beta$ . Then

\begin{align*} \tilde \pi \Bigg(\!-\sum _{k=m}^{r} \frac 1k, -\sum _{k=m+1}^{r}\frac 1k\Bigg] & = \frac {\tilde \beta }{1-\gamma }\exp \Bigg({-(1-\gamma )\sum _{k=m+1}^{r} \frac 1k} \Bigg)( e^{\frac {1-\gamma }{m+1}}-1)\\ & \geq \frac {\tilde \beta }{m+1} \exp \Big (-(1-\gamma ) \left(\log \left(\tfrac {r}m\right) + \tfrac {C}{n_0}\right) \Big )\\[2mm] & \geq \beta ' m^{-\gamma }r^{\gamma -1}, \end{align*}

for some constant $C\gt 0$ , if $n_0\leq m\lt r$ for a suitable $n_0\in \mathbb{N}$ . Hence, using that $1-e^{-x}\ge (\beta /\beta ')x$ for sufficiently small $x$ , we get

\begin{align*} 1-\exp \left(\!-\tilde \pi \left(-\sum _{k=m}^{r} \frac 1k, -\sum _{k=m+1}^{r}\frac 1k\right]\right) & \geq \beta m^{-\gamma }r^{\gamma -1}. \end{align*}

The calculation giving (10) is analogous.

Proof of Lemma 14 (ii). For $\beta \lt \beta '\lt \beta ''\lt \tilde \beta$ we have

\begin{align*} \tilde \pi \left ( -\infty , -\sum _{k=n_0+1}^{r}\frac 1k\right] & = \frac {\tilde \beta }{1-\gamma }\exp \left ({-(1-\gamma )\sum _{k=n_0+1}^{r} \frac 1k} \right)\\ & \geq \frac {\tilde \beta }{1-\gamma } \exp \left (-(1-\gamma ) \left(\log \left(\tfrac {r}{n_0}\right) + \tfrac {C}{n_0}\right) \right)\\[2mm] & \geq \frac {\beta ''}{1-\gamma } n_0^{1-\gamma }r^{\gamma -1} \geq \beta ' \sum _{m=1}^{n_0} m^{-\gamma }r^{\gamma -1}, \end{align*}

for some constant $C\gt 0$ and $n_0\in \mathbb{N}$ sufficiently large such that $\tilde \beta e^{-(1-\gamma )C/n_0}\ge \beta ''$ . Hence

\begin{equation*}1-\exp \left(-\tilde \pi \left( -\infty , -\sum _{k=n_0+1}^{r}\frac 1k\right]\right) \geq 1- \prod _{m=1}^{n_0} e^{-\beta ' m^{-\gamma }r^{\gamma -1}}.\end{equation*}

Using that $e^{-x}\le 1-(\beta /\beta ')x$ for sufficiently small $x$ the result follows.

3.2 Proof of Proposition 13

This section is concerned with large deviation results for the killed branching random walk. The results here are rough versions of the very fine asymptotic results presented in [Reference Aïdékon, Hu and Zindy1]. The key difference is that in our branching random walk every particle has infinitely many offspring, so that the moment requirements crucially used in [Reference Aïdékon, Hu and Zindy1] are not satisfied here. Instead we use the moment bound on

(11) \begin{equation} W_n=\sum _{|x|=n} e^{-\rho _- V(x)}, \end{equation}

which we derived in Lemmas 7 and 8 by exploiting the Poisson property of the offspring distribution. Recall that $\mathbb{P}_u, \mathbb{E}_u$ refer to initial particles in position $\log u$ . Shifting $\log u$ to the origin in Lemma 8 we get, for $1\lt p\lt \rho _+/\rho _-$ , that

(12) \begin{equation} \mathbb{E}_u \big [W_n^p\big ] \leq C u^{-p\rho _-}, \end{equation}

for some constant $C\gt 0$ . We now look at several generations and allow the starting point to be a uniform random variable, note that the expectation $\mathbb{E}$ refers to exactly this situation.

Lemma 15. For $1\lt p\lt \rho _+/\rho _-$ we have, for all $N\in \mathbb{N}$ , that

\begin{equation*}\mathbb{E} \left[\left(\sum _{n=1}^N W_n \right)^p\right] \leq C N^{p+1}.\end{equation*}

Proof. We first estimate

\begin{equation*}\left( \sum _{n=1}^N W_n \right)^p \leq N^p \max _{n=1}^N W_n^p \leq N^p \sum _{n=1}^N W_n^p.\end{equation*}

By (12) we infer from this that

\begin{equation*}\mathbb{E}_u\left[\left( \sum _{n=1}^N W_n \right)^p \right] \leq N^p \sum _{n=1}^N \mathbb{E}_u\big [W_n^p\big ] \leq C N^{p+1} u^{-p\rho _-}.\end{equation*}

Now take $u$ uniformly random, split according to its value and apply the above, to get

\begin{align*} \mathbb{E} \left[\left( \sum _{n=1}^N W_n \right)^p\right] & \leq \sum _{j=0}^\infty \mathbb{P}\left( j\leq -\log U \lt j+1\right) \mathbb{E}_{e^{-j-1}}\left[ \left( \sum _{n=1}^N W_n \right)^p \right]\\ & \leq C N^{p+1} \sum _{j=0}^\infty e^{-j+p\rho _-(j+1)} \\ & \leq C N^{p+1} \frac {e^{p\rho _-} }{1-e^{p\rho _-{-}1}}, \end{align*}

as $p\rho _-\lt 1$ . This completes the proof.

The next auxiliary lemma we need for the proof of Proposition 13 is an easy large deviations bound for the position of the leftmost particle in the branching random walk. Recall that $t^*\in (\rho _-, \rho _+)$ is uniquely defined as the solution of

\begin{equation*} \frac 1{t^*} \log \psi (t^*) = \frac {\psi '(t^*)}{\psi (t^*)}.\end{equation*}

Lemma 16. For every $0\lt \delta \lt -\frac {\psi '(t^*)}{\psi (t^*)}$ there exists $J(\delta )\gt 0$ such that

\begin{equation*}\mathbb{P}_1 \left( \min _{|x|=N} V(x) \leq \delta N \right) \leq e^{-N {J(\delta )}}.\end{equation*}

Proof. We pick $\rho _-\lt t\lt t^*$ and by strict convexity of $\log \psi$ we get

\begin{equation*} \frac 1{t} \log \psi (t) \lt \frac {\psi '(t^*)}{\psi (t^*)}.\end{equation*}

Now we use the exponential Chebyshev inequality to get

\begin{align*} \mathbb{P}_1 \left( \min _{|x|=N} V(x) \leq \delta N \right) & \leq \mathbb{P}_1 \left( e^{-t \min \limits _{|x|=N} V(x)} \geq e^{-t\delta N} \right)\\ & \leq e^{t\delta N} \mathbb{E}_1 \left[ \sum _{|x|=N} e^{-t V(x)} \right] =\exp \left ( tN\Big(\delta + \frac 1t \log \psi (t)\Big) \right )\\ & \leq \exp \left( tN\left (\delta + \frac {\psi '(t^*)}{\psi (t^*)} \right )\right). \end{align*}

Now we let ${J(\delta )}=t\big({-}\delta -\frac {\psi '(t^*)}{\psi (t^*)}\big) \gt 0$ to get the desired result.
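The quantities appearing in this lemma can be computed numerically. The sketch below is illustrative: it assumes the closed form $\psi (t)=\beta (\frac 1{t-\gamma }+\frac 1{1-\gamma -t})$ obtained from the intensity measure $\pi (dx)=\beta (e^{\gamma x} {\unicode{x1D7D9}}_{x\gt 0} + e^{(1-\gamma ) x} {\unicode{x1D7D9}}_{x\lt 0})\, dx$ , that $\rho _\pm$ are the two roots of $\psi (t)=1$ , and illustrative parameter values.

```python
import math

# Illustrative parameters; subcriticality requires beta < 1/4 - gamma/2.
beta, gamma = 0.1, 0.25

def psi(t):
    # assumed Laplace transform of pi(dx), finite for gamma < t < 1 - gamma
    return beta * (1.0 / (t - gamma) + 1.0 / (1.0 - gamma - t))

def dpsi(t):
    return beta * (-1.0 / (t - gamma) ** 2 + 1.0 / (1.0 - gamma - t) ** 2)

# psi(t) = 1 reduces to t^2 - t + gamma(1-gamma) + beta(1-2*gamma) = 0
disc = (1 - 2 * gamma) * (1 - 2 * gamma - 4 * beta)
rho_minus = (1 - math.sqrt(disc)) / 2
rho_plus = (1 + math.sqrt(disc)) / 2

# t* solves log psi(t) = t psi'(t)/psi(t); by log-convexity of psi the
# difference g below is decreasing on (rho_-, rho_+), so bisection applies
def g(t):
    return math.log(psi(t)) - t * dpsi(t) / psi(t)

lo, hi = rho_minus + 1e-9, rho_plus - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
t_star = 0.5 * (lo + hi)

delta_max = -dpsi(t_star) / psi(t_star)   # admissible range of delta in Lemma 16
print(rho_minus, rho_plus, t_star, delta_max)
```

For these parameters one can check that $\rho _-+\rho _+=1$ and $\rho _-\lt t^*\lt \frac 12$ , consistent with the symmetry of $\psi$ about $\frac 12$ .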

Combining the last two lemmas gives us the main step in the proof of Proposition 13. Let $\tilde {W}_n$ be as in (11) but with the sum restricted to the particles of the killed branching random walk.

Lemma 17. We have, for every $\epsilon \gt 0$ and all sufficiently large $n$ ,

\begin{equation*}\mathbb{P} \left( \sum _{k=0}^{\infty } {\tilde {W}_k} \ge n^{\rho _-+\epsilon } \right) \leq n^{-\rho _++\epsilon }.\end{equation*}

Proof. We fix $N\in \mathbb{N}$ , whose value we determine later. Then we split the left-hand side according to survival up to generation $N$ . This yields

\begin{align*} {\mathbb{P} \left( \sum _{k=0}^{\infty } \tilde {W}_k \ge n^{\rho _-+\epsilon } \right) } & \leq \mathbb{P} \left( \sum _{k=0}^{N-1} W_k \ge n^{\rho _-+\epsilon } \right) + \mathbb{P} \left ( {\tilde {W}_N}\gt 0 \right ) \\ & \leq n^{-p(\rho _-+\epsilon )} \mathbb{E} \left[ \left( \sum _{k=0}^{N-1} W_k \right)^p \right] + \mathbb{P} \left( \min _{|x|=N} V(x) \leq 0 \right) \\ & \leq CN^{p+1} \, n^{-p(\rho _-+\epsilon )} + \mathbb{P}_1 \left( \min _{|x|=N} V(x) \leq X \right), \end{align*}

using Lemma 15 in the last step. We pick $0\lt \delta \lt -\frac {\psi '(t^*)}{\psi (t^*)}$ and get, by Lemma 16,

\begin{align*} \mathbb{P}_1 \bigg( \min _{|x|=N} V(x) \leq X \bigg) & \leq \mathbb{P}_1 \bigg( \min _{|x|=N} V(x) \leq \delta N \bigg) + \mathbb{P}_1 \big ( X \geq \delta N\big ) \leq e^{-N {J(\delta )}}+ e^{-\delta N} \leq 2 e^{-N ({J(\delta )}\wedge \delta )}. \end{align*}

Setting $N =\lceil (\log n) \frac {\rho _+}{{J(\delta )}\wedge \delta } \rceil$ completes the proof.

Proof of Proposition 13 (a). Let $\mathscr{Z}_k$ be the set of locations of the $k$ th generation in the killed branching random walk. Then if there is $x \in \mathscr{Z}_k$ with $V(x) \leq -(1-\epsilon )\log n$ we have ${\tilde {W}_k} \ge e^{\rho _- (1-\epsilon )\log n}=n^{\rho _- (1-\epsilon )}$ . Hence

\begin{align*} \mathbb{P}\left ( \exists x \in \mathscr{T}_{0,1}(-X) \text{ with } V(x) \leq -(1-\epsilon )\log n \right ) \leq \mathbb{P}\left( \sum _{k=0}^\infty {\tilde {W}_k} \geq n^{\rho _- (1-\epsilon )} \right), \end{align*}

so that the required bound holds by Lemma 17.

Proof of Proposition 13 (b). We first replace the total population size of the killed branching random walk by the sum of weighted particles with the same starting point,

\begin{equation*}|\mathscr{T}_{0,1}(-X)| \leq \sum _{n=0}^\infty {\tilde {W}_n},\end{equation*}

using that in the sum defining $\tilde {W}_n$ all particles located to the left of the origin get weight at least one. Hence

\begin{align*} \mathbb{P} \left ( |\mathscr{T}_{0,1}(-X)| \ge n^{\rho _-+\epsilon } \right) & \leq \mathbb{P} \left( \sum _{n=0}^{\infty } {\tilde {W}_n} \ge n^{\rho _-+\epsilon } \right), \end{align*}

and again the required bound holds by Lemma 17.

References

Aïdékon, E., Hu, Y. and Zindy, O. (2013) The precise tail behavior of the total progeny of a killed branching random walk. Ann. Probab. 41 3786–3878.
Biggins, J. D. (1977) Martingale convergence in the branching random walk. J. Appl. Probab. 14 25–37.
Bollobás, B., Janson, S. and Riordan, O. (2007) The phase transition in inhomogeneous random graphs. Random Struct. Algorithms 31 3–122.
Banerjee, S., Bhamidi, S., van der Hofstad, R. and Ray, R. (2025) Non-equilibrium coagulation processes and subcritical percolation on evolving networks. arXiv preprint 2512.15561.
Dereich, S. and Mörters, P. (2013) Random networks with sublinear preferential attachment: The giant component. Ann. Probab. 41 329–384.
Iksanov, A., Liang, X. and Liu, Q. (2019) On Lp-convergence of the Biggins martingale with complex parameter. J. Math. Anal. Appl. 479 1653–1669.
Janson, S. (2008) The largest component in a subcritical random graph with a power law degree distribution. Ann. Appl. Probab. 18 1651–1668.
Kyprianou, A. (2000) Martingale convergence and the stopped branching random walk. Probab. Theory Relat. Fields 116 405–419.
Last, G. and Penrose, M. (2018) Lectures on the Poisson Process, Volume 7 of IMS Textbooks. Cambridge University Press.
Lyons, R. (1997) A simple path to Biggins’ martingale convergence for branching random walk. In Classical and Modern Branching Processes (Athreya, K. B. and Jagers, P., eds), Springer, pp. 217–221.
Mörters, P. (2022) Lecture notes on random graphs. Available at https://www.mi.uni-koeln.de/~moerters/lectures/RandomGraphs.pdf.
Mörters, P. and Schleicher, N. (2024) Early typical vertices in subcritical random graphs of preferential attachment type. In AofA 2024, Volume 302 of LIPIcs (Mailler, C. and Wild, S., eds), pp. 14:1–14:10.
Nerman, O. (1981) On the convergence of supercritical general (C-M-J) branching processes. Z. Wahrscheinlichkeitstheor. Verw. Geb. 57 365–395.
Ray, R. (2024) Stochastic Processes on Preferential Attachment Models: Understanding Global Structures from Local Properties (PhD Thesis). Eindhoven University of Technology, Mathematics and Computer Science.
Shi, Z. (2016) Branching Random Walks: École d’Été de Probabilités de Saint-Flour XLII – 2012. Lecture Notes in Mathematics. Springer.
Figure 1. Illustration of Proposition 6: The vertices $u_1, \ldots , u_4$ are successively explored, the exploration of $u_1$ is depicted. The exploration yields particles in the entire interval $[bum,m]$ but only the red particles located in $[bm,m]$ are included in $\mathcal{X}_1$. A logarithmic scale is used on the abscissa.

Figure 2. Branching particles are marked in blue. The positions on $[0,\infty )$ of the frozen particles, which are marked in red, yield the point process $\xi$.