
A star is born: Explosive Crump–Mode–Jagers branching processes

Published online by Cambridge University Press:  13 April 2026

Bas Lodewijks*
Affiliation:
Universität Augsburg and University of Sheffield
*Postal address: Faculty of Mathematics, Natural Sciences, and Materials Engineering, Universität Augsburg, Universitätsstraße 2, 86159 Augsburg.

Abstract

We study a family of Crump–Mode–Jagers branching processes in a random environment that explode, i.e. that grow infinitely large in finite time with positive probability. Building on recent work of Iyer and the author (‘On the structure of genealogical trees associated with explosive Crump–Mode–Jagers branching processes’, arXiv:2311.14664, 2023), we weaken certain assumptions required to prove that the branching process, at the time of explosion, contains a (unique) individual with infinite offspring. We then apply these results to super-linear preferential attachment models. In particular, we fill gaps in some of the cases analysed in Appendix A of the work of Iyer and the author and study a large range of previously unattainable cases.

Information

Type
Original Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

In a Crump–Mode–Jagers (CMJ) branching process (named after [Reference Crump and Mode4, Reference Crump and Mode5, Reference Jagers12]), an ancestral root individual produces offspring according to a collection of points on the non-negative real line. Each individual ‘born’ produces offspring according to an identically distributed collection of points, translated by its birth time. One is generally interested in properties of the population as a function of time. Classical work from the 1970s and 1980s related to this model generally deals with the Malthusian case, which, informally, refers to the fact that the population grows exponentially in time; see, e.g., [Reference Athreya and Ney3, Reference Jagers13] and the references therein for an overview of classical CMJ theory.

Far fewer results exist for CMJ branching processes when a Malthusian parameter does not exist. One family of CMJ branching processes for which this is the case is the family of explosive CMJ branching processes. Here, the total progeny of the branching process grows infinitely large in finite time with positive probability. Early work on such CMJ branching processes by Sevast’yanov [Reference Sevast’yanov19, Reference Sevast’yanov20], Grey [Reference Grey6], Grishechkin [Reference Grishechkin7], and Vatutin [Reference Vatutin21] concerns necessary and sufficient conditions for (non-)explosion of Bellman–Harris processes (a special type of CMJ branching process). More recent work on this topic distinguishes between processes where individuals produce finitely many offspring in finite time almost surely and processes where individuals can produce infinite offspring themselves in finite time. In the former case, Komjáthy [Reference Komjáthy14] provides general criteria for the explosion of CMJ branching processes in terms of solutions of a functional fixed point equation, and also extends the necessary and sufficient criteria for explosion in branching random walks in [Reference Amini, Devroye, Griffiths and Olver1]. In the latter case, Oliveira and Spencer [Reference Oliveira and Spencer16], and, more recently, Iyer [Reference Iyer10] study a particular family of explosive CMJ branching processes with exponential inter-birth times, and Sagitov and Lindo [Reference Sagitov17, Reference Sagitov and Lindo18] consider Bellman–Harris processes where individuals, on death, may produce infinite offspring.

In a recent work, Iyer and the author [Reference Iyer and Lodewijks11] investigate a family of explosive CMJ branching processes in a random environment. Here, each individual born in the branching process is assigned an independent identically distributed (i.i.d.) random weight, and the distribution of the point process that governs the offspring of the individual depends on its weight. Sufficient conditions for the emergence of infinite stars (individuals that produce infinite offspring) and infinite paths (an infinite lineage of descendants) in the branching process, stopped at the (finite) time of explosion, are formulated in a general set-up. Other sufficient criteria for explosion for CMJ processes in a random environment are also studied in [Reference Iyer9].

1.1. Overview of our contribution

In this paper, we study the emergence of infinite stars in explosive CMJ processes in a random environment. In particular, we weaken certain assumptions provided in previous work of Iyer and the author [Reference Iyer and Lodewijks11], under which we prove that an infinite star appears almost surely.

We use these results in an application to super-linear preferential attachment trees with fitness. This model consists of a sequence of discrete-time trees in which vertices are assigned vertex-weights and arrive one by one. A new vertex connects to a random vertex in the tree, selected with a preference for vertices with high degree or vertex-weight, or both. We largely extend the class of models for which such results can be proved. This significantly improves on the results in [Reference Iyer and Lodewijks11], and also adds to the recent work of Iyer [Reference Iyer10] on persistence in (super-linear) preferential attachment trees (without fitness).

1.2. Model definition

We consider individuals in the process as being labelled by elements of the infinite Ulam–Harris tree $\mathcal{U}_\infty \;:\!=\; \bigcup_{n \geq 0} \mathbb{N}^{n}$, where $\mathbb{N}^{0} \;:\!=\; \{\varnothing\}$ contains a single element, which we call the root. We denote elements $u \in \mathcal{U}_{\infty}$ as tuples, so that if $u = (u_{1}, \ldots, u_{k}) \in \mathbb{N}^{k}$, $k \geq 1$, we write $u = u_{1} \cdots u_{k}$. An individual $u = u_1u_2\cdots u_k$ is to be interpreted recursively as the $u_k$th child of the individual $u_1 \cdots u_{k-1}$; for example, $1, 2, \ldots$ represent the offspring of $\varnothing$. Further, for individuals $u=(u_1,\ldots, u_k)$ and $v=(v_1,\ldots, v_\ell)$, we let $uv=(u_1,\ldots, u_k,v_1,\ldots, v_\ell)$ denote their concatenation. Suppose that $(\Omega, \Sigma, \mathrm{P})$ is a complete probability space and $(S, \mathcal{S})$ is a measure space. We also equip $\mathcal{U}_{\infty}$ with the sigma algebra generated by singleton sets. Then we fix random mappings $X\;:\; \Omega \times \mathcal{U}_{\infty} \rightarrow [0, \infty]$, $W\;:\; \Omega \times \mathcal{U}_{\infty} \rightarrow S$, and define $(X, W)\;:\; \Omega \times \mathcal{U}_{\infty} \rightarrow [0, \infty] \times S$, so that $(\omega,u) \mapsto ((X(u))(\omega), W_{u}(\omega))$. In general, for $u \in \mathcal{U}_{\infty}$ and $j\in \mathbb{N}$, we interpret $W_{u}$ as a ‘weight’ associated with $u$, and $X(uj)$ as the waiting time, or inter-birth time, between the births of the $(j-1)$th and $j$th child of $u$.

We use the values of X to associate birth times $\mathcal{B}(u)$ to individuals $u \in \mathcal{U}_{\infty}$. In particular, we define $\mathcal{B}\;:\; \Omega \times \mathcal{U}_{\infty} \rightarrow [0, \infty]$ recursively, as

\begin{equation*}\mathcal{B}(\varnothing) \;:\!=\; 0 \quad \text{and} \quad \mathcal{B}(uj) \;:\!=\; \mathcal{B}(u) + \sum_{i=1}^{j} X(ui), \qquad u \in \mathcal{U}_{\infty},\ j \in \mathbb{N}.\end{equation*}

We note that a value of $X(ui) = \infty$ indicates that the individual u has stopped producing offspring, and produces at most $i-1$ children.
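As a concrete illustration (not part of the formal setup), the birth-time recursion $\mathcal{B}(\varnothing)=0$ and $\mathcal{B}(uj)=\mathcal{B}(u)+\sum_{i=1}^{j}X(ui)$ can be sketched in a few lines of Python, with a small hypothetical table of inter-birth times standing in for the random mapping X:

```python
import math

# Hypothetical finite table of inter-birth times X(uj); keys are Ulam-Harris
# labels as tuples, so X[u + (j,)] is the waiting time between the (j-1)th
# and jth child of u. A value of math.inf means u stops producing offspring.
X = {
    (1,): 0.5, (2,): 0.3, (3,): math.inf,   # children of the root
    (1, 1): 0.2, (1, 2): math.inf,          # children of individual 1
}

def birth_time(u):
    """B(root) = 0; B(uj) = B(u) + sum_{i=1}^{j} X(ui), per the recursion."""
    if u == ():                              # the root is born at time 0
        return 0.0
    parent, j = u[:-1], u[-1]
    return birth_time(parent) + sum(X.get(parent + (i,), math.inf)
                                    for i in range(1, j + 1))

# B(1) = 0.5, B(2) = 0.5 + 0.3, B(11) = 0.5 + 0.2
print(birth_time((1,)), birth_time((2,)), birth_time((1, 1)))
```

Here `X[(3,)] = math.inf` encodes that the root produces at most two children, matching the convention above.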

We introduce some notation related to elements $u \in \mathcal{U}_{\infty}$ . We use $|\cdot|$ to measure the length of a tuple u, so that $|u|=0$ when $u=\varnothing$ and $|u|=k$ when $u = u_{1} \cdots u_{k}$ . If, for some $x \in \mathcal{U}_{\infty}$ , we have $x = u v$ , we say u is an ancestor of x. Further, given $\ell \leq |u|$ , we write $u_{|_\ell} \;:\!=\; u_{1} \cdots u_{\ell}$ for the different ancestors of u. We equip $\mathcal{U}_{\infty}$ with the lexicographic total order $\leq_{L}$ . Given elements u, v, we say $u \leq_{L} v$ if either u is an ancestor of v, or $u_{\ell} < v_{\ell}$ where $\ell = \min \left\{i \in \mathbb{N}\;:\; u_{i} \neq v_{i} \right\}$ . For $u \in \mathcal{U}_{\infty}$ , we let $\mathcal{P}_{i}(u)$ denote the time, after the birth of u, required for u to produce i offspring. That is,

(1.1) \begin{equation}\mathcal{P}_{i}(u) \;:\!=\; \sum_{j=1}^{i} X(uj) \quad \text{and} \quad \mathcal{P}(u) \;:\!=\; \mathcal{P}_{\infty}(u) = \sum_{j=1}^{\infty} X(uj).\end{equation}

We use the notation $\mathcal{P}_{i}$ and $\mathcal{P}$ to denote i.i.d. copies of $\mathcal{P}_{i}(u)$ and $\mathcal{P}(u)$ , respectively.
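The bookkeeping above translates directly into code. The following sketch (illustrative only; the table X is hypothetical) represents labels as tuples, checks ancestry, and computes the partial sums $\mathcal{P}_i(u)$ of (1.1); conveniently, Python's built-in tuple comparison coincides with the lexicographic order $\leq_L$, since an ancestor compares smaller than its descendants:

```python
# The root is the empty tuple; u + (j,) is the j-th child of u.
root = ()
u = (1, 3, 2)                      # the 2nd child of the 3rd child of child 1

def concat(u, v):                  # the concatenation uv
    return u + v

def is_ancestor(u, x):             # u is an ancestor of x iff x = uv
    return x[:len(u)] == u

def partial_sum(X, u, i):          # P_i(u) = sum_{j=1}^{i} X(uj), cf. (1.1)
    return sum(X[u + (j,)] for j in range(1, i + 1))

X = {(1,): 0.5, (2,): 0.3, (1, 1): 0.2, (1, 2): 0.4}
print(is_ancestor((1, 3), u))      # True: (1, 3, 2) = (1, 3) + (2,)
print((1, 3) <= u, u <= (2,))      # True True: <=_L via tuple comparison
print(partial_sum(X, (1,), 2))     # P_2((1,)) = X(11) + X(12)
```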

For $t\geq 0$ , we set $\mathscr{T}_{t} \;:\!=\; \{x \in \mathcal{U}_\infty\;:\; \mathcal{B}(x) \leq t\}$ and identify $\mathscr{T}_{t}$ as the genealogical tree of individuals with birth time at most t. For a given choice of X, W, we say $(\mathscr{T}_{t})_{t \geq 0}$ is the genealogical tree process associated with an (X, W)-CMJ branching process; often, we refer to $(\mathscr{T}_{t})_{t \geq 0}$ directly as an (X, W)-CMJ branching process, viewed as a stochastic process in t.

With regards to the process $(\mathscr{T}_{t})_{t \geq 0}$ , we define the stopping times $(\tau_{k})_{k \in \mathbb{N}_{0}}$ such that

\begin{equation*}\tau_{k} \;:\!=\; \inf\{t \geq 0\;:\; |\mathscr{T}_{t}| \geq k\},\end{equation*}

where we adopt the convention that the infimum of the empty set is $\infty$ . One readily verifies that $(|\mathscr{T}_{t}|)_{t \geq 0}$ is right-continuous, and thus $|\mathscr{T}_{\tau_{k}}| \geq k$ (when $\tau_k<\infty$ ). For each $k \in \mathbb{N}$ , we define the tree $\mathcal{T}_{k}$ as the tree consisting of the first k individuals in $\mathscr{T}_{\tau_{k}}$ ordered by birth time, breaking ties lexicographically. We call

\begin{equation*}\tau_{\infty} \;:\!=\; \lim_{k \to \infty} \tau_{k}\end{equation*}

the explosion time of the process, at which the branching process has grown infinitely large (given the process survives). We also define the tree $\mathcal{T}_{\infty} \;:\!=\; \bigcup_{k=1}^{\infty} \mathcal{T}_{k}$ .
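To see explosion numerically, the following sketch simulates a simple (X, W)-CMJ process without weights, assuming exponential inter-birth times with the super-linear rates $f(i)=(i+1)^2$ (an illustrative choice; exponential clocks are only formally introduced in Section 2.2). A priority queue tracks the next birth event of every individual, and the birth times $\tau_k$ visibly cluster below a finite limit $\tau_\infty$:

```python
import heapq
import random

random.seed(1)

def rate(i):
    # Assumed rates f(i) = (i + 1)^2: X(u(i+1)) ~ Exp(f(i)), so each
    # individual alone produces infinitely many children in finite time.
    return (i + 1) ** 2

def simulate_taus(kmax):
    """Return [tau_1, ..., tau_kmax], the birth times of the first kmax individuals."""
    taus = [0.0]                                   # the root is born at time 0
    # heap entries: (next birth time, children produced so far) per individual
    heap = [(random.expovariate(rate(0)), 0)]
    while len(taus) < kmax:
        t, c = heapq.heappop(heap)                 # earliest birth event
        taus.append(t)                             # a new individual is born...
        heapq.heappush(heap, (t + random.expovariate(rate(0)), 0))          # ...with its own clock
        heapq.heappush(heap, (t + random.expovariate(rate(c + 1)), c + 1))  # parent's next child
    return taus

taus = simulate_taus(2000)
print(taus[10], taus[100], taus[-1])   # tau_k appears to converge: explosion
```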

Remark 1. With the more commonly used notation for CMJ branching processes, we assign a point process (denoted $\xi^{(u)}$) to each $u \in \mathcal{U}_{\infty}$, and refer to the points $\sigma^{(u)}_{1} \leq \sigma^{(u)}_{2} \leq \cdots$ associated with this point process (in the notation used here, $\mathcal{B}(u1), \mathcal{B}(u2), \ldots$). We do not use this framework here, because this requires us to be able to write the measure $\xi^{(u)} = \sum_{i=1}^{\infty} \delta_{\sigma^{(u)}_{i}}$, which requires us to impose $\sigma$-finiteness assumptions on the point process (see, for example, [Reference Last and Penrose15, Corollary 6.5]). This $\sigma$-finiteness is implied by the classical Malthusian condition but, in this general setting, we believe that it is easier to have a framework where we can directly refer to the points $\mathcal{B}(u1), \mathcal{B}(u2), \ldots$

1.3. Notation

Throughout the paper we use the following notation. We let $\mathbb{N}\;:\!=\;\{1,2,\ldots\}$ and set $\mathbb{N}_0\;:\!=\;\{0,1,\ldots\}$ and $\mathbb R_+\;:\!=\;[0,\infty)$ . For $n\in\mathbb{N}$ , we set $[n]\;:\!=\;\{1,\ldots, n\}$ . For $x\in\mathbb R$ , we let $\lceil x\rceil\;:\!=\;\inf\{n\in\mathbb Z\;:\; n\geq x\}$ and $\lfloor x\rfloor\;:\!=\;\sup\{n\in\mathbb Z\;:\; n\leq x\}$ . For sequences $(a_n)_{n\in\mathbb{N}}$ and $(b_n)_{n\in\mathbb{N}}$ , such that $b_n$ is positive for all n, we say that $a_n=o(b_n)$ , $a_n=\mathcal{O}(b_n)$ , and $a_n=\Theta(b_n)$ if $\lim_{n\to\infty} a_n/b_n=0$ , if there exists a constant $C>0$ such that $|a_n|\leq Cb_n$ for all $n\in\mathbb{N}$ , and if $a_n=\mathcal{O}(b_n)$ and $b_n=\mathcal{O}(a_n)$ , respectively. For random variables $X,(X_n)_{n\in\mathbb{N}}$ , we let $X_n\overset{\mathrm d}{\longrightarrow} X, X_n\overset{\mathrm{P}}{\longrightarrow} X$ , and $X_n\overset{\mathrm{a.s.}}{\longrightarrow} X$ denote convergence in distribution, probability, and almost sure convergence of $X_n$ to X, respectively. For a non-negative real-valued random variable X and $\lambda > 0$ , we let

\[\mathcal{M}_{\lambda}(X) \;:\!=\; \mathrm{E}[\!\exp(\lambda X)] \quad \text{and} \quad \mathcal{L}_{\lambda}(X) \;:\!=\; \mathrm{E}[\!\exp(\!-\lambda X)].\]

Finally, for real-valued random variables X, Y, we say that $X \preceq Y$ if Y stochastically dominates X.
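The transforms $\mathcal{M}_\lambda$ and $\mathcal{L}_\lambda$ can be estimated by Monte Carlo; the sketch below (illustrative, with an exponential X) checks the estimates against the closed forms $\mathcal{M}_\lambda(X)=r/(r-\lambda)$ for $\lambda<r$ and $\mathcal{L}_\lambda(X)=r/(r+\lambda)$ when $X\sim\mathrm{Exp}(r)$:

```python
import random
from math import exp

random.seed(0)

def mgf(sample, lam):
    """Monte Carlo estimate of M_lambda(X) = E[exp(lambda * X)]."""
    return sum(exp(lam * x) for x in sample) / len(sample)

def laplace(sample, lam):
    """Monte Carlo estimate of L_lambda(X) = E[exp(-lambda * X)]."""
    return sum(exp(-lam * x) for x in sample) / len(sample)

r, lam = 3.0, 1.0
xs = [random.expovariate(r) for _ in range(200_000)]
print(mgf(xs, lam), r / (r - lam))       # both approximately 1.5
print(laplace(xs, lam), r / (r + lam))   # both approximately 0.75
```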

1.4. Structure of the paper

We present the main results in Section 2 and discuss applications in Sections 2.2 and 2.3. We prove the main results and applications in Section 3. Finally, Section 4 investigates a number of examples.

2. Results

Throughout this paper, we make the following general assumptions, which are common when studying (explosive) (X, W)-CMJ processes. We assume that the pairs $((X(uj))_{j \in \mathbb{N}}, W_{u})$ with $u\in\mathcal{U}_\infty$ are i.i.d. For a given $w \in S$ , we let $(X_{w}(uj))_{j \in \mathbb{N}}$ denote a sequence $(X(uj))_{j \in \mathbb{N}}$ , conditionally on the weight $W_{u} = w$ . We also assume throughout that, for any $w \in S$ and any $u\in\mathcal{U}_\infty$ , the random variables $(X_w(ui))_{i\in\mathbb{N}}$ are mutually independent.

2.1. Sufficient criteria for an infinite star in CMJ branching processes in a random environment

Our main aim is to understand how an explosive CMJ branching process explodes. We determine sufficient conditions under which the branching process explodes due to an individual giving birth to an infinite number of children in finite time (an infinite star). The assumptions are as follows.

Assumption 1.

  1. (i) There exist positive random variables $(Y_{n})_{n \in \mathbb{N}_{0}}$ with finite mean such that, for any $w \in S$ and all $n\in\mathbb{N}_0$ ,

    (2.1) \begin{equation}\sum_{i=n+1}^{\infty} X_{w}(i) \preceq Y_{n}.\end{equation}
  2. (ii) There exists an increasing sequence $(\lambda_n)_{n\in\mathbb{N}}\subset(0,\infty)$ such that $\lim_{n\to\infty}\lambda_n=\infty$ and

    \begin{equation*}\sum_{n=1}^\infty \mathcal{M}_{\lambda_n}(Y_n) \mathcal{L}_{\lambda_n}(\mathcal{P}_n(\varnothing))<\infty.\end{equation*}
  3. (iii) We have $ \mathrm{E}[\sup\{k\;:\; X(1)=\cdots=X(k) = 0\}] < 1$ . In addition, for each $n \in \mathbb{N}$ , almost surely,

    \begin{equation*}\sum_{i=n+1}^{\infty} X(i) > 0.\end{equation*}

Remark 2. Note that under Condition (i) of Assumption 1, we have $\mathrm{P}\left( \tau_\infty<\infty\right) =1$ , since, for example, the root $\varnothing$ produces an infinite number of children in finite time, almost surely.

Remark 3. In Assumption 1, we can consider Condition (i) as a uniform explosivity condition: regardless of its weight, the distribution of the time until an individual produces infinite offspring, after having already produced n children, is dominated by $Y_n$ . This assumption is essential to work around dependencies that arise because of the random environment (i.e. the weights of individuals).

Condition (ii) is used in the proof of Proposition 1, and provides an upper bound for the expected number of children that produce infinite offspring before their parent does.

Condition (iii) is a technical assumption, necessary to rule out certain trivial cases. Indeed, if, for example, $\mathrm{E}\left[\sup\{k\;:\;X(k)=0\}\right] > 1$ , the tree consisting of all the individuals born instantaneously at time 0 is a supercritical Bienaymé–Galton–Watson branching process. Hence, with positive probability, this tree is infinitely large, whilst it may not contain an individual with infinite offspring.

Remark 4. Condition (ii) weakens and combines two conditions in Assumption 2.2 in [Reference Iyer and Lodewijks11]. There, it was assumed, with $\lambda_n\;:\!=\;c\mathrm{E}\left[Y_n\right]^{-1}$ and $c<1$, that $\lambda_n$ increases and diverges, and that

\begin{equation*}\limsup_{n\to\infty}\mathcal{M}_{\lambda_n}(Y_n)<\infty \quad\text{and}\quad \sum_{n=1}^\infty \mathcal{L}_{\lambda_n}\left (\mathcal{P}_n(\varnothing) \right )<\infty.\end{equation*}

It is clear that these assumptions imply Condition (ii).

We then have our main result.

Theorem 1 (Infinite star). Under Assumption 1, the infinite tree $\mathcal{T}_{\infty}$ almost surely contains a node of infinite degree (an infinite star).

2.2. Application

We apply the result of Theorem 1 when the inter-birth times are exponentially distributed. In particular, this allows us to relate the results for (X, W)-CMJ branching processes to a family of recursively grown discrete trees, known as super-linear preferential attachment trees with fitness, introduced in [Reference Iyer and Lodewijks11]. This is a sequence of trees where vertices are introduced one by one and are assigned vertex-weights (fitness values). When a new vertex is introduced, it connects to one of the vertices already in the tree, where vertices with high degree or high fitness are more likely to make connections with new vertices.

We generally consider trees as being rooted, with edges directed away from the root; hence, the number of ‘children’ of a node corresponds to its out-degree. More precisely, given a vertex labelled v in a directed tree T, we let $\mathrm{deg}^+(v, T)$ denote its out-degree in T. We now define the preferential attachment with fitness model.

Definition 1. Suppose that $(W_{i})_{i \in \mathbb{N}}$ are i.i.d. copies of a random variable W that takes values in S, and let $f\colon\mathbb{N}_{0}\times S \rightarrow (0, \infty)$ denote the fitness function. A preferential attachment tree with fitness is the sequence of random trees $(T_{i})_{i \in \mathbb{N}}$ such that $T_{1}$ consists of a single node 1 with weight $W_{1}$ ; for $n \geq 2$ , $T_{n}$ is constructed, conditionally on $T_{n-1}$ , as follows.

  1. (i) Sample a vertex $j \in T_{n-1}$ with probability

    \begin{equation*}\frac{f(\mathrm{deg}^+(j, T_{n-1}), W_j)}{\sum_{i=1}^{n-1} f(\mathrm{deg}^+(i,T_{n-1}), W_i)}.\end{equation*}
  2. (ii) Connect j with an edge directed outwards to a new vertex n with weight $W_{n}$ .

The correspondence between preferential attachment trees with fitness and the (X, W)-CMJ process with exponential inter-birth times is as follows. For each $u\in\mathcal{U}_\infty$ , set $X(ui)\sim \mathrm{Exp}\left( f(i-1,W_u) \right)$ for $i\in\mathbb{N}$ . The trees $(\mathcal{T}_{i})_{i \in \mathbb{N}}$ associated with the (X, W)-CMJ process then satisfy

\begin{equation*}\{T_i\;:\;i\in\mathbb{N}\}\overset d=\{\mathcal{T}_i\;:\; i\in\mathbb{N}\}.\end{equation*}

In addition, with $T_\infty\;:\!=\;\cup_{n=1}^\infty T_n$ , we also have $T_\infty \overset d=\mathcal{T}_\infty$ . As a result, the structural properties of $\mathcal{T}_\infty$ can be translated to $T_\infty$ . The equality in distribution is a consequence of the memory-less property and the fact that the minimum of exponential random variables is also exponentially distributed, with a rate given by the sum of the rates of the corresponding variables; see, for example, [Reference Iyer8, Section 2.1]. The use of continuous-time embeddings of combinatorial processes was pioneered by Athreya and Karlin [Reference Athreya and Karlin2].
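Definition 1 lends itself to a direct simulation. The sketch below grows $T_n$ step by step, assuming the multiplicative fitness $f(i,w)=(w+1)(i+1)^2$ and uniform vertex-weights purely for illustration (neither choice is prescribed by the paper); the super-linear dependence on the degree quickly concentrates almost all edges on a single hub:

```python
import random

random.seed(7)

def f(deg, w):
    # assumed multiplicative fitness f(i, w) = (w + 1) * (i + 1)^2
    return (w + 1) * (deg + 1) ** 2

def grow_tree(n):
    """Grow T_n per Definition 1; returns (parent, out_deg), 1-indexed."""
    weights = [random.random() for _ in range(n + 1)]   # W_1, ..., W_n ~ U(0,1)
    out_deg = [0] * (n + 1)
    parent = [0] * (n + 1)
    for v in range(2, n + 1):
        # sample vertex j in T_{v-1} with probability proportional to its fitness
        fits = [f(out_deg[j], weights[j]) for j in range(1, v)]
        u = random.uniform(0.0, sum(fits))
        j, acc = 1, fits[0]
        while acc < u:
            j += 1
            acc += fits[j - 1]
        parent[v] = j                 # connect j to the new vertex v
        out_deg[j] += 1
    return parent, out_deg

parent, out_deg = grow_tree(500)
print(max(out_deg))   # super-linear attachment concentrates the degree
```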

We require the following assumptions on the fitness function f:

(w*) \begin{equation}\exists w^*\in S\;:\; \forall w\in S, j\in \mathbb{N}\;:\; f(j,w)\geq f(j,w^*) \quad \text{and} \quad \sum_{j=0}^{\infty} \frac{1}{f(j, w^{*})} < \infty.\end{equation}

That is, there exists $w^*\in S$ that minimizes $f(j,\cdot)$ uniformly in $j\in \mathbb{N}$, and the reciprocals of $f(j,w^*)$ are summable. Whilst the latter condition is necessary to have an (X, W)-CMJ branching process where individuals can produce an infinite number of children in finite time (i.e. where an infinite star can appear), the former assumption serves only to avoid certain technicalities. We also define the quantities

(2.2) \begin{equation} \mu_n^w\;:\!=\; \sum_{i=n}^\infty \frac{1}{f(i,w)}, \quad w\in S,\ n\in \mathbb{N}, \quad \text{and set } \mu_n\;:\!=\;\mu_n^{w^*}.\end{equation}

Define $\mu_x$ for $x\in\mathbb R_+\setminus\mathbb{N}_0$ by linear interpolation. We then have the following corollary.

Corollary 1 (Infinite star in super-linear preferential attachment trees). Let $(T_{i})_{i \in \mathbb{N}}$ be a preferential attachment tree with fitness function f that satisfies Assumption (w*). Let $(\lambda_n)_{n\in\mathbb{N}}\subset [0,\infty)$ be increasing and tend to infinity with n, such that there exists $N\in\mathbb{N}$ with $\lambda_n< f(i,w^*)$ for all $i\geq n\geq N$. If

(2.3) \begin{equation} \sum_{n=N}^{\infty} \bigg(\prod_{i=n}^\infty \frac{f(i,w^*)}{f(i,w^*)-\lambda_n}\bigg)\mathrm{E}\bigg[\prod_{i=0}^{n-1}\frac{f(i,W)}{f(i,W) + \lambda_n}\bigg] < \infty,\end{equation}

then the tree $T_{\infty}$ contains a unique node of infinite degree and no infinite path, almost surely.

Remark 5. Though we present a corollary for the specific choice of exponentially distributed inter-birth times, similar results can be proved to hold when considering other distributions for the inter-birth times (e.g. beta, gamma, Rayleigh distributions).

The corollary follows by showing that the conditions in Assumption 1 are met. It is readily verified that Condition (iii) is satisfied, as the exponential distribution does not have an atom at zero. Setting

(2.4) \begin{equation} Y_n\;:\!=\;\sum_{i=n+1}^\infty \widetilde X_{w^*}(i),\end{equation}

where $\widetilde X_{w^*}(i)$ is an independent copy of $X_{w^*}(i)$, and applying Assumption (w*), Condition (i) follows. Finally, Condition (ii) is equivalent to (2.3) when the inter-birth times are exponentially distributed and $Y_n$ is as in (2.4). The uniqueness of the node of infinite degree and absence of an infinite path follows from [Reference Iyer and Lodewijks11, Theorem 2.12] and is true, in general, for inter-birth time distributions that have no atoms.
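For a concrete instance, the summability condition (2.3) can be probed numerically. The sketch below assumes $f(i,w)=(w+1)(i+1)^2$ with a deterministic weight $W=w^*=0$ standing in for the expectation, takes $\lambda_n=\delta\mu_n^{-1}\log n$ (the choice used later in the text), and truncates the infinite sums and products; the computed terms decay quickly, consistent with (2.3) holding:

```python
from math import exp, log

delta = 0.1            # small constant; lambda_n = delta * log(n) / mu_n

def s(i):
    # f(i, w*) under the illustrative choice f(i, w) = (w + 1)(i + 1)^2, w* = 0
    return (i + 1) ** 2

def mu(n, tail=20_000):
    # truncated version of mu_n = sum_{i >= n} 1 / f(i, w*), cf. (2.2)
    return sum(1.0 / s(i) for i in range(n, n + tail))

def term(n):
    """n-th summand of (2.3), with the infinite product truncated and the
    expectation replaced by the deterministic weight W = w*."""
    lam = delta * log(n) / mu(n)                 # satisfies lam < s(i), i >= n
    log_first = -sum(log(1.0 - lam / s(i)) for i in range(n, n + 20_000))
    log_second = sum(log(s(i) / (s(i) + lam)) for i in range(n))
    return exp(log_first + log_second)

terms = [term(n) for n in range(2, 40)]
print(terms[0], terms[-1], sum(terms))   # terms decay; the series appears summable
```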

We juxtapose Corollary 1 with the following result from [Reference Iyer and Lodewijks11].

Theorem 2 ([Reference Iyer and Lodewijks11, Theorem 3.4]). Recall $\mu_n^w$ from (2.2). If, for some $c>1$ and all $w\geq 0$, we have

\begin{equation*}\sum_{n=1}^\infty \mathrm{E}\bigg[\prod_{i=0}^\infty \frac{f(i,W)}{f(i,W)+c(\mu_n^w)^{-1}\log n}\bigg]=\infty,\end{equation*}

then $T_\infty$ contains a unique infinite path and no node of infinite degree, almost surely.

In the following subsection, we provide a range of examples for which the conditions in Corollary 1 can be satisfied, using $\lambda_n=\delta \mu_n^{-1} \log n$, where $\delta>0$ is a sufficiently small constant and $\mu_n$ is as in (2.2). For this choice of $\lambda_n$, Corollary 1 and Theorem 2 are close to being converse results, as the examples below confirm.

2.2.1. Super-linear preferential attachment with fitness.

We proceed by studying a family of preferential attachment trees (with fitness) for which we can verify the conditions provided in Corollary 1. In doing so, we extend the class of models for which it is known that $T_\infty$ contains a unique node of infinite degree from a handful of examples to a larger, but more particular, class. This class consists of a family of multiplicative fitness functions. That is, vertex-weights take values in $S=\mathbb R_+$, and $f(i,w)=g(w)s(i)$ for some functions $g\colon \mathbb R_+\to (0,\infty)$ and $s\colon \mathbb{N}_0\to (0,\infty)$. To satisfy Assumption (w*), we assume that

(2.5) \begin{equation} \exists w^*\in \mathbb R_+\ \forall w\in \mathbb R_+\;:\; g(w)\geq g(w^*)>0\quad \text{and}\quad \ \sum_{i=0}^\infty \frac{1}{s(i)}<\infty.\end{equation}

Our main interest is in functions s that grow barely faster than linear, for which the summability condition in (2.5) is only just satisfied; for example, $s(n)=n\log(n+2)(\log\log(n+3))^\sigma$ with $\sigma>1$. Earlier work by Iyer and the author in [Reference Iyer and Lodewijks11] is only able to deal with cases for which s(n) grows fast enough, that is, faster than $n(\!\log n)^\alpha$ with $\alpha>2$. Here, we extend this to a much wider range of functions that grow more slowly.

We introduce the following assumption on the function s.

Assumption 2. Suppose s satisfies (2.5). Moreover, suppose there exist $\beta\in(0,1)$ and $p\in(1,1+\beta)$ such that

\begin{equation*}\lim_{n\to\infty}\frac{s(n)}{n^\beta}=\infty\quad \text{and}\quad \lim_{n\to\infty}\frac{s(n)}{n^p}=0,\end{equation*}

and there exist $C>0$ and $N\in\mathbb{N}$ such that, for all $n\geq N$ ,

\begin{equation*}s(n)\leq C\frac{n^{1+\beta-p}}{\log n}\inf_{i\geq n}s(i).\end{equation*}

Remark 6. Though the first limit in Assumption 2 may seem unnecessary when s satisfies (2.5), one can construct examples of s with a sufficiently sparse summable subsequence that grows slowly. For example, with $\epsilon>0$ small,

\begin{equation*}s(n)=\begin{cases}n^{(1+\epsilon)/2} &\text{if } \sqrt n\in\mathbb{N}, \\(n+2)(\!\log (n+2))^2 &\text{otherwise.}\end{cases}\end{equation*}

Though this choice of s satisfies (2.5), the subsequence $s(n_k)$ with $n_k=k^2$ grows relatively slowly compared with the values s(n) when n is not a perfect square, so that the first limit in Assumption 2 is satisfied only for $\beta<\frac{1+\epsilon}{2}$ . The inequality in Assumption 2 quantifies how much slower such sparse slowly growing subsequences are allowed to grow. Overall, Assumption 2 thus allows us to deal with some choices of s with sparse slowly growing subsequences.
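A quick computation confirms the behaviour described in Remark 6: $1/s$ is summable, yet along the perfect squares the ratio $s(n)/n^\beta$ equals 1 for $\beta=(1+\epsilon)/2$ rather than diverging, so the first limit in Assumption 2 indeed fails for this $\beta$ (here with the illustrative value $\epsilon=0.2$):

```python
from math import log, sqrt

eps = 0.2

def s(n):
    # Remark 6's example: slow growth n^((1+eps)/2) on perfect squares,
    # (n + 2) * log(n + 2)^2 otherwise
    r = int(sqrt(n))
    if r * r == n:
        return n ** ((1 + eps) / 2)
    return (n + 2) * log(n + 2) ** 2

# 1/s is summable: the truncated partial sum is already close to its limit
partial = sum(1.0 / s(n) for n in range(1, 10**5))

# but along n = k^2 the ratio s(n)/n^beta is exactly 1 at beta = (1+eps)/2
beta = (1 + eps) / 2
squares = [s(k * k) / (k * k) ** beta for k in range(10, 100)]
print(partial, min(squares), max(squares))
```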

We then have the following result.

Theorem 3 (Barely super-linear preferential attachment with fitness). Suppose that $f(i,w)=g(w)s(i)$ , where s satisfies Assumption 2 and g and s satisfy (2.5). Suppose there exists a sequence $(k_n)_{n\in\mathbb{N}}$ and constants $\epsilon>0, \delta\in(0,\epsilon)$ , and $n_0\in\mathbb{N}$ , such that both

\begin{equation*}\mathrm{P}\left(g(W)>k_n\right)\leq n^{-(1+\epsilon)}, \quad\textit{for all }n\geq n_0,\end{equation*}

and

(2.6) \begin{equation} \lim_{n\to\infty} \frac{\mu_{(\delta\log(n)/(\mu_n k_n))^{1/\beta}}}{\mu_nk_n}=\infty\end{equation}

are satisfied, where $\beta$ is as in Assumption 2. Then, the limiting infinite tree $T_\infty$ contains a unique infinite star and no infinite path, almost surely.

Remark 7. The conditions in Assumption 2 allow us to construct a sequence $(\lambda_n)_{n\in\mathbb{N}}$ such that $\lambda_n<f(i,w)=g(w)s(i)$ for all $i\geq n$ and all n large, whilst (2.6) ensures that the summability condition in (2.3) is met.

2.3. Examples

To conclude the section, we present a (non-exhaustive) list of examples of functions g and s and vertex-weight distributions such that we can verify the conditions of Theorem 3. Without loss of generality, we can take $g(w)=w+1$, as one can change the vertex-weight distribution accordingly for other choices of g. For each example, we make the following assumptions on the function s and the tail distribution of the vertex-weights. With $\sigma>1$, $\kappa>0$, $\nu\in(0,1)$, $\gamma>1$, and $\alpha\in(1/2,1]$,

\begin{equation*}\begin{aligned}&(i) &&s(i)=(i+1) (\!\log(i+2) )^\sigma, \quad &&\mathrm{P}(W\geq x)=\Theta\big( \mathrm{e}^{-x^\kappa}\big),\\&(ii) && s(i)=(i+1)\log(i+2)\exp\!\big( (\!\log\log(i+3) )^\nu \big), \quad && \mathrm{P}(W\geq x)=\Theta \big(\!\exp\!\big({-}\mathrm{e}^{(\!\log x)^\gamma}\big)\big),\\&(iii) && s(i)=(i+1)\log(i+2) (\!\log\log(i+3) )^\sigma, \quad &&\mathrm{P}(W\geq x)=\Theta\big(\!\exp\!\big({-}\mathrm{e}^{x^\kappa}\big)\big),\\&(iv) && s(i)=\begin{cases}i^\alpha &\text{if } \sqrt{i}\in\mathbb{N},\\(i+1) \left (\log (i+2) \right )^\sigma &\text{otherwise,}\end{cases} \quad && \mathrm{P}(W\geq x)=\Theta \big(\mathrm{e}^{-x^\kappa}\big).\end{aligned}\end{equation*}

We then have the following result.

Theorem 4. Consider the four examples (i)–(iv). The infinite tree $T_\infty$ almost surely contains a unique infinite star and no infinite path when

\begin{equation*}(i)\ \ (\sigma-1)\kappa>1;\quad (ii)\ \ \nu\gamma>1;\quad (iii)\ \ (\sigma-1)\kappa>1; \quad (iv)\ \ (\sigma-1)\kappa>1.\end{equation*}

When these inequalities are reversed (i.e. when changing the $>$ to a $<)$ , almost surely, $T_\infty$ contains a unique infinite path and no infinite star.

Remark 8. For case (i), the condition for the existence of an infinite path was already known [Reference Iyer and Lodewijks11]. A sufficient condition for the existence of an infinite star was $(\sigma-1)\kappa>1+\tfrac1\kappa$, which we sharpen to $(\sigma-1)\kappa>1$ here to close the gap in the phase transition. For the other cases, the conditions for the existence of both an infinite star and an infinite path are novel. For the existence of an infinite path, we verify conditions from [Reference Iyer and Lodewijks11].

2.3.1. Super-linear preferential attachment without fitness.

To conclude, we consider the case $g\equiv 1$ , so that $f(i,w)=s(i)$ . That is, we consider a model where the evolution of the tree does not depend on the vertex-weights. Here, recent work of Iyer [Reference Iyer10] shows that the limiting tree $T_\infty$ almost surely contains a unique infinite star when

(2.7) \begin{equation} \sum_{i=0}^\infty \frac{1}{s(i)}<\infty\quad\text{and}\quad \exists \kappa >0\ \forall n\in\mathbb{N}_0\;:\; \max_{i\leq n}\frac{s(i)}{i+1}\leq \kappa \frac{s(n)}{n+1}.\end{equation}

In the following result, we allow for functions s that do not meet the second condition, thus extending the family of models for which we know that a unique infinite star arises.

Corollary 2 (Barely super-linear preferential attachment). Let s satisfy (2.5) and Assumption 2, and suppose that

(2.8) \begin{equation} \lim_{n\to\infty}\frac{\mu_{ (\!\log(n)\mu_n^{-1})^{1/\beta}}}{\mu_n}=\infty,\end{equation}

where $\beta$ is as in Assumption 2. Then the limiting infinite tree $T_\infty$ contains a unique vertex with infinite degree and no infinite path, almost surely.

Remark 9. The condition in (2.8) implies the result by applying Theorem 3 to $g\equiv 1$ .

Though not entirely general, the conditions in Corollary 2 encompass a large class of functions s. The four cases in Theorem 4 (ignoring the condition on the vertex-weights) satisfy the conditions, for example. In fact, we believe that the condition in (2.8) is satisfied by any function s that satisfies Assumption 2. As we were unable to prove this, we included it as a condition in the Corollary (as well as in Theorem 3).

Corollary 2 provides, to some extent, more general conditions under which $T_\infty$ contains a unique infinite star, compared with (2.7). Indeed, we can construct examples such that the second condition in (2.7) is not satisfied but for which the weaker conditions in Corollary 2 are satisfied (see Case (iv) in Section 4). At the same time, the conditions in (2.7) can deal with a range of functions s that are not within the range of Corollary 2.
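To make this comparison concrete, the sketch below evaluates, for case (iv) with the illustrative parameters $\alpha=0.75$ and $\sigma=2$, the smallest constant $\kappa$ that the second condition in (2.7) would require at a given n; along perfect squares this quantity grows without bound, so (2.7) fails while Corollary 2 still applies:

```python
from math import log, sqrt

alpha, sigma = 0.75, 2.0

def s(i):
    # case (iv): i^alpha on perfect squares, (i+1) * log(i+2)^sigma otherwise
    r = int(sqrt(i))
    if i > 0 and r * r == i:
        return i ** alpha
    return (i + 1) * log(i + 2) ** sigma

def kappa_needed(n):
    """Smallest kappa with max_{i<=n} s(i)/(i+1) <= kappa * s(n)/(n+1),
    i.e. the constant the second condition in (2.7) would require at n."""
    best = max(s(i) / (i + 1) for i in range(n + 1))
    return best / (s(n) / (n + 1))

# along perfect squares n = m^2, the required kappa blows up
for m in (10, 30, 100):
    print(m * m, kappa_needed(m * m))
```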

3. Proofs of Main Results

This section is dedicated to proving the results presented in Section 2. In Section 3.1, we focus on the main result, Theorem 1. Section 3.2 is dedicated to proving Theorem 3.

3.1. Proof of Theorem 1

We introduce the following terminology, used in the remainder of the section. Recall $\mathcal{P}_k$ and $\mathcal{P}$ from (1.1). For $a, b \in \mathcal{U}_{\infty}$ , we say,

We also say,

Finally, for $a \in \mathcal{U}_{\infty}$ with $|a| \geq 1$ , we say that $a= a_{1} \cdots a_{m}$ is $a_{1}$ -conservative if, for each $j \in \left\{2, \ldots, m\right\}$ , we have $a_{j} \leq a_{1}$ . We then have the following result.

Lemma 1. Under Assumption 1, there exist $\zeta < 1$ and $K = K(\zeta) > 0$ such that, for all $a_1 > K(\zeta)$ and all integers $m\in \mathbb{N}$ ,

\begin{equation*}\sum_{\substack{a:|a| = m\\ a\text{ is $a_1$-conservative}}}\!\!\!\!\!\!\!\!\!\!\!\!\! \mathrm{P}\left(a \text{ has at least $a_1$ children before } \varnothing \text{ explodes}\right) \leq \zeta^{m-1} \mathcal{M}_{\lambda_{a_1}}(Y_{a_1})\mathcal{L}_{\lambda_{a_1}}(\mathcal{P}_{a_{1}}).\end{equation*}

Proof. Suppose that $a = a_1 \cdots a_m \in \mathcal{U}_\infty$ . For a to have at least $a_1$ children before the explosion of $\varnothing$ , all the ancestors of a (including a) need to be born, and then a needs to produce $a_1$ children. All this needs to happen before $\varnothing$ produces infinitely many children, starting to count from its $(a_1+1){\mathrm{st}}$ child. That is,

(3.1) \begin{equation} \mathrm{P}\left(a \text{ has at least $a_1$ children before } \varnothing \text{ explodes}\right)= \mathrm{P} \Bigg(\!\mathcal{P}_{a_1}(a)+\!\sum_{j=2}^m\mathcal{P}_{a_j} \left (a_{|_{j-1}} \right ) \leq\!\!\!\!\sum_{k= a_{1} + 1}^{\infty}\!\!\!\! X(k)\!\Bigg).\end{equation}

Using Assumption 1(i) and a Chernoff bound, with $\lambda_{a_1}>0$ , we arrive at the upper bound:

\begin{equation*}\mathrm{P} \Bigg(\!\mathcal{P}_{a_1}(a)+\!\sum_{j=2}^m\mathcal{P}_{a_j} \left (a_{|_{j-1}} \right ) \leq Y_{a_1}\!\Bigg)\leq \mathcal{M}_{\lambda_{a_1}} \left (Y_{a_1} \right )\mathcal{L}_{\lambda_{a_1}} \left (\mathcal{P}_{a_1} \right )\prod_{j=2}^m \mathcal{L}_{\lambda_{a_1}} \left (\mathcal{P}_{a_j} \right ),\end{equation*}

where the factorisation into a product follows from the fact that the sequence $(\mathcal{P}_{j}(u))_{j\in \mathbb{N}}$ is independent and distributed like $(\mathcal{P}_{j}(\varnothing))_{j\in \mathbb{N}}$ for any $u\in\mathcal{U}_\infty$ . When we sum over all sequences a that are $a_1$ -conservative, each $a_{j}$ takes values between 1 and $a_1$ , for $j=2,\ldots, m$ . Thus,

(3.2) \begin{equation}\begin{aligned} \sum_{\substack{a:|a| = m\\ a\text{ $a_1$-conservative}}}\!\!\!\!\!\!\!\!\!\!\!{}\; \mathrm{P} & \,\!(a \text{ has at least $a_1$ children before } \varnothing \text{ explodes} )\\& \quad \quad \quad \leq{} \sum_{a_{2} = 1}^{a_{1}} \sum_{a_{3} = 1}^{a_{1}} \cdots \sum_{a_{m}=1}^{a_1} \mathcal{M}_{\lambda_{a_1}} (Y_{a_1} )\mathcal{L}_{\lambda_{a_1}} (\mathcal{P}_{a_1} )\prod_{j=2}^m \mathcal{L}_{\lambda_{a_1}} (\mathcal{P}_{a_j} )\\&\quad \quad \quad = {} \mathcal{M}_{\lambda_{a_1}}(Y_{a_1})\mathcal{L}_{\lambda_{a_1}} (\mathcal{P}_{a_1}) \left(\sum_{n = 1}^{a_{1}} \mathcal{L}_{\lambda_{a_1}} \left (\mathcal{P}_{n} \right ) \right)^{m-1}.\end{aligned}\end{equation}

It thus remains to show that, for $a_1$ sufficiently large,

\[\sum_{n=1}^{a_1}\mathcal{L}_{\lambda_{a_1}}(\mathcal{P}_n) < \zeta.\]

As $\lambda_n>0$ for all $n\in\mathbb{N}$ , we have $\mathcal{M}_{\lambda_n}(Y_n)\geq 1$ for all $n\in\mathbb{N}$ . Hence, by Assumption 1(ii),

\begin{equation*}\sum_{n=1}^\infty \mathcal{L}_{\lambda_n}(\mathcal{P}_n)<\infty\end{equation*}

also holds. As a result, for any $\eta>0$ there exists $N = N(\eta)\in\mathbb{N} $ such that, for all $a_1 > N$ ,

(3.3) \begin{equation} \sum_{n=N}^{a_1} \mathcal{L}_{\lambda_{a_1}}(\mathcal{P}_n) < \sum_{n=N}^{\infty} \mathcal{L}_{\lambda_n}(\mathcal{P}_n) < \frac{\eta}{2},\end{equation}

where the inequality uses the fact that $\lambda_n$ is increasing in n. On the other hand, since $\lambda_n$ diverges with n, bounded convergence (bounding the integrand by 1) yields

(3.4) \begin{equation}\lim_{a_1 \to \infty} \sum_{n = 1}^{N-1}\mathcal{L}_{\lambda_{a_1}}(\mathcal{P}_n) =\sum_{n=1}^{N-1} \mathrm{P}\left(\mathcal{P}_n=0\right)=\sum_{n=1}^{N-1}\mathrm{P}(X(1)=\cdots =X(n)=0).\end{equation}

We note that, by Assumption 1(iii), there exists $\xi>0$ , such that

\begin{equation*}\mathrm{E}[\sup\{k\;:\;X(1)=\cdots= X(k)=0\}]=\sum_{n=1}^\infty \mathrm{P}(X(1)=\cdots =X(n)=0) <1-\xi.\end{equation*}

Hence, the right-hand side of (3.4) is at most $1-\xi$ for any $N\in\mathbb{N}$ . Thus, we first take $\eta<\xi/2$ and N large enough that we have the upper bound in (3.3). Then we take $K\geq N$ sufficiently large, so that, for all $a_1\geq K$ ,

\begin{equation*} \sum_{n = 1}^{N-1} \mathcal{L}_{\lambda_{a_1}}(\mathcal{P}_{n}) < 1-3\xi/4.\end{equation*}

Combined, we thus arrive at

\begin{equation*}\sum_{n=1}^{a_1}\mathcal{L}_{\lambda_{a_1}}(\mathcal{P}_{n})<1-3\xi/4+\eta/2<1-\xi/2\;=\!:\;\zeta.\end{equation*}

Using this in (3.2), we conclude the proof.

This lemma provides an upper bound for the probability that an $a_1$ -conservative individual a produces at least $a_1$ children before the root of $\mathcal{U}_\infty$ explodes. When a does not satisfy this condition, we can view a as a concatenation of a number of conservative sequences. That is, we write $a=\overline b_1\cdots \overline b_\ell$ , where $\overline b_i=b_{i,1}\ldots b_{i,m_i}$ for each $i\in[\ell]$ and for some $\ell\in\mathbb{N}$ , $(m_i)_{i\in[\ell]}\in \mathbb{N}^\ell$ , and $(b_{i,j})_{i\in[\ell],j\in[m_i]}$ , such that $\overline b_i$ is $b_{i,1}$ -conservative for each $i\in[\ell]$ . By the independence of the birth processes of distinct individuals (or, in fact, the independence of disjoint subtrees), we are able to apply Lemma 1 to each conservative sequence in the concatenation to arrive at a bound for the expected number of individuals that explode before all their ancestors.
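The splitting of a sequence at its strict running maxima into conservative blocks can be sketched as follows (a hypothetical helper for intuition, not part of the paper's formalism):

```python
def conservative_blocks(a):
    """Split a = a_1 ... a_m at its strict running maxima: each block starts
    at a running maximum a_{I_j}, and the subsequent entries in the block
    never exceed it, so each block is a_{I_j}-conservative."""
    blocks = []
    cur, cur_max = [], None
    for x in a:
        if cur_max is None or x > cur_max:  # new strict running maximum
            if cur:
                blocks.append(cur)
            cur, cur_max = [x], x
        else:
            cur.append(x)
    blocks.append(cur)
    return blocks

blocks = conservative_blocks([2, 1, 2, 5, 3, 5, 7])
assert blocks == [[2, 1, 2], [5, 3, 5], [7]]
assert all(max(b) == b[0] for b in blocks)  # each block is b[0]-conservative
```

The first entries of the blocks, here 2, 5, 7, are exactly the running maxima $a_{I_1}<a_{I_2}<\ldots<a_{I_k}$ used in the proof of Proposition 1 below.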

Proposition 1. Under Assumption 1, there exists $K'>0$ sufficiently large, such that

\begin{equation*}\mathrm{E}[| \{a\in \mathcal{U}_\infty\;:\; a_1>K',\ a\text{ explodes before all its ancestors} \} |]<\infty.\end{equation*}

Proof. As explained before the proposition statement, we think of sequences $a\in\mathcal{U}_\infty$ as a concatenation of conservative sequences. Let $a=a_1\ldots a_m$ be a sequence of length $m\in\mathbb{N}$ , and assume that there exist $k\in[m]$ and indices $I_1\;:\!=\;1<I_2<\ldots<I_k$ such that $a_{I_1}<a_{I_2}<\ldots <a_{I_k}$ and $a_i\leq a_{I_j}$ for all $i\in \{I_j+1,\ldots, I_{j+1}-1\}$ and $j\in [k]$ , where we set $I_{k+1}\;:\!=\;m+1$ and $a_{0} \;:\!=\; \varnothing$ . That is, the $I_j$ are the indices of the running maxima of a. We think of a as a concatenation of the conservative sequences $a_{I_j}\cdots a_{I_{j+1}-1}$ , with $j\in[k]$ . These subsequences can be seen as corresponding to an $a_{I_j}$ -conservative individual, rooted at $a_{|_{I_j-1}}$ , for each $j\in[k]$ . We can thus apply Lemma 1 to these subsequences.

Since, by definition, we have $a_{I_{\ell +1}} > a_{I_{\ell}}$ , applying a similar logic to (3.1), we have the inclusion

\begin{equation*}\begin{aligned}E_a\;:\!=\;{}&\{a\text{ explodes}\text{ before any of its ancestors explodes}\}\\\subseteq{}&\bigcap_{\ell=1}^{k}\{a_1\cdots a_{I_{\ell+1}-1} \text{ gives birth to at least } a_{I_{\ell}}\text{ children before }a_1\cdots a_{I_{\ell}-1}\text{ explodes}\}\\ ={}& \bigcap_{\ell=1}^{k}\left \{ \mathcal{P}_{a_{I_{\ell}}} \left (a_{|_{I_{\ell+1}-1}} \right )+\sum_{j=I_{\ell}}^{I_{\ell+1}-2}\mathcal{P}_{a_{j+1}} \left (a_{|_j} \right ) \leq \sum_{i= a_{I_{\ell}} + 1}^{\infty} X \left (a_1 \cdots a_{I_{\ell}-1} i \right )\right \}\;=\!:\;\bigcap_{\ell=1}^k E_{a,\ell}.\end{aligned}\end{equation*}

Now, note that the events $(E_{a, \ell}, \ell \in [k])$ are not independent, since, for a given $\ell$ , the term $\mathcal{P}_{a_{I_{\ell}}}(a_{|_{I_{\ell+1}-1}})$ in $E_{a, \ell}$ may be correlated with the summands $ X(a_1 \cdots a_{I_{\ell+1}-1} i)$ appearing in $E_{a, \ell+1}$ (via the weight $W_{a_{I_{\ell+1}-1}}$ ). However, as we assume that, for any $u\in \mathcal{U}_\infty$ , the random variables $(X(uj))_{j\in\mathbb{N}}$ , conditionally on $W_u$ , are independent, it follows that these events are conditionally independent, given the weights of a and all its ancestors, $W_\varnothing, W_{a_1},W_{a_1a_2},\ldots, W_a$ . Thus,

\begin{equation*}\begin{aligned}\mathrm{P}{}&(E_a \, | \, W_\varnothing, W_{a_1},W_{a_1a_2},\ldots, W_a)\\ & \leq \prod_{\ell=1}^{k} \mathrm{P} \Bigg (\!\mathcal{P}_{a_{I_{\ell}}} \big(a_{|_{I_{\ell+1}-1}}\big)+\sum_{j=I_{\ell}}^{I_{\ell+1}-2}\mathcal{P}_{a_{j+1}} \big(a_{|_j}\big) \leq \!\!\!\!\sum_{i= a_{I_{\ell}} + 1}^{\infty}\!\!\!\! X \big(a_1 \cdots a_{I_{\ell}-1} i \big) \bigg | W_\varnothing, W_{a_1},W_{a_1a_2},\ldots, W_a\!\Bigg )\\ & \stackrel{(2.1)}{\leq} \prod_{\ell=1}^{k} \mathrm{P}\Bigg(\!\mathcal{P}_{a_{I_{\ell}}} \big(a_{|_{I_{\ell+1}-1}}\big)+\sum_{j=I_{\ell}}^{I_{\ell+1}-2}\mathcal{P}_{a_{j+1}} \big(a_{|_j}\big) \leq Y^{ (a_1 \cdots a_{I_{\ell}-1} )}_{a_{I_{\ell}}} \, \bigg | \, W_\varnothing, W_{a_1},W_{a_1a_2},\ldots, W_a \!\Bigg )\\ & = \prod_{\ell=1}^{k} \mathrm{P}(\widetilde E_{a, \ell} | W_\varnothing, W_{a_1},W_{a_1a_2},\ldots, W_a),\end{aligned}\end{equation*}

where each $Y^{(a_1 \cdots a_{I_{\ell}-1})}_{a_{I_{\ell}}}$ is independent, is distributed like $Y_{a_{I_{\ell}}}$ , and does not depend on the vertex-weights $W_\varnothing, W_{a_1},W_{a_1a_2},\ldots, W_a$ . Now, the events $\widetilde E_{a, \ell}$ are mutually independent, as they each depend on different weights. Hence, so are the terms appearing in the product, so that taking expectations on both sides yields

(3.5) \begin{equation} \mathrm{P}\left(E_a\right)\leq \prod_{\ell=1}^k \mathrm{P}\left(\widetilde E_{a,\ell}\right).\end{equation}

We now let $d_j\;:\!=\;I_{j+1}-I_j-1$ for $j\in[k-1]$ and $d_k\;:\!=\;m-I_k$ denote the number of entries between the running maxima of a. We can then define, for $(d_j)_{j\in[k]}\in\mathbb{N}_0^k$ (with $[0]\;:\!=\;\emptyset$ ),

\begin{multline*}\mathscr P_k({} a_{I_1},a_{I_2},\ldots, a_{I_k}, d_1, \ldots, d_k)\\\;:\!=\;\{a\in \mathcal{U}_\infty\;:\; \text{ For all }j\in\{1,\ldots, k\}\text{ and all }i\in[d_j],\ a_{I_j+i}\in[a_{I_j}]\}\end{multline*}

as the set of all sequences a with running maxima $a_1=a_{I_1},\ldots, a_{I_k}$ , and $d_j$ many entries between the $j{\text{th}}$ and $(j+1){\text{th}}$ maximum. For brevity, we omit the arguments of $\mathscr P_k$ . We then write the expected value in the proposition statement as

(3.6) \begin{equation}\begin{aligned} \sum_{\substack{ a\in \mathcal{U}_\infty \\ a_1>K'}}\mathrm{P}(E_a)=\sum_{m=1}^\infty \sum_{\substack{a\;:\; |a|=m\\ a_1>K'}}\mathrm{P}\left(E_a\right)\leq \sum_{m=1}^\infty \sum_{k=1}^m \sum_{a_{I_k}>\ldots >a_{I_1}>K'}\sum_{\substack{(d_\ell)_{\ell\in[k]}\in \mathbb{N}_0^k\\ \sum_{\ell=1}^k d_\ell=m-k}}\sum_{a\in \mathscr P_k}\prod_{\ell=1}^k\mathrm{P}(\widetilde E_{a,\ell}).\end{aligned}\end{equation}

In the first step, we introduce a sum over all sequence lengths m. In the second step, we furthermore sum over the number of running maxima k, the values of the running maxima $a_{I_1}, \ldots, a_{I_k}$ , the number of entries $d_\ell$ between the maxima $I_\ell$ and $I_{\ell+1}$ (or between $I_k$ and m if $\ell=k$ ), and all sequences $a\in\mathscr P_k$ that admit such running maxima and inter-maxima lengths. Moreover, we use (3.5) to bound $\mathrm{P}\left(E_a\right)$ from above, now that we know the number of running maxima in a.

We can now take the sum over $a\in\mathscr P_k$ into the product, because we can decompose each sequence $a\in\mathscr P_k(a_{I_1},\ldots, a_{I_k},d_1,\ldots d_k)$ into a concatenation of sequences $a^{(1)}\ldots a^{(k)}$ , with $a^{(\ell)}\;:\!=\;a_{I_\ell}\cdots a_{I_{\ell+1}-1}\in\mathscr P_1(a_{I_\ell},d_\ell)$ for each $\ell\in[k]$ . This yields

\begin{equation*}\sum_{a\in\mathscr P_k}\prod_{\ell=1}^k \mathrm{P}(\widetilde E_{a,\ell})=\prod_{\ell=1}^k\!\!\!\! \sum_{\substack{a^{(\ell)}:|a^{(\ell)}|=d_\ell+1\\ \text{$a^{(\ell)}$ $a_{I_\ell}$-conservative}}}\!\!\!\!\!\!\!\!\!\!\!\mathrm{P}(\widetilde E_{a,\ell}).\end{equation*}

We can then directly apply Lemma 1 to each of the sums in the product. Indeed, as $a_{I_\ell}\geq a_{I_1}=a_1>K'$ , we can take $K'$ large enough to obtain, for some $\zeta<1$ , the upper bound

\begin{equation*}\prod_{\ell=1}^k \!\!\!\!\sum_{\substack{a^{(\ell)}:|a^{(\ell)}|=d_\ell+1\\ \text{$a^{(\ell)}$ $a_{I_\ell}$-conservative}}}\!\!\!\!\!\!\!\!\!\!\!\mathrm{P}(\widetilde E_{a,\ell})\leq \prod_{\ell=1}^k \zeta^{d_\ell}\mathcal{M}_{\lambda_{a_{I_\ell}}} \big(Y_{a_{I_\ell}} \big) \mathcal{L}_{\lambda_{a_{I_\ell}}} \big(\mathcal{P}_{a_{I_\ell}} \big).\end{equation*}

We can take out the factors $\zeta^{d_\ell}$ and use the fact that the $d_\ell$ sum to $m-k$ . Using this in (3.6) yields

\begin{equation*}\begin{aligned}\sum_{m=1}^\infty{} \sum_{k=1}^m & \sum_{a_{I_k}>\ldots >a_{I_1}>K'}\zeta^{m-k} \sum_{\substack{(d_j)_{j\in[k]}\in \mathbb{N}_0^k\\ \sum_{\ell=1}^k d_\ell=m-k}}\prod_{\ell=1}^k \mathcal{M}_{\lambda_{a_{I_\ell}}} \big(Y_{a_{I_\ell}}\big) \mathcal{L}_{\lambda_{a_{I_\ell}}} \big(\mathcal{P}_{a_{I_\ell}}\big)\\&\leq \sum_{m=1}^\infty \sum_{k=1}^m \binom{m-1}{k-1}\zeta^{m-k}\sum_{a_{I_1}>K'}\cdots \sum_{a_{I_k}>K'}\prod_{\ell=1}^k \mathcal{M}_{\lambda_{a_{I_\ell}}} \big(Y_{a_{I_\ell}}\big) \mathcal{L}_{\lambda_{a_{I_\ell}}} \big(\mathcal{P}_{a_{I_\ell}}\big)\\&=\sum_{m=1}^\infty \sum_{k=1}^m \binom{m-1}{k-1}\zeta^{m-k}\bigg(\sum_{n> K'} \mathcal{M}_{\lambda_n}(Y_n)\mathcal{L}_{\lambda_n}(\mathcal{P}_n)\bigg)^k.\end{aligned}\end{equation*}

By Assumption 1(ii), we can bound the innermost sum from above by $\zeta/M$ for some $M>0$ when $K'$ is sufficiently large, so that we obtain the upper bound

\begin{equation*}\sum_{m=1}^\infty \sum_{k=1}^m \binom{m-1}{k-1}\zeta^mM^{-k}=\frac{\zeta}{M}\sum_{m=1}^\infty \Big(\zeta\big(1+\tfrac1M\big)\Big)^{m-1}<\infty,\end{equation*}

where the last step follows when M is large enough that $\zeta(1+1/M)<1$ , which holds by choosing $K'$ sufficiently large, as follows from (3.3).
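Two elementary facts used in the final estimates can be checked numerically: the stars-and-bars count of vectors $(d_\ell)_{\ell\in[k]}\in\mathbb{N}_0^k$ with $\sum_\ell d_\ell=m-k$ equals $\binom{m-1}{k-1}$, and the closing double sum evaluates as a geometric series. A minimal check with arbitrary sample values of $m$, $k$, $\zeta$, and $M$:

```python
from itertools import product
from math import comb, isclose

# stars and bars: compositions of m - k into k non-negative parts
m, k = 7, 3
count = sum(1 for d in product(range(m - k + 1), repeat=k) if sum(d) == m - k)
assert count == comb(m - 1, k - 1)

# the closing double sum equals (zeta/M) / (1 - zeta(1 + 1/M)) when zeta(1 + 1/M) < 1
zeta, M = 0.5, 4.0
direct = sum(comb(mm - 1, kk - 1) * zeta**mm * M**(-kk)
             for mm in range(1, 200) for kk in range(1, mm + 1))
closed = (zeta / M) / (1 - zeta * (1 + 1 / M))
assert isclose(direct, closed, rel_tol=1e-9)
```

The truncation at $m=200$ is harmless here since the summands decay geometrically at rate $\zeta(1+1/M)=0.625$.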

We then state three results, proved by Iyer and the author in [Reference Iyer and Lodewijks11]. First, we introduce, for $L\in \mathbb{N}$ ,

\[\mathcal{U}_{L} \;:\!=\; \left\{u \in \mathcal{U}_{\infty}\;:\; u_{i} \leq L \text{ for all } i \in [|u|] \right\} \cup \{\varnothing\}.\]

Following [Reference Oliveira and Spencer16], we say that elements of $\mathcal{U}_{L}$ are L-moderate. We then have the following result, which tells us that the subtree consisting of L-moderate individuals does not explode, almost surely.

Proposition 2 ([Reference Iyer and Lodewijks11].) Let $(\mathscr{T}_{t})_{t \geq 0}$ be an (X, W)-CMJ branching process satisfying Condition (iii) of Assumption 1, and let $L \in \mathbb{N}$ . Then, almost surely, for all $t \in (0,\infty)$ , we have $|\mathscr{T}_{t}\cap \mathcal{U}_L| < \infty$ .

Recall that, for an (X, W)-CMJ branching process $(\mathscr{T}_{t})_{t \geq 0}$ , we have

\[\tau_{\infty} = \lim_{k \to \infty} \tau_{k} = \inf\left\{t > 0\;:\; |\mathscr{T}_t| = \infty\right\}.\]

Recall also that we have $\mathcal{T}_{\infty} = \bigcup_{k=1}^{\infty} \mathcal{T}_{k} = \bigcup_{k=1}^{\infty} \mathscr{T}_{\tau_{k}}$ . We then have the following result.

Lemma 2 ([Reference Iyer and Lodewijks11].) Let $(\mathscr{T}_{t})_{t \geq 0}$ be an (X, W)-CMJ branching process satisfying Condition (iii) of Assumption 1. Then, almost surely, $\mathcal{T}_{\infty} = \left\{u \in \mathcal{U}_{\infty}\;:\; \mathcal{B}(u) < \tau_{\infty}\right\} \subseteq \mathscr{T}_{\tau_{\infty}}$ .

Finally, we relate $\tau_\infty$ to the explosion times of all individuals in $\mathcal{U}_\infty$ .

Lemma 3 ([Reference Iyer and Lodewijks11].) Let $(\mathscr{T}_{t})_{t \geq 0}$ be an (X, W)-CMJ branching process that satisfies Assumption 1. Then, almost surely, $\tau_{\infty} = \inf_{u \in \mathcal{U}_{\infty}}\{\mathcal{B}(u) + \mathcal{P}(u)\}$ .

Combining these three results with Proposition 1, we are ready to prove Theorem 1. Although the proof is essentially identical to the proof of Theorem $2.5$ in [Reference Iyer and Lodewijks11], which uses these three results and a version of Proposition 1 proved under stronger assumptions, we include it so that the article is self-contained.

Proof of Theorem 1. Fix $K'$ as in Proposition 1. We may view any $w \in \mathcal{U}_{\infty}$ as a concatenation $w = uv$ , where $u \in \mathcal{U}_{K'}$ is $K'$ -moderate, and $v = v_1 \cdots v_{k}$ , where $v_{1} > K'$ (here we also allow v to be empty, so that $K'$ -moderate nodes w may also be interpreted as such a concatenation). Now, note that (on the event $\mathcal{B}(u) < \infty$ ) the birth times satisfy $\mathcal{B}(uv) - \mathcal{B}(u) \sim \mathcal{B}(v)$ , and thus, by arguments analogous to those appearing in Proposition 1, for any $u \in \mathcal{U}_{\infty}$ (in particular for $u \in \mathcal{U}_{K'}$ ),

(3.7) \begin{equation} \mathrm{E}\left[\left| \left\{a = a_1 \cdots a_{m} \in \mathcal{U}_\infty\;:\; a_1>K', u a\text{ explodes before } ua_{|_{m-1}}, ua_{|_{m-2}}, \ldots, u\right\}\right| \right]<\infty.\end{equation}

Now, since $\tau_{\infty} < \infty$ almost surely, we infer from Proposition 2 with $L = K'$ , that $|\{u \in \mathcal{U}_{K'}\;:\; \mathcal{B}(u) \leq \tau_{\infty}\}| < \infty$ almost surely. Therefore, by (3.7), the set

\begin{equation*} E_{\mathrm{expl}}\;:\!=\;\left\{u \in \mathcal{U}_\infty\;:\; \mathcal{B}(u) \leq \tau_{\infty}, u \text{ explodes before all of its ancestors} \right\}\end{equation*}

is finite almost surely. By the definition of $E_{\mathrm{expl}}$ and the fact that the infimum of a finite set is attained by (at least) one of its elements, Lemma 3 implies that, almost surely,

\[\exists u^{*}\in E_{\mathrm{expl}}\;:\; \mathcal{B}(u^{*}) + \mathcal{P}(u^{*}) = \inf_{v \in E_{\mathrm{expl}}} \{\mathcal{B}(v) + \mathcal{P}(v)\} = \inf_{u \in \mathcal{U}_\infty} \{\mathcal{B}(u) + \mathcal{P}(u)\} = \tau_{\infty}.\]

This implies that $u^{*}$ has infinite degree in $\mathscr{T}_{\tau_{\infty}}$ . By Condition (iii) of Assumption 1, we have $\sum_{i=n+1}^{\infty} X(u^{*}i) > 0$ almost surely for every $n\in\mathbb{N}$ , so that $\mathcal{B}(u^{*}i) < \tau_{\infty}$ for each $i\in\mathbb{N}$ , almost surely. Therefore, by Lemma 2, $u^{*}$ has infinite degree in $\mathcal{T}_{\infty}$ as well, which yields the desired result.

3.2. Proof of Theorem 3

Proof of Theorem 3. The desired result follows by verifying the conditions on $ \lambda_n$ and the vertex-weights in Corollary 1. We start by constructing a sequence $(\lambda_n)_{n\in\mathbb{N}}$ such that there exists $N\in\mathbb{N}$ for which we have $\lambda_n<f(i,w)$ , for all $i\geq n\geq N$ . We recall that there exist $\epsilon>0$ , $n_0\in\mathbb{N}$ , and a sequence $(k_n)_{n\in\mathbb{N}}$ such that $\mathrm{P}\left(g(W)>k_n\right)\leq n^{-(1+\epsilon)}$ , for all $n\geq n_0$ . Then, let $\delta\in(0,\epsilon)$ , as in the statement of Theorem 3, recall $\mu_n$ from (2.2), and set

(3.8) \begin{equation}\lambda_n\;:\!=\;\delta\log(n)\mu_n^{-1}.\end{equation}

It is clear that $\mu_n$ is decreasing in n and tends to zero by (2.5), so that $\lambda_n$ is increasing and diverges. Then, by Assumption 2, we fix $\eta>0$ small and take N large so that, for all $n\geq N$ ,

(3.9) \begin{equation} \mu_n^{-1}=\Bigg(\sum_{j=n+1}^\infty \frac{1}{g(w^*)s(j)}\Bigg)^{-1}\leq g(w^*)\Bigg(\sum_{j=n+1}^\infty \frac{1}{\eta j^p}\Bigg)^{-1}=(\eta g(w^*)(p-1)+o(1)) n^{p-1}.\end{equation}

At the same time, again for N large and all $n\geq N$ , we have $s(n)\geq \eta^{-1} n^\beta$ , so that

\begin{align}\lambda_n=\delta \log(n)\mu_n^{-1} & \leq (\eta \delta g(w^*)(p-1)+o(1) )\log(n) n^{p-1} \\& \leq (\eta^2 \delta g(w^*)(p-1)+o(1) )\frac{\log(n)}{n^{(1+\beta)-p}} s(n).\end{align}

The final condition of Assumption 2 then implies that

\begin{equation*}\lambda_n\leq (\eta^2 \delta C g(w^*)(p-1)+o(1) ) \inf_{i\geq n}s(i),\end{equation*}

for all $n\geq N$ and some large $N\in\mathbb{N}$ . As we can choose $\eta$ arbitrarily small, we can make the constant on the right-hand side arbitrarily small, so that

\begin{equation*}\lambda_n=o(\inf_{i\geq n}s(i)),\end{equation*}

as desired.

We then prove (2.3). By the choice of f and assuming (without loss of generality) that $g(w^*)=1$ , we have that (2.3) is equivalent to

(3.10) \begin{equation} \sum_{n=N}^\infty \prod_{i=n}^\infty \bigg(\frac{s(i)}{s(i)-\lambda_n}\bigg)\mathrm{E}\Bigg[\prod_{i=0}^{n-1}\frac{g(W)s(i)}{g(W)s(i)+\lambda_n}\Bigg]<\infty.\end{equation}

By using that $1-x\leq \mathrm{e}^{-x}$ for all $x\in \mathbb R$ , we can bound each term in the sum from above by

(3.11) \begin{multline} \exp{}\Bigg(\lambda_n \sum_{i=n}^\infty \frac{1}{s(i)-\lambda_n}-\lambda_n \sum_{i=0}^{n-1} \frac{1}{k_ns(i)+\lambda_n}\Bigg)+\exp\Bigg(\lambda_n \sum_{i=n}^\infty \frac{1}{s(i)-\lambda_n}\Bigg)\mathrm{P}\left(g(W)>k_n\right)\\\leq{} \exp\Bigg(\lambda_n\sum_{i=n}^\infty \frac{1}{s(i)-\lambda_n}- \frac{\lambda_n}{k_n}\sum_{i=0}^{n-1}\frac{1}{s(i)+\lambda_n /k_n}\Bigg)+\exp\Bigg(\lambda_n\sum_{i=n}^\infty\frac{1}{s(i)-\lambda_n}\Bigg)\frac{1}{n^{1+\epsilon}},\end{multline}

where we take $N\geq n_0$ and use the fact that $\mathrm{P}\left(g(W)>k_n\right)\leq n^{-(1+\epsilon)}$ for all $n\geq N$ to arrive at the upper bound. Using $\lambda_n=o(\inf_{i\geq n}s(i))$ and $g(w^*)=1$ ,

\begin{equation*}\sum_{i=n}^\infty \frac{1}{s(i)-\lambda_n}=(1+o(1))\mu_n.\end{equation*}

Then, by the choice of $\lambda_n$ in (3.8), we can write (3.11) as

(3.12) \begin{equation} \exp\Bigg((\delta+o(1))\log(n)- \frac{\lambda_n}{k_n}\sum_{i=0}^{n-1}\frac{1}{s(i)+\lambda_n /k_n}\Bigg)+n^{-(1+\epsilon-\delta+o(1))}.\end{equation}

Since $\delta<\epsilon$ , the second term is summable in n. It thus remains to bound the first term.

We recall that $p\in(1,1+\beta)$ and that $k_n\geq g(w^*)=1$ for all n. As a result, for some constant $C_\beta>0$ and using (3.9),

\begin{equation*}\left(\frac{\lambda_n}{k_n}\right)^{1/\beta}\leq \lambda_n^{1/\beta}\leq C_\beta (\!\log n)^{1/\beta} n^{(p-1)/\beta}=o(n).\end{equation*}

We then observe that $\lambda_n/k_n$ tends to infinity with n. Indeed, as $\sup_{x\geq 0}\mu_x$ is finite (where we define $\mu_x$ for non-integer x by linear interpolation), for all n large,

\begin{equation*}\frac{\lambda_n}{k_n}=\delta \frac{\log n}{\mu_n k_n}\geq \frac{\mu_{(\delta \log (n)/(\mu_nk_n))^{1/\beta}}}{\mu_nk_n},\end{equation*}

and the right-hand side tends to infinity with n by the condition in (2.8). Further, Assumption 2 yields that $s(i)\geq M i^\beta$ for any M large and all $i\geq I=I(M)\in\mathbb{N}$ . Combined, we obtain, for all n large, that

\begin{equation*}s(i)\geq Mi^\beta \geq M\frac{\lambda_n}{k_n}, \quad \text{for all } i\geq \left(\frac{\lambda_n}{k_n}\right)^{1/\beta}.\end{equation*}

As a result,

\begin{equation*}\sum_{i=0}^{n-1}\frac{1}{s(i)+\lambda_n /k_n}\geq \sum_{i=(\lambda_n/k_n)^{1/\beta}}^{n-1}\frac{1}{s(i)+\lambda_n/k_n}=\frac{\mu_{(\lambda_n/k_n)^{1/\beta}}-\mu_n}{1+1/M}.\end{equation*}

Since $\lambda_n=\delta \log(n)\mu_n^{-1}$ , we use this in the first term in (3.12) to obtain

\begin{equation*}\exp\left(\left(\delta\frac{2M+1}{M+1}+o(1)\right)\log(n)- \delta\frac{M}{M+1}\log(n)\frac{\mu_{(\lambda_n/k_n)^{1/\beta}}}{\mu_nk_n}\right).\end{equation*}

By the assumption in (2.6), it follows that we can bound the entire term from above by $n^{-C}$ for any $C>0$ , since the ratio $\mu_{(\lambda_n/k_n)^{1/\beta}}/(\mu_nk_n)$ can be bounded from below by a sufficiently large constant for all n large by (2.8). This yields (3.10) and concludes the proof.

4. Examples

In this section, we discuss the examples that Theorem 4 deals with. The proof comes down to verifying the conditions in Theorem 3. We recall that we set $g(w)=w+1$ . In each case, we asymptotically determine $\mu_n$ , provide $k_n$ , and show that the required assumptions are satisfied.
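Before the case-by-case analysis, the explosion mechanism itself can be illustrated with a small Monte Carlo sketch (illustrative only, not part of the proofs): with birth rates $s(i)=(i+1)^2$, the time for a single individual to produce all of its infinitely many children is a sum of independent exponentials with summable means $\sum_i 1/s(i)=\pi^2/6<\infty$, and is hence finite almost surely.

```python
import math
import random

random.seed(1)

def time_to_explode(s, kmax=500):
    # time for one individual to produce its first kmax children, where the
    # wait for the (i+1)st child is exponential with rate s(i)
    return sum(random.expovariate(s(i)) for i in range(kmax))

s = lambda i: (i + 1) ** 2  # super-linear: sum_i 1/s(i) = pi^2/6 < infinity
times = [time_to_explode(s) for _ in range(4000)]
mean = sum(times) / len(times)
# the mean waiting time is close to sum_i 1/s(i) = pi^2/6, roughly 1.645
assert 1.5 < mean < 1.8
```

With a linear rate $s(i)=i+1$ instead, the partial sums grow like $\log k$ and the same simulation would show no finite limit: this is the Malthusian, non-explosive regime.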

Proof of Theorem 4. $\;$

Infinite star. To prove there exists a unique infinite star almost surely, we verify the conditions in Theorem 3.

Case (i). It is clear that Assumption 2 is satisfied with $\beta=1$ and any $p\in(1,2)$ . By switching from summation to integration:

(4.1) \begin{equation} \mu_n=\sum_{i=n}^\infty \frac{1}{s(i)}=\int_n^\infty\frac{1+o(1)}{x(\!\log x)^\sigma}\,\mathrm d x=\int_{\log n}^\infty \frac{1+o(1)}{y^\sigma}\, \mathrm d y=\big((\sigma-1)^{-1}+o(1)\big)(\!\log n)^{-(\sigma-1)}.\end{equation}

We set $k_n\;:\!=\;((1+\epsilon)\log n)^{1/\kappa}$ with $\epsilon>0$ , and take $\delta\in(0,\epsilon)$ . There exists $C_1>0$ such that

(4.2) \begin{equation} \mathrm{P}\left(g(W)>k_n\right)\leq C_1\mathrm{e}^{-k_n^\kappa}=C_1n^{-(1+\epsilon)},\end{equation}

which is summable. Then there exists $C_2>0$ such that

\begin{equation*}\frac{\mu_{\delta \log(n)/(\mu_nk_n)}}{\mu_nk_n}= \left (C_2+o(1) \right )\frac{(\!\log n)^{\sigma-1}(\!\log\log n)^{-(\sigma-1)}}{(\!\log n)^{1/\kappa}}=(\!\log n)^{(\sigma-1)-1/\kappa-o(1)}.\end{equation*}

When $(\sigma-1)\kappa>1$ , this quantity diverges with n, so that the condition in Theorem 3 holds.
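The summation-to-integration step in (4.1) can be sanity-checked numerically; a rough sketch with the hypothetical value $\sigma=3$ and a finite truncation of the tail:

```python
import math

def tail_integral(a, b, sigma):
    # integral of dx / (x (log x)^sigma) from a to b:
    # ((log a)^{1-sigma} - (log b)^{1-sigma}) / (sigma - 1)
    return (math.log(a) ** (1 - sigma) - math.log(b) ** (1 - sigma)) / (sigma - 1)

sigma, n, N = 3.0, 100, 10**6
partial_sum = sum(1.0 / (i * math.log(i) ** sigma) for i in range(n, N))
# the discrete sum tracks the integral up to lower-order corrections
assert abs(partial_sum - tail_integral(n, N, sigma)) / partial_sum < 0.01
```

The discrepancy is of order $1/(n(\log n)^\sigma)$, which is negligible compared with the tail sum itself, as the $(1+o(1))$ factors in (4.1) indicate.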

Case (ii). It is clear that Assumption 2 is satisfied with $\beta=1$ and any $p\in(1,2)$ . By switching from summation to integration and substituting $y=(\!\log\log x)^\nu$ :

(4.3) \begin{equation} \mu_n=\sum_{i=n}^\infty \frac{1}{s(i)}=\int_n^\infty \!\!\!\frac{1+o(1)}{x\log x\exp((\!\log\log x)^\nu)}\,\mathrm d x= \left (1+o(1) \right )\int_{(\!\log\log n)^\nu}^\infty \!\!\!\!\!\!\!\!\!\nu^{-1}y^{1/\nu-1}\mathrm{e}^{-y}\,\mathrm d y.\end{equation}

By using the incomplete gamma function, we arrive at $\mu_n=\exp(-(1+o(1))(\!\log\log n)^\nu)$ . We set $k_n\;:\!=\;\exp((\!\log((1+\epsilon)\log n))^{1/\gamma})$ with $\epsilon>0$ , and take $\delta\in(0,\epsilon)$ . There exists $C_1>0$ such that

(4.4) \begin{equation} \mathrm{P}\left(g(W)>k_n\right)\leq C_1\exp\!\big(-\mathrm{e}^{(\!\log k_n)^\gamma}\big)=C_1n^{-(1+\epsilon)},\end{equation}

which is summable. Then

\begin{equation*}\frac{\mu_{\delta \log(n)/(\mu_nk_n)}}{\mu_nk_n}=\exp\Big(\big[(\!\log\log n)^\nu-(\!\log\log n)^{1/\gamma}-\big(1-\tfrac1\gamma\big)^\nu (\!\log\log\log n)^\nu\big](1+o(1))\Big).\end{equation*}

When $\nu\gamma>1$ , this quantity diverges with n, so that the condition in Theorem 3 holds.

Case (iii). It is clear that Assumption 2 is satisfied with $\beta=1$ and any $p\in(1,2)$ . By switching from summation to integration and with similar steps as in (4.1):

(4.5) \begin{equation} \mu_n=\sum_{i=n}^\infty \frac{1}{s(i)}=\int_n^\infty \frac{1+o(1)}{x\log(x)(\!\log\log x)^\sigma}\,\mathrm d x=\big((\sigma-1)^{-1}+o(1)\big)(\!\log\log n)^{-(\sigma-1)}.\end{equation}

We set $k_n\;:\!=\;(\!\log((1+\epsilon)\log n))^{1/\kappa}$ with $\epsilon>0$ , and take $\delta\in(0,\epsilon)$ . There exists $C_1>0$ such that

(4.6) \begin{equation} \mathrm{P}\left(g(W)>k_n\right)\leq C_1\exp(-\mathrm{e}^{k_n^\kappa})=C_1n^{-(1+\epsilon)},\end{equation}

which is summable. Then there exists $C_2>0$ such that

\begin{equation*}\frac{\mu_{\delta \log(n)/(\mu_nk_n)}}{\mu_nk_n}= \left (C_2+o(1) \right )\frac{(\!\log \log n)^{\sigma-1}(\!\log\log\log n)^{-(\sigma-1)}}{(\!\log\log n)^{1/\kappa}}=(\!\log \log n)^{(\sigma-1)-1/\kappa-o(1)}.\end{equation*}

When $(\sigma-1)\kappa>1$ , this quantity diverges with n, so that the condition in Theorem 3 holds.

Case (iv). It is clear that the first part of Assumption 2 is satisfied for any $\beta<\alpha$ and any $p\in(1,1+\beta)$ . Since, for large n, the smallest value s(i) among all $i\geq n$ is attained at $i=\lceil \sqrt n\rceil^2$ , with value $s(i)=\lceil \sqrt n\rceil^{2\alpha}=(1+o(1))n^\alpha$ , it follows that the second part of Assumption 2 is also satisfied, since $\alpha\in(\tfrac12,1]$ and we can thus choose p close enough to 1 and $\beta$ close enough to $\alpha$ so that $1+\beta-p>\tfrac12>1-\alpha$ . Hence, for all n large,

\begin{equation*}s(n)\leq (n+1)(\!\log(n+2))^\sigma \leq \frac{n^{1+\beta-p}}{\log n} n^\alpha=(1+o(1))\frac{n^{1+\beta-p}}{\log n} \inf_{i\geq n}s(i).\end{equation*}

The last part of Assumption 2 is thus satisfied with $C>1$ and N sufficiently large. Then

(4.7) \begin{equation} \mu_n =\sum_{i=n}^\infty \frac{1}{s(i)}=\sum_{\substack{i=n\\ \sqrt i\in\mathbb{N}}}^\infty \frac{1}{i^\alpha}+\sum_{\substack{i=n\\ \sqrt i\not\in\mathbb{N}}}^\infty \frac{1}{(i+1)(\!\log(i+2))^\sigma}.\end{equation}

As $i^\alpha\leq (i+1)(\!\log(i+2))^\sigma$ for all $i\geq n$ when n is large, since $\alpha\leq 1$ , we can use (4.1) to obtain the lower bound:

\begin{equation*}\mu_n\geq \big((\sigma-1)^{-1}+o(1)\big)(\!\log n)^{-(\sigma-1)}.\end{equation*}

For an upper bound, we include all $i\geq n$ in the second sum in (4.7) and again use (4.1). The first sum on the right-hand side of (4.7) is then bounded from above by

\begin{equation*}\sum_{\substack{i=n\\ \sqrt i\in\mathbb{N}}}^\infty\frac{1}{i^\alpha}=\sum_{i=\lceil \sqrt n\rceil}^\infty i^{-2\alpha}= \frac{1+o(1)}{2\alpha -1}n^{1/2-\alpha}.\end{equation*}

As a result, the upper bound matches the lower bound, regardless of the value of $\alpha\in(\tfrac12,1]$ , and we obtain

(4.8) \begin{equation} \mu_n=((\sigma-1)^{-1}+o(1))(\!\log n)^{-(\sigma-1)}.\end{equation}

The remainder of the computations then follow the same approach as Case (i).

Infinite path. To prove that there exists a unique infinite path, we use [Reference Iyer and Lodewijks11, Lemma $6.4$ ] to verify the conditions in Theorem 2. That is, if, for any $w\geq 0$ ,

\begin{equation*}\limsup_{n\to\infty}\frac{1}{\mu_n^w}\sum_{i=0}^\infty \frac{1}{f(i,k_n)}<1,\end{equation*}

where $k_n$ is a sequence such that $\mathrm{P}\left(g(W)>k_n\right)$ is not summable, then $T_\infty$ contains a unique infinite path and no node of infinite degree, almost surely. As $f(i,x)=g(x)s(i)=(x+1)s(i)$ , it follows from (2.5) that it suffices to show that $\mu_n k_n$ diverges.

Case (i). With $\mu_n$ as in (4.1) and $k_n\;:\!=\;((1-\epsilon)\log n)^{1/\kappa}$ , it follows that $\mathrm{P}\left(g(W)>k_n\right)$ is not summable when the inequality in (4.2) is reversed (and using a constant $C_1'<C_1$ ). We then conclude that $\mu_nk_n$ diverges when $(\sigma-1)\kappa<1$ .

Case (ii). With $\mu_n$ as in (4.3) and $k_n\;:\!=\;\exp((\!\log((1-\epsilon)\log n))^{1/\gamma})$ , it follows that $\mathrm{P}\left(g(W)>k_n\right)$ is not summable when the inequality in (4.4) is reversed (and using a constant $C_1'<C_1$ ). We then conclude that $\mu_nk_n$ diverges when $\nu \gamma <1$ .

Case (iii). With $\mu_n$ as in (4.5) and $k_n\;:\!=\;(\!\log((1-\epsilon)\log n))^{1/\kappa}$ , it follows that $\mathrm{P}\left(g(W)>k_n\right)$ is not summable when the inequality in (4.6) is reversed (and using a constant $C_1'<C_1$ ). We then conclude that $\mu_nk_n$ diverges when $(\sigma-1)\kappa<1$ .

Case (iv). With $\mu_n$ as in (4.8), we can follow the same steps as in Case (i) to arrive at the desired result.
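The Case (i) dichotomy can be summarised numerically at leading order: with $\mu_n\asymp(\log n)^{1-\sigma}$ and $k_n\asymp(\log n)^{1/\kappa}$, the product $\mu_nk_n$ behaves like $(\log n)^{1/\kappa-(\sigma-1)}$. A sketch dropping constants and iterated-logarithm corrections, with hypothetical parameter values on either side of $(\sigma-1)\kappa=1$:

```python
import math

def mu_times_k(sigma, kappa, ns=(1e3, 1e9, 1e27)):
    # leading order in Case (i): mu_n * k_n ~ (log n)^{1/kappa - (sigma - 1)}
    return [math.log(n) ** (1 / kappa - (sigma - 1)) for n in ns]

# (sigma - 1) * kappa < 1: mu_n * k_n diverges -> unique infinite path
path = mu_times_k(sigma=1.5, kappa=1.0)
assert path[0] < path[1] < path[2]

# (sigma - 1) * kappa > 1: mu_n * k_n vanishes, while the ratio in (2.8)
# diverges -> unique infinite star
star = mu_times_k(sigma=3.0, kappa=1.0)
assert star[0] > star[1] > star[2]
```

The boundary case $(\sigma-1)\kappa=1$ gives a constant exponent and is not covered by either criterion, consistent with the theorem's strict inequalities.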

Acknowledgements

The author thanks Thomas Gottfried for some useful discussions, as well as Tejas Iyer for the stimulating collaboration on work that preceded this paper and useful discussions.

Funding information

The author has received funding from the European Union’s Horizon 2022 research and innovation programme under the Marie Skłodowska-Curie grant agreement no. 101108569.

Competing interests

There were no competing interests to declare that arose during the preparation or publication process of this article.

References

Amini, O., Devroye, L., Griffiths, S. and Olver, N. (2013). On explosions in heavy-tailed branching random walks. Ann. Probab. 41, 1864–1899.
Athreya, K. B. and Karlin, S. (1968). Embedding of urn schemes into continuous time Markov branching processes and related limit theorems. Ann. Math. Stat. 39, 1801–1817.
Athreya, K. B. and Ney, P. E. (1972). Branching Processes (Die Grundlehren der mathematischen Wissenschaften 196). Springer, New York.
Crump, K. S. and Mode, C. J. (1968). A general age-dependent branching process. I. J. Math. Anal. Appl. 24, 494–508.
Crump, K. S. and Mode, C. J. (1968). A general age-dependent branching process. II. J. Math. Anal. Appl. 25, 8–17.
Grey, D. (1974). Explosiveness of age-dependent branching processes. Z. Wahrsch. Verw. Gebiete 28, 129–137.
Grishechkin, S. (1987). On the regularity of branching processes with several types of particles. Theory Probab. Appl. 31, 233–243.
Iyer, T. (2023). Degree distributions in recursive trees with fitnesses. Adv. Appl. Probab. 55, 407–443.
Iyer, T. (2024). On a sufficient condition for explosion in CMJ branching processes and applications to recursive trees. Electron. Commun. Probab. 29, 1–12.
Iyer, T. (2024). Persistent hubs in CMJ branching processes with independent increments and preferential attachment trees. arXiv preprint arXiv:2410.24170.
Iyer, T. and Lodewijks, B. (2023). On the structure of genealogical trees associated with explosive Crump–Mode–Jagers branching processes. arXiv preprint arXiv:2311.14664.
Jagers, P. (1969). A general stochastic model for population development. Scand. Actuarial J. 1969, 84–103.
Jagers, P. (1975). Branching Processes with Biological Applications (Wiley Series in Probability and Mathematical Statistics—Applied Probability and Statistics). Wiley-Interscience, London.
Komjáthy, J. (2016). Explosive Crump–Mode–Jagers branching processes. arXiv preprint arXiv:1602.01657.
Last, G. and Penrose, M. (2018). Lectures on the Poisson Process (Institute of Mathematical Statistics Textbooks 7). Cambridge University Press.
Oliveira, R. and Spencer, J. (2005). Connectivity transitions in networks with super-linear preferential attachment. Internet Math. 2, 121–163.
Sagitov, S. (2017). Tail generating functions for extendable branching processes. Stochastic Processes Appl. 127, 1649–1675.
Sagitov, S. and Lindo, A. (2016). A special family of Galton–Watson processes with explosions. In Branching Processes and Their Applications, eds I. del Puerto, Springer, Cham, pp. 237–254.
Sevast’yanov, B. A. (1967). On the regularity of branching processes. Mathematical Notes of the Academy of Sciences of the USSR 1, 34–40.
Sevast’yanov, B. A. (1970). Necessary condition for the regularity of branching processes. Mathematical Notes of the Academy of Sciences of the USSR 7, 234–238.
Vatutin, V. (1987). Sufficient conditions for regularity of Bellman–Harris branching processes. Theory Probab. Appl. 31, 5057.CrossRefGoogle Scholar