
The distance profile of rooted and unrooted simply generated trees

Published online by Cambridge University Press:  18 August 2021

Gabriel Berzunza Ojeda*
Affiliation:
Department of Mathematical Sciences, University of Liverpool, Mathematical Sciences Building, Liverpool L69 7ZL, UK
Svante Janson
Affiliation:
Department of Mathematics, Uppsala University, PO Box 480, Uppsala SE-751 06, Sweden
*Corresponding author. Email: gabriel.berzunza-ojeda@liverpool.ac.uk

Abstract

It is well known that the height profile of a critical conditioned Galton–Watson tree with finite offspring variance converges, after a suitable normalisation, to the local time of a standard Brownian excursion. In this work, we study the distance profile, defined as the profile of all distances between pairs of vertices. We show that after a proper rescaling the distance profile converges to a continuous random function that can be described as the density of distances between random points in the Brownian continuum random tree. We show that this limiting function a.s. is Hölder continuous of any order $\alpha<1$ , and that it is a.e. differentiable. We note that it cannot be differentiable at 0, but leave as open questions whether it is Lipschitz, and whether it is continuously differentiable on the half-line $(0,\infty)$ . The distance profile is naturally defined also for unrooted trees contrary to the height profile that is designed for rooted trees. This is used in our proof, and we prove the corresponding convergence result for the distance profile of random unrooted simply generated trees. As a minor purpose of the present work, we also formalize the notion of unrooted simply generated trees and include some simple results relating them to rooted simply generated trees, which might be of independent interest.

Type
Paper
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2021. Published by Cambridge University Press

1. Introduction

Consider a random simply generated tree. (For definitions of this and other concepts in the introduction, see Sections 2–3.) Under some technical conditions, amounting to the tree being equivalent to a critical conditioned Galton–Watson tree with finite offspring variance, the (height) profile of the tree converges in distribution, as a random function in $C[0,\infty)$ . Moreover, the limiting random function can be identified with the local time of a standard Brownian excursion; this was conjectured by Aldous [Reference Aldous3] and proved by Drmota and Gittenberger [Reference Drmota and Gittenberger18] (under a stronger assumption), see also Drmota [Reference Drmota17, Section 4.2], and in general by Kersting [Reference Kersting34] in a paper that unfortunately remains unpublished. See further Pitman [Reference Pitman47] for related results and a proof in a special case. See also Kersting [Reference Kersting34] for extensions when the offspring variance is infinite, a case not considered in the present paper.

Remark 1.1. To be precise, [Reference Drmota17] and [Reference Drmota and Gittenberger18] assume that the offspring distribution for the conditioned Galton–Watson tree has a finite exponential moment. As said in [Reference Drmota17, footnote on page 127], the analysis can be extended, but it seems that the proof of tightness in [Reference Drmota17], which is based on estimating fourth moments, requires a finite fourth moment of the offspring distribution.

Note also that Drmota [Reference Drmota17, ‘a shortcut’ pp. 123–125], besides the proof from [Reference Drmota and Gittenberger18], also gives an alternative proof that combines tightness (taken from the first proof) with the convergence of the contour process to a Brownian excursion shown by Aldous [Reference Aldous4], and thus avoids some intricate calculations in the first proof. We will use this method of proof below.

Using notation introduced below, the result can be stated as follows.

Theorem 1.2. (Drmota and Gittenberger [Reference Drmota and Gittenberger18], Kersting [Reference Kersting34]). Let $L_n$ be the (height) profile of a conditioned Galton–Watson tree of order n, with an offspring distribution that has mean 1 and finite variance $\sigma^2$ . Then, as ${{n\to\infty}}$ ,

(1) \begin{align} n^{-1/2} L_n(x n^{1/2}) \overset{\textrm{d}}{\longrightarrow} \frac{\sigma}2L_\textbf{e}\Bigl({\frac{\sigma}2 x}\Bigr),\end{align}

in the space $C[0,\infty]$ , where $L_\textbf{e}$ is a random function that can be identified with the local time of a standard Brownian excursion $\textbf{e}$ ; this means that for every bounded measurable $f\,:\,[0,\infty)\to\mathbb R$ ,

(2) \begin{align}\int_0^\infty f(x) L_{{\textbf{e}}}(x) \,\textrm{d} x=\int_0^1 f\bigl({{\textbf{e}}(t)}\bigr)\,\textrm{d} t.\end{align}

Remark 1.3. This result is often stated with convergence (1) in the space $C[0,\infty)$ ; the version stated here with $C[0,\infty]$ is somewhat stronger but follows easily. (Note that the maximum is a continuous functional on $C[0,\infty]$ but not on $C[0,\infty)$ .) See further Section 2.4.
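For intuition, Theorem 1.2 is easy to illustrate numerically. The sketch below (helper names are ours, not from the paper) samples a conditioned Galton–Watson tree with Geometric(1/2) offspring law ($p_k=2^{-k-1}$, mean 1, variance $\sigma^2=2$) via the cycle lemma, and computes its height profile $L_n$.

```python
import random
from collections import Counter

def geometric_half(rng):
    # Offspring law p_k = 2^(-k-1): mean 1, variance sigma^2 = 2.
    k = 0
    while rng.random() < 0.5:
        k += 1
    return k

def conditioned_gw_degrees(n, offspring, rng):
    """DFS outdegree sequence of a Galton-Watson tree conditioned on
    size n: rejection-sample n degrees summing to n - 1, then rotate
    the Lukasiewicz path at its first minimum (cycle lemma)."""
    while True:
        degs = [offspring(rng) for _ in range(n)]
        if sum(degs) == n - 1:
            break
    s, best, argmin = 0, 0, -1
    for i, d in enumerate(degs):
        s += d - 1
        if s < best:
            best, argmin = s, i
    k = (argmin + 1) % n
    return degs[k:] + degs[:k]

def depths_from_degrees(degs):
    """Depth of each vertex (DFS preorder) from the outdegree sequence."""
    depths, stack = [], []
    for d in degs:
        depths.append(len(stack))
        stack.append(d)
        while stack and stack[-1] == 0:
            stack.pop()
            if stack:
                stack[-1] -= 1
    return depths

rng = random.Random(1)
degs = conditioned_gw_degrees(200, geometric_half, rng)
profile = Counter(depths_from_degrees(degs))  # profile L_n(i)
```

Rescaling `profile` as in (1) and repeating over many samples would produce empirical approximations of $\frac{\sigma}{2}L_\textbf{e}(\frac{\sigma}{2}x)$.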

The profile discussed above is the profile of the distances from the vertices to the root. Consider now instead the distance profile, defined as the profile of all distances between pairs of points. (Again, see Section 2 for details.) One of our main results is the following analogue of Theorem 1.2.

Theorem 1.4. Let $\Lambda_n$ be the distance profile of a conditioned Galton–Watson tree of order n, with an offspring distribution that has mean 1 and finite variance $\sigma^2>0$ . Then, as ${{n\to\infty}}$ ,

(3) \begin{align} n^{-3/2}\Lambda_n\bigl({x n^{1/2}}\bigr)\overset{\textrm{d}}{\longrightarrow}\frac{\sigma}2\Lambda_{{\textbf{e}}}\Bigl({\frac{\sigma}2 x}\Bigr),\end{align}

in the space $C[0,\infty]$ , where $\Lambda_{{\textbf{e}}}(x)$ is a continuous random function that can be described as the density of distances between random points in the Brownian continuum random tree [Reference Aldous2–Reference Aldous4]; equivalently, for a standard Brownian excursion $\textbf{e}$ , we have for every bounded measurable $f\,:\,[0,\infty)\to\mathbb R$ ,

(4) \begin{align}\int_0^\infty f(x) \Lambda_{{\textbf{e}}}(x) \,\textrm{d} x=2\iint_{0<s<t<1} f\bigl({{{\textbf{e}}}(s)+{{\textbf{e}}}(t)-2\min_{u\in[s,t]} {{\textbf{e}}}(u) }\bigr)\,\textrm{d} s\,\textrm{d} t.\end{align}

The random distance profile $\Lambda_n$ was earlier studied in [Reference Devroye and Janson16], where the estimate (123) below was shown.
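For a concrete tree, $\Lambda_n$ can be computed directly by a breadth-first search from every vertex; the helper below (an illustrative sketch, not from the paper) counts ordered pairs including $v=w$, matching the convention used here.

```python
from collections import Counter, deque

def distance_profile(adj):
    """Distance profile Lambda(i) = #{ordered pairs (v, w) : d(v, w) = i},
    including v = w, via one BFS per source vertex (O(n^2) total)."""
    lam = Counter()
    for s in range(len(adj)):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        lam.update(dist.values())
    return lam

# Path on three vertices: Lambda(0) = 3, Lambda(1) = 4, Lambda(2) = 2.
lam = distance_profile([[1], [0, 2], [1]])
```

Note that $\sum_i \Lambda(i) = n^2$, the number of ordered pairs.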

Remark 1.5. It is easy to see that the random function $\Lambda_\textbf{e}$ really is random and not deterministic, e.g. as a consequence of Theorem 13.1. However, we do not know its distribution, although the expectation ${\mathbb E{}}\Lambda_\textbf{e}(x)$ is given in Lemma 15.11. In particular, the following problem is open. (See [Reference Drmota17, Section 4.2.1] for such results, in several different forms, for $L_\textbf{e}$ .)

Problem 1.6. Find a description of the (one-dimensional) distribution of $\Lambda_\textbf{e}(x)$ for fixed $x>0$ .

We have so far discussed rooted trees. However, the distance profile is defined also for unrooted trees, and we will find it convenient to use unrooted trees in parts of the proof. This leads us to consider random unrooted simply generated trees.

Simply generated families of rooted trees were introduced by Meir and Moon [Reference Meir and Moon42], leading to the notion of simply generated random rooted trees, see e.g. Drmota [Reference Drmota17]. This class of random rooted trees is one of the most popular classes of random trees, and these trees have been frequently studied in many different contexts by many authors. Simply generated random unrooted trees have been much less studied, but they have occurred, e.g. in a work on non-crossing trees by Kortchemski and Marzouk [Reference Kortchemski and Marzouk38] (see also Marckert and Panholzer [Reference Marckert and Panholzer41]). Nevertheless, we have not found a general treatment of them, so a minor purpose of the present paper is to do this in some detail, both for use in the paper and for future reference. We thus include (Sections 5–8) a general discussion of random unrooted simply generated trees, with some simple results relating them to rooted simply generated trees, allowing the transfer of many results for rooted simply generated trees to the unrooted case. Moreover, as part of the proof of Theorem 1.4, we prove the corresponding result (Theorem 11.2) for random unrooted simply generated trees.

As a preparation for the unrooted case, we also give (Section 4) some results (partly from Kortchemski and Marzouk [Reference Kortchemski and Marzouk38]) on modified rooted simply generated trees (Galton–Watson trees), where the root has different weights (offspring distribution) than all other vertices.

The central parts of the proof of Theorem 1.4 are given in Sections 10–12, where we use both rooted and unrooted trees. As a preparation, in Section 9, we extend Theorem 1.2 to conditioned modified Galton–Watson trees. We later also extend Theorem 1.4 to conditioned modified Galton–Watson trees (Theorem 12.1).

We end the paper with some comments and further results related to our main results. In Section 13, we discuss a simple application to the Wiener index of unrooted simply generated trees. Section 14 contains some important moment estimations of the distance profile for conditioned Galton–Watson trees as well as for its continuous counterpart $\Lambda_\textbf{e}$ . In Section 15, we establish Hölder continuity properties of the continuous random functions $L_\textbf{e}$ and $\Lambda_\textbf{e}$ . It is known that $L_\textbf{e}$ is a.s. Hölder continuous of order $\alpha$ (abbreviated to Hölder( $\alpha$ )) for $\alpha<\frac12$ , but not for $\alpha\geqslant\frac12$ . We show that $\Lambda_\textbf{e}$ is smoother; it is a.s. Hölder( $\alpha$ ) for $\alpha<1$ , and it is a.e. differentiable (Theorem 15.5). We do not know whether it is Lipschitz, or even continuously differentiable on $[0,\infty)$ , but we show that a.s. it does not have a two-sided derivative at 0 (Theorem 15.10), and we state some open problems.

Finally, some further remarks are given in Section 16.

2. Some notation

Trees are finite except when explicitly said to be infinite. Trees may be rooted or unrooted; in a rooted tree, the root is denoted o. The rooted trees may be ordered or not. The unrooted trees will always be labelled; we do not consider unrooted unlabelled trees in the present paper.

If T is a tree, then its number of vertices is denoted by $|T|$ ; this is called the order or the size of T. (Unlike some authors, we do not distinguish between order and size.) The notation $v\in T$ means that v is a vertex in T.

The degree of a vertex $v\in T$ is denoted d(v). In a rooted tree, we also define the outdegree $d^+(v)$ as the number of children of v; thus,

(5) \begin{align} d^+(v)= \begin{cases} d(v)-1, & v\neq o,\\d(v), & v=o. \end{cases}\end{align}

A leaf in an unrooted tree is a vertex v with $d(v)=1$ . In a rooted tree, we instead require $d^+(v)=0$ ; this may make a difference only for the root.

A fringe subtree in a rooted tree is a subtree consisting of some vertex v and all its descendants. We regard v as the root of the fringe tree. The branches of a rooted tree are the fringe trees rooted at the children of the root. The number of branches thus equals the degree d(o) of the root.

Let $\mathfrak{T}_n$ be the set of all ordered rooted trees of order n, and let $\mathfrak{T}\,:\!=\,\bigcup_1^\infty\mathfrak{T}_n$ . Note that $\mathfrak{T}_n$ is a finite set; we may identify the vertices of an ordered rooted tree by finite strings of positive integers, such that the root is the empty string and the children of v are vi, $i=1,\dots,d(v)$ . (Thus, an ordered rooted tree is regarded as a subtree of the infinite Ulam–Harris tree.) In fact, it is well known that $|\mathfrak{T}_n|=\frac{1}{n}\binom{2n-2}{n-1}$ , the Catalan number $C_{n-1}$ .

Let $\mathfrak{L}_n$ be the set of all unrooted trees of order n, with the labels $1,\dots,n$ ; thus $\mathfrak{L}_n$ is the set of all trees on $[n]\,:\!=\,{\{{1,\dots,n}\}}$ . $\mathfrak{L}_n$ is evidently finite and by Cayley’s formula $|\mathfrak{L}_n|=n^{n-2}$ . Let $\mathfrak{L}\,:\!=\,\bigcup_1^\infty\mathfrak{L}_n$ .
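Both counts are easy to check for small n; the snippet below (illustrative, not from the paper) recounts $\mathfrak{T}_n$ by brute-force enumeration of DFS outdegree sequences (i.e. Łukasiewicz paths) and compares with the Catalan formula.

```python
from itertools import product
from math import comb

def count_ordered_rooted_trees(n):
    """|T_n| = Catalan(n-1) = (1/n) * binom(2n-2, n-1)."""
    return comb(2 * n - 2, n - 1) // n

def brute_count(n):
    """Count DFS outdegree sequences of ordered rooted trees: the
    degrees sum to n - 1 and the Lukasiewicz partial sums of (d_i - 1)
    stay nonnegative before the last step."""
    count = 0
    for degs in product(range(n), repeat=n):
        if sum(degs) != n - 1:
            continue
        s, ok = 0, True
        for d in degs[:-1]:
            s += d - 1
            if s < 0:
                ok = False
                break
        count += ok
    return count
```

A similar check of Cayley's formula $|\mathfrak{L}_n|=n^{n-2}$ could be done via Prüfer sequences.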

A probability sequence is the same as a probability distribution on $\mathbb N_0\,:\!=\,{\{{0,1,2,\dots}\}}$ , i.e., a sequence $\textbf{p}=(p_k)_0^\infty$ with $p_k\geqslant0$ and $\sum_{k=0}^\infty p_k=1$ . The mean $\mu(\textbf{p})$ and variance $\sigma^2(\textbf{p})$ of a probability sequence are defined to be the mean and variance of a random variable with distribution $\textbf{p}$ , i.e.,

(6) \begin{align} \mu(\textbf{p})\,:\!=\,\sum_{k=0}^\infty kp_k, \qquad \sigma^2(\textbf{p})\,:\!=\,\sum_{k=0}^\infty k^2p_k-\mu(\textbf{p})^2.\end{align}

We use $\overset{\textrm{d}}{\longrightarrow}$ and $\overset{\textrm{p}}{\longrightarrow}$ for convergence in distribution and in probability, respectively, for a sequence of random variables in some metric space; see e.g. [Reference Billingsley11]. Also, $\overset{\textrm{d}}{=}$ means equality in distribution.

The total variation distance between two random variables X and Y in a metric space (or rather between their distributions) is defined by

(7) \begin{align} d_{\textrm{TV}}(X,Y)\,:\!=\,\sup_A\bigl\lvert{{\mathbb P{}}(X\in A)-{\mathbb P{}}(Y\in A) }\bigr\rvert,\end{align}

taking the supremum over all measurable subsets A. It is well known that in a complete separable metric space, there exists a coupling of X and Y (i.e., a joint distribution with the given marginal distributions) such that

(8) \begin{align} {\mathbb P{}}(X\neq Y) = d_{\textrm{TV}}(X,Y),\end{align}

and this is best possible.
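For distributions on a countable set, the supremum in (7) reduces to half the $\ell^1$-distance between the probability mass functions. A small sketch (illustrative names):

```python
def tv_distance(p, q):
    """Total variation distance between two distributions on a countable
    set, given as dicts mapping outcomes to probabilities; equals
    (1/2) * sum_x |p(x) - q(x)|, which coincides with the supremum
    over events in (7): the optimal event is {x : p(x) > q(x)}."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)
```

For example, for Bernoulli(1/2) versus Bernoulli(3/4) the distance is 1/4, achieved by the event $\{1\}$.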

$O_{\textrm{p}}(1)$ denotes a sequence of real-valued random variables $(X_n)_n$ that is stochastically bounded, i.e., for every $\varepsilon>0$ , there exists C such that ${\mathbb P{}}(|X_n|>C)\leqslant\varepsilon$ . This is equivalent to $(X_n)_n$ being tight. For tightness in more general metric spaces, see e.g. [Reference Billingsley11].

Let f be a real-valued function defined on an interval $I\subseteq\mathbb R$ . The modulus of continuity of f is the function $[0,\infty)\to[0,\infty]$ defined by, for $\delta\geqslant0$ ,

(9) \begin{align} \omega(\delta;\,f)=\omega(\delta;\,f;\,I)\,:\!=\,\sup\bigl({|f(s)-f(t)|\,:\, s,t\in I, |s-t|\leqslant\delta}\bigr).\end{align}

If x and y are real numbers, $x\land y\,:\!=\,\min{\{{x,y}\}}$ and $x\lor y\,:\!=\,\max{\{{x,y}\}}$ .

C denotes unspecified constants that may vary from one occurrence to the next. They may depend on parameters such as weight sequences or offspring distributions, but they never depend on the size of the trees. Sometimes we write, e.g., $C_r$ to emphasize that the constant depends on the parameter r.

Unspecified limits are as ${{n\to\infty}}$ .

2.1. Profiles

For two vertices v and w in a tree T, let $\textsf{d}(v,w)=\textsf{d}_T(v,w)$ denote the distance between v and w, i.e., the number of edges in the unique path joining v and w. In particular, in a rooted tree, $\textsf{d}(v,o)$ is the distance to the root, often called the depth (or sometimes height) of v.

For a rooted tree T, the height of T is $H(T)\,:\!=\,\max_{v\in T}\textsf{d}(v,o)$ , i.e., the maximum depth. The diameter of a tree T, rooted or not, is $\textrm{diam}(T)\,:\!=\,\max_{v,w\in T}\textsf{d}(v,w)$ .

The profile of a rooted tree is the function $L=L_T\,:\,\mathbb R\to[0,\infty)$ defined by

(10) \begin{align} L(i)\,:\!=\,\bigl\lvert{{\{{v\in T\,:\,\textsf{d}(v,o)=i}\}}}\bigr\rvert,\end{align}

for integers i, extended by linear interpolation to all real x. (We are mainly interested in $x\geqslant0$ , and trivially $L(x)=0$ for $x\leqslant -1$ , but it will be convenient to allow negative x.) The linear interpolation can be written as

(11) \begin{align} L(x)\,:\!=\,\sum_{i=0}^\infty L(i)\tau(x-i),\end{align}

where $\tau$ is the triangular function $\tau(x)\,:\!=\,(1-|x|)\lor0$ .

Note that $L(0)=1$ , and that L is a continuous function with compact support $[\!-1,H(T)+1]$ . Furthermore, since $\int\tau(x)\,\textrm{d} x=1$ ,

(12) \begin{align} \int_{-1}^\infty L(x)\,\textrm{d} x = \sum_{i=0}^\infty L(i) = |T|,\end{align}

where we integrate from $-1$ because of the linear interpolation; we have $ \int_0^\infty L(x)\,\textrm{d} x = \sum_{i=0}^\infty L(i) -\frac12 = |T|-\frac12$ .
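The interpolation (11) and the integral identities are easy to reproduce numerically (helper names below are ours). Since L is piecewise linear with breakpoints at the integers, its integral over $[-1,\infty)$ equals $\sum_i L(i)$ exactly.

```python
def profile_from_depths(depths):
    """Profile L(i) = #{v : depth(v) = i}, as a list indexed by i = 0..H(T)."""
    height = max(depths)
    L = [0] * (height + 1)
    for d in depths:
        L[d] += 1
    return L

def L_at(L, x):
    """Linear interpolation (11): L(x) = sum_i L(i) * tau(x - i),
    with tau(x) = max(1 - |x|, 0)."""
    return sum(Li * max(1.0 - abs(x - i), 0.0) for i, Li in enumerate(L))

L = profile_from_depths([0, 1, 1, 2])  # a 4-vertex tree: root, 2 children, 1 grandchild
total = sum(L)            # = integral of L over [-1, oo) = |T|, cf. (12)
half_line = sum(L) - 0.5  # = integral over [0, oo) = |T| - 1/2
```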

The width of T is defined as

(13) \begin{align} W(T)\,:\!=\,\max_{i\in\mathbb N_0} L(i)=\max_{x\in\mathbb R} L(x).\end{align}

Similarly, in any tree T, rooted or unrooted, we define the distance profile as the function $\Lambda=\Lambda_T\,:\,\mathbb R\to[0,\infty)$ defined by

(14) \begin{align}\Lambda(i)\,:\!=\,\bigl\lvert{{\{{(v,w)\in T\,:\,\textsf{d}(v,w)=i}\}}}\bigr\rvert\end{align}

for integers i, again extended by linear interpolation to all real x. For definiteness, we count ordered pairs in (14), and we include the case $v=w$ , so $\Lambda(0)=|T|$ . $\Lambda$ is a continuous function with compact support $[\!-1,\textrm{diam}(T)+1]$ . We have, similarly to (12),

(15) \begin{align} \int_{-1}^\infty \Lambda(t)\,\textrm{d} t = \sum_{i=0}^\infty \Lambda(i)= |T|^2.\end{align}

If T is an unrooted tree, let T(v) denote the rooted tree obtained by declaring v as the root, for $v\in T$ . Then, as a consequence of (10) and (14),

(16) \begin{align} \Lambda_T(x)=\sum_{v\in T} L_{T(v)}(x).\end{align}

Hence, the distance profile can be regarded as the sum (or, after normalisation, average) of the profiles for all possible choices of a root.

Remark 2.1. Alternatively, one might extend L to a step function by $L(x)\,:\!=\,L(\lfloor x\rfloor)$ , and similarly for $\Lambda$ . The asymptotic results are the same (and equivalent by simple arguments), with L and $\Lambda$ elements of $D[0,\infty]$ instead of $C[0,\infty]$ and limit theorems taking place in that space. This has some advantages, but for technical reasons (e.g. simpler tightness criteria), we prefer to work in the space $C[0,\infty]$ of continuous functions.

Remark 2.2. Another version of $\Lambda$ would count unordered pairs of distinct vertices. The two versions are obviously equivalent and our results hold for the alternative version too, mutatis mutandis.

2.2. Brownian excursion and its local time

The standard Brownian excursion $\textbf{e}$ is a random continuous function ${[0,1]}\to[0,\infty)$ such that $\textbf{e}(0)=\textbf{e}(1)=0$ and $\textbf{e}(t)>0$ for $t\in(0,1)$ . Informally, $\textbf{e}$ can be regarded as a Brownian motion conditioned on these properties; this can be formalised as an appropriate limit [Reference Durrett, Iglehart and Miller23]. There are several other, quite different but equivalent, definitions, see e.g. [Reference Revuz and Yor49, XII.(2.13)], [Reference Blumenthal13, Example II.1d)], and [Reference Drmota17, Section 4.1.3].

The local time $L_\textbf{e}$ of $\textbf{e}$ is a continuous random function that is defined (almost surely) as a functional of $\textbf{e}$ satisfying

(17) \begin{align} \int_0^\infty f(x)L_\textbf{e}(x)\,\textrm{d} x = \int_0^1 f\bigl({\textbf{e}(t)}\bigr)\,\textrm{d} t,\end{align}

for every bounded (or non-negative) measurable function $f\,:\,[0,\infty)\to\mathbb R$ . In particular, (17) yields, for any $x\geqslant0$ and $\varepsilon>0$ ,

(18) \begin{align} \int_x^{x+\varepsilon}L_\textbf{e}(y)\,\textrm{d} y = \int_0^1 \boldsymbol1\{{{\textbf{e}(t)\in[x,x+\varepsilon)}}\}\,\textrm{d} t\end{align}

and thus

(19) \begin{align}L_\textbf{e}(x) = \lim_{\varepsilon\to0} \frac{1}{\varepsilon}\int_0^1 \boldsymbol1\{{{\textbf{e}(t)\in[x,x+\varepsilon)}}\}\,\textrm{d} t.\end{align}

Hence, $L_\textbf{e}(x)$ can be regarded as the occupation density of $\textbf{e}$ at the value x.

Note that the existence (almost surely) of a function $L_\textbf{e}(x)$ satisfying (17)–(19) is far from obvious; this is part of the general theory of local times for semimartingales, see e.g. [Reference Revuz and Yor49, Chapter VI]. The existence also follows from (some of) the proofs of Theorem 1.2.
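The occupation-density formula (19) lends itself to simulation. The sketch below (an approximation of ours, not part of the paper's argument) replaces $\textbf{e}$ by a $\pm1$ random-walk bridge rotated at its first minimum (Vervaat's transform) and estimates the time spent in $[x,x+\varepsilon)$.

```python
import random

def discrete_excursion(m, rng):
    """A +-1 walk bridge of length 2m, rotated at its first minimum
    (Vervaat transform): a nonnegative lattice path that, rescaled by
    sqrt(2m), approximates a Brownian excursion."""
    steps = [1] * m + [-1] * m
    rng.shuffle(steps)
    path, s = [0], 0
    for step in steps:
        s += step
        path.append(s)
    k = path.index(min(path))
    return [path[(k + i) % (2 * m)] - path[k] for i in range(2 * m + 1)]

def occupation_density(path, x, eps):
    """Estimate L_e(x) as in (19): fraction of time the rescaled path
    spends in [x, x + eps), divided by eps."""
    n = len(path) - 1
    scale = n ** 0.5
    hits = sum(1 for v in path[:-1] if x <= v / scale < x + eps)
    return hits / n / eps

rng = random.Random(7)
exc = discrete_excursion(200, rng)
```

By construction the estimated densities integrate to 1 over the range of the path, mirroring (17) with $f\equiv1$.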

2.3. Brownian continuum random tree

Given a continuous function $g\,:\,{[0,1]}\to[0,\infty)$ with $g(0)=g(1)=0$ , one can define a pseudometric $\textsf{d}$ on ${[0,1]}$ by

(20) \begin{align}\textsf{d}(s,t)= \textsf{d}(s,t;\,g)\,:\!=\,g(s)+g(t)-2\min_{u\in [s,t]}g(u),\qquad 0\leqslant s\leqslant t\leqslant 1.\end{align}

By identifying points with distance 0, we obtain a metric space $T_g$ , which is a compact real tree, see e.g. Le Gall [Reference Le Gall39, Theorem 2.2]. We denote the natural quotient map ${[0,1]}\to T_g$ by $\rho_g$ , and let $T_g$ be rooted at $\rho_{g}(0)$ . The Brownian continuum random tree of Aldous [Reference Aldous2–Reference Aldous4] can be defined as the random real tree $T_{\textbf{e}}$ constructed in this way from the random Brownian excursion $\textbf{e}$ , see [Reference Le Gall39, Section 2.3]. (Aldous [Reference Aldous2–Reference Aldous4] used another definition, and another scaling corresponding to $T_{2\textbf{e}}$ .) Note that using (20), (4) can be written

(21) \begin{align}\int_0^\infty f(x) \Lambda_\textbf{e}(x) \,\textrm{d} x=\iint_{s,t\in{[0,1]}} f\bigl({\textsf{d}(s,t;\,\textbf{e})}\bigr)\,\textrm{d} s\,\textrm{d} t,\end{align}

for any bounded (or non-negative) measurable function f. This means that $\Lambda_\textbf{e}$ is the density of the distance in $T_\textbf{e}$ between two random points, chosen independently with the probability measure on $T_\textbf{e}$ induced by the uniform measure on ${[0,1]}$ . This justifies the equivalence of the two definitions of $\Lambda_\textbf{e}$ stated in Theorem 1.4. As for the local time $L_\textbf{e}$ , the existence (almost surely) of a continuous function $\Lambda_\textbf{e}$ satisfying (21) is far from trivial; this will be a consequence of our proof.
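A direct grid implementation of the pseudometric (20) is a one-liner (illustrative; g is represented by samples on a uniform grid of [0, 1]):

```python
def tree_distance(g, s, t):
    """d(s, t; g) = g(s) + g(t) - 2 * min_{u in [s, t]} g(u), for a
    function g given by samples g[0..n] on a uniform grid of [0, 1]."""
    n = len(g) - 1
    i, j = sorted((round(s * n), round(t * n)))
    return g[i] + g[j] - 2 * min(g[i:j + 1])

# Tent function g(t) = min(t, 1 - t): the coded tree T_g is a line segment.
n = 1000
g = [min(i, n - i) / n for i in range(n + 1)]
```

For the tent function, s = 0.25 and t = 0.75 code the same point of $T_g$ (distance 0), illustrating the identification of points at distance 0.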

An important feature of the Brownian continuum random tree is its re-rooting invariance property. More precisely, fix $s \in [0,1]$ and set

(22) \begin{align} \textbf{e}^{[s]}(t)= \begin{cases}\textsf{d}(s,s+t;\,\textbf{e}), & 0\leqslant t < 1-s\\\textsf{d}(s,s+t-1;\,\textbf{e}), &1-s \leqslant t \leqslant 1. \end{cases}\end{align}

Note that $\textbf{e}^{[s]}$ is a random continuous function ${[0,1]}\to[0,\infty)$ such that $\textbf{e}^{[s]}(0)=\textbf{e}^{[s]}(1)=0$ and a.s. $\textbf{e}^{[s]}(t)>0$ for $t\in(0,1)$ ; clearly, $\textbf{e}^{[0]} = \textbf{e}$ . By Duquesne and Le Gall [Reference Duquesne and Le Gall21, Lemma 2.2], the compact real tree $T_{\textbf{e}^{[s]}}$ is then canonically identified with the tree $T_{\textbf{e}}$ re-rooted at the vertex $\rho_{\textbf{e}}(s)$ . Marckert and Mokkadem [Reference Marckert and Mokkadem40, Proposition 4.9] (see also Duquesne and Le Gall [Reference Duquesne and Le Gall22, Theorem 2.2]) have shown that for every fixed $s \in [0,1]$ ,

(23) \begin{align} \textbf{e}^{[s]}\overset{\textrm{d}}{=} \textbf{e} \quad \text{and} \quad T_{\textbf{e}^{[s]}} =T_{\textbf{e}},\end{align}

in distribution. Thus, the re-rooted tree $T_{\textbf{e}^{[s]}}$ is a version of the Brownian continuum random tree.

Remark 2.3. Indeed, Aldous [Reference Aldous3, (20)] already observed that the Brownian continuum random tree is invariant under uniform re-rooting and that this property corresponds to the invariance of the law of the Brownian excursion under the path transformation (22) if $s = U$ is uniformly random on [0, 1] and independent of $\textbf{e}$ .
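On a grid, the path transformation (22) takes only a few lines (an illustrative sketch of ours, expressing $\textbf{e}^{[s]}(t)$ through the pseudometric (20)):

```python
def reroot(g, s):
    """The path e^[s] of (22): e^[s](t) = d(s, s + t mod 1; e), computed
    on samples g[0..n] of the excursion on a uniform grid of [0, 1]."""
    n = len(g) - 1
    k = round(s * n)
    def dist(a, b):
        a, b = min(a, b), max(a, b)
        return g[a] + g[b] - 2 * min(g[a:b + 1])
    return [dist(k, (k + i) % n) for i in range(n)] + [0.0]

n = 8
g = [min(i, n - i) / n for i in range(n + 1)]  # tent function: T_g is a segment
```

Re-rooting at s = 0 returns the original path, matching $\textbf{e}^{[0]}=\textbf{e}$; any re-rooted path again starts and ends at 0 and stays nonnegative.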

As a consequence of the previous re-rooting invariance property, we deduce the following explicit expression for the continuous function $\Lambda_\textbf{e}$ . For every fixed $s \in [0,1]$ , let $L_{\textbf{e}^{[s]}}$ denote the local time of $\textbf{e}^{[s]}$ , which is well defined thanks to (23). It follows from (20), (21) and (22) that

\begin{align*}\int_0^\infty f(x) \Lambda_\textbf{e}(x) \,\textrm{d} x=\int_0^1\int_0^1 f\bigl({\textbf{e}^{[s]}(t)}\bigr)\,\textrm{d} s \,\textrm{d} t=\int_{0}^{1} \int_0^\infty f(x) L_{\textbf{e}^{[s]}}(x) \,\textrm{d} x\,\textrm{d} s,\end{align*}

for any bounded (or non-negative) measurable function f, or equivalently,

(24) \begin{align} \Lambda_\textbf{e}(x) =\int_{0}^{1} L_{\textbf{e}^{[s]}}(x) \,\textrm{d} s,\quad x \geqslant 0.\end{align}

In accordance with the discrete analogue of $\Lambda_\textbf{e}$ in (16), the identity (24) shows that $\Lambda_\textbf{e}$ can be regarded as the average of the profiles for all possible choices of a root in $T_{\textbf{e}}$ .

2.4. The function spaces $C[0,\infty)$ and $C[0,\infty]$

Recall that $C[0,\infty)$ is the space of continuous functions on $[0,\infty)$ and that convergence in $C[0,\infty)$ means uniform convergence on each compact interval [0, b]. As said in Remark 1.3, we prefer to state our results in the space $C[0,\infty]$ of functions that are continuous on the extended half-line $[0,\infty]$ . These are the functions f in $C[0,\infty)$ such that the limit $f(\infty)\,:\!=\,\lim_{{{x\to\infty}}}f(x)$ exists; in our case, this is a triviality since all random functions on both sides of (1) and (3), and in similar later statements, have compact support, and thus trivially extend continuously to $[0,\infty]$ with $f(\infty)=0$ . The important difference between $C[0,\infty)$ and $C[0,\infty]$ is instead the topology: convergence in $C[0,\infty]$ means uniform convergence on $[0,\infty]$ (or, equivalently, on $[0,\infty)$ ).

In particular, the supremum is a continuous functional on $C[0,\infty]$ , but not on $C[0,\infty)$ (where it also may be infinite). Thus, convergence of the width (after rescaling) follows immediately from Theorem 1.2 (see also the proof of Theorem 9.2); if this were stated with convergence in $C[0,\infty)$ , a small extra argument would be needed (more or less equivalent to showing convergence in $C[0,\infty]$ ).

In the random setting, the difference between the two topologies can be expressed as in the following lemma. See also [Reference Janson27, Proposition 2.4], for the similar case of the spaces $D[0,\infty]$ and $D[0,\infty)$ .

Lemma 2.4. Let $X_n(t)$ and X(t) be random functions in $C[0,\infty]$ . Then $X_n(t)\overset{{d}}{\longrightarrow} X(t)$ in $C[0,\infty]$ if and only if

  (i) $X_n(t)\overset{\textrm{d}}{\longrightarrow} X(t)$ in $C[0,\infty)$ , and

  (ii) $X_n(t)\overset{\textrm{p}}{\longrightarrow} X_n(\infty)$ , as ${{t\to\infty}}$ , uniformly in n; i.e., for every $\varepsilon>0$ ,

    (25) \begin{align} \sup_{n\geqslant1}{\mathbb P{}}\bigl({\sup_{u<t<\infty}|X_n(t)-X_n(\infty)|>\varepsilon}\bigr)\to 0,\qquad \text{as }u\to\infty. \end{align}

Proof. A straightforward exercise.

In our cases, such as (1) and (3), the condition (25) is easily verified from convergence (or just tightness) of the normalised height $H_n/\sqrt n$ , which can be used to bound the support of the left-hand sides. Hence, convergence in $C[0,\infty)$ and $C[0,\infty]$ is essentially equivalent.

Note that $C[0,\infty]$ is a separable Banach space, and that it is isomorphic to $C{[0,1]}$ by a change of variable; thus, general results for $C{[0,1]}$ may be transferred. Note also that all functions that we are interested in lie in the (Banach) subspace $C_0[0,\infty)\,:\!=\,{\{{f\in C[0,\infty]\,:\,f(\infty)=0}\}}$ . Hence, the results may just as well be stated as convergence in distribution in $C_0[0,\infty)$ .

3. Rooted simply generated trees

As a background, we recall first the definition of random rooted simply generated trees and the almost equivalent conditioned Galton–Watson trees, see e.g. [Reference Drmota17] or [Reference Janson30] for further details, and [Reference Athreya and Ney6] for more on Galton–Watson processes.

3.1. Simply generated trees

Let $\boldsymbol{\phi}=(\phi_k)_0^\infty$ be a given sequence of non-negative weights, with $\phi_0>0$ and $\phi_k>0$ for at least one $k\geqslant2$ . (The latter conditions exclude only trivial cases when the random tree ${\mathcal T}^{{\boldsymbol{\phi}}}_n$ defined below either does not exist or is a deterministic path.)

For any rooted ordered tree $T\in\mathfrak{T}$ , define the weight of T as

(26) \begin{align} \phi(T)\,:\!=\,\prod_{v\in T} \phi_{d^+(v)}.\end{align}

For a given n, we define the random rooted simply generated tree ${\mathcal T}^{{\boldsymbol{\phi}}}_n$ of order n as a random tree in $\mathfrak{T}_n$ with probability proportional to its weight; i.e.,

(27) \begin{align} {\mathbb P{}}({\mathcal T}^{{\boldsymbol{\phi}}}_n=T) \,:\!=\,\frac{\phi(T)}{\sum_{T'\in\mathfrak{T}_n}\phi(T')},\qquad T\in\mathfrak{T}_n.\end{align}

We consider only n such that at least one tree T with $\phi(T)>0$ exists.

A weight sequence $\boldsymbol{\phi}'=(\phi^{\prime}_k)_0^\infty$ with

(28) \begin{align} \phi^{\prime}_k=a b^k\phi_k,\qquad k\geqslant0,\end{align}

for some $a,b>0$ is said to be equivalent to $(\phi_k)_0^\infty$ . It is easily seen that equivalent weight sequences define the same random tree ${\mathcal T}^{{\boldsymbol{\phi}}}_n$ , i.e., ${\mathcal T}^{{\boldsymbol{\phi}'}}_n\overset{\textrm{d}}{=}{\mathcal T}^{{\boldsymbol{\phi}}}_n$ .
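For small n, the distribution (27) can be tabulated exactly, and the invariance under the equivalence (28) checked directly. The brute-force sketch below (ours; the specific weight sequences are just examples) represents each ordered tree by its DFS outdegree sequence.

```python
from itertools import product
from math import factorial

def ordered_trees(n):
    """All DFS outdegree sequences of ordered rooted trees on n vertices."""
    trees = []
    for degs in product(range(n), repeat=n):
        if sum(degs) != n - 1:
            continue
        s, ok = 0, True
        for d in degs[:-1]:
            s += d - 1
            if s < 0:
                ok = False
                break
        if ok:
            trees.append(degs)
    return trees

def sgt_distribution(n, phi):
    """P(T_n = T) proportional to phi(T) = prod_v phi_{d+(v)}, cf. (26)-(27)."""
    weights = {T: 1.0 for T in ordered_trees(n)}
    for T in weights:
        for d in T:
            weights[T] *= phi(d)
    Z = sum(weights.values())
    return {T: w / Z for T, w in weights.items()}

phi1 = lambda k: 1.0 / factorial(k)             # Poisson-type weights
phi2 = lambda k: 2.0 * 0.7 ** k / factorial(k)  # equivalent, (28) with a=2, b=0.7
d1 = sgt_distribution(4, phi1)
d2 = sgt_distribution(4, phi2)
```

The factor $a b^k$ in (28) contributes $a^{|T|} b^{\,|T|-1}$ to every weight $\phi(T)$ with $|T|=n$ fixed, so it cancels in (27); the test confirms this numerically.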

3.2. Galton–Watson trees

Given a probability sequence $\textbf{p}=(p_k)_0^\infty$ , the Galton–Watson tree ${\mathcal T}^{{\textbf{p}}}$ is the family tree of a Galton–Watson process with offspring distribution $\textbf{p}$ . This means that ${\mathcal T}^{{\textbf{p}}}$ is a random ordered rooted tree, which is constructed as follows: Start with a root and give it a random number of children with the distribution $\textbf{p}$ . Give each new vertex a random number of children with the same distribution and independent of previous choices, and continue as long as there are new vertices. In general, ${\mathcal T}^{{\textbf{p}}}$ may be an infinite tree. We will mainly consider the critical case when the expectation $\mu(\textbf{p})=1$ , and then it is well known that ${\mathcal T}^{{\textbf{p}}}$ is finite a.s. (We exclude the trivial case when $p_1=1$ .)

The size $|{\mathcal T}^{{\textbf{p}}}|$ of a Galton–Watson tree is random. Given $n\geqslant1$ , the conditioned Galton–Watson tree ${\mathcal T}^{{\textbf{p}}}_n$ is defined as ${\mathcal T}^{{\textbf{p}}}$ conditioned on $|{\mathcal T}^{{\textbf{p}}}|=n$ . (We consider only n such that this happens with positive probability.) Consequently, ${\mathcal T}^{{\textbf{p}}}_n$ is a random ordered rooted tree of size n. It is easily seen that a conditioned Galton–Watson tree ${\mathcal T}^{{\textbf{p}}}_n$ equals (in distribution) the simply generated tree with the weight sequence $\textbf{p}$ , and thus we use the same notation ${\mathcal T}^{{\textbf{p}}}_n$ for both.

A (conditioned) Galton–Watson tree is critical if its offspring distribution $\textbf{p}$ has mean $\mu(\textbf{p})=1$ . We will in the present paper mainly consider conditioned Galton–Watson trees that are critical and have a finite variance $\sigma^2(\textbf{p})$ ; this condition is rather mild, as is seen in the following subsection.
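Sampling ${\mathcal T}^{{\textbf{p}}}$ and conditioning by rejection is straightforward, though inefficient for large n; the sketch below (helper names are ours; Geometric(1/2) offspring, a critical law) caps the growth, since a critical tree is finite a.s. but has infinite expected size. Conditioning on $|{\mathcal T}^{{\textbf{p}}}|=n$ then amounts to keeping only samples of size n.

```python
import random

def gw_tree_size(offspring, cap, rng):
    """Total progeny of a Galton-Watson tree, grown breadth-first;
    returns None once the size exceeds cap."""
    alive, size = 1, 1
    while alive:
        k = offspring(rng)
        alive += k - 1
        size += k
        if size > cap:
            return None
    return size

def geometric_half(rng):
    # p_k = 2^(-k-1): critical (mean 1), variance 2.
    k = 0
    while rng.random() < 0.5:
        k += 1
    return k

rng = random.Random(3)
sizes = [gw_tree_size(geometric_half, 10_000, rng) for _ in range(4000)]
frac_single = sum(1 for s in sizes if s == 1) / len(sizes)  # P(|T|=1) = p_0 = 1/2
frac_three = sum(1 for s in sizes if s == 3) / len(sizes)   # P(|T|=3) = 1/16 here
```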

3.3. Equivalence

A random simply generated tree ${\mathcal T}^{{\boldsymbol{\phi}}}_n$ with a weight sequence $(\phi_k)_0^\infty$ that is a probability sequence equals, as just said, the conditioned Galton–Watson tree ${\mathcal T}^{{\boldsymbol{\phi}}}_n$ . Much more generally, any weight sequence $\boldsymbol{\phi}$ such that its generating function

(29) \begin{align} \Phi(z)\,:\!=\,\sum_{k=0}^\infty \phi_k z^k\end{align}

has positive radius of convergence is equivalent to some probability weight sequence; hence, ${\mathcal T}^{{\boldsymbol{\phi}}}_n$ can be regarded as a conditioned Galton–Watson tree in this case too. Moreover, in many cases, we can choose an equivalent probability weight sequence that has mean 1 and finite variance; see e.g. [30, Section 4].
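
The reason equivalent weight sequences give the same random tree is that the out-degrees of a tree with n vertices sum to $n-1$, so the tilting (28) multiplies the weight of every such tree by the same constant $a^nb^{n-1}$. A minimal numeric check, with an arbitrary made-up weight sequence:

```python
from math import prod

def phi_weight(outdegs, phi):
    """Weight (26) of an ordered rooted tree, given its out-degree sequence."""
    return prod(phi[d] for d in outdegs)

phi = [1.0, 2.0, 0.5, 0.25]                          # arbitrary weight sequence
a, b = 3.0, 0.7
phi_t = [a * b ** k * f for k, f in enumerate(phi)]  # equivalent sequence, cf. (28)

star = [3, 0, 0, 0]   # root with three leaf children (4 vertices)
path = [1, 1, 1, 0]   # path rooted at one end (4 vertices)
ratio   = phi_weight(star, phi) / phi_weight(path, phi)
ratio_t = phi_weight(star, phi_t) / phi_weight(path, phi_t)
```

The ratio of weights of any two trees of the same size is unchanged, so the induced probability distributions on trees of size n coincide.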

We will use this to switch between simply generated trees and conditioned Galton–Watson trees without comment in the sequel; we will use the name that seems best and most natural in different contexts.

3.4. Simply generated forests

The Galton–Watson process above starts with one individual. More generally, we may start with m individuals, which we may assume are numbered $1,\dots,m$ ; this yields a Galton–Watson forest consisting of m independent copies of ${\mathcal T}^{{\textbf{p}}}$ . Conditioning on the total size being $n\geqslant m$ , we obtain a conditioned Galton–Watson forest ${\mathcal T}^{{\textbf{p}}}_{n,m}$ , which thus consists of m random trees ${\mathcal T}^{{\textbf{p}}}_{n,m;\,1}, \dots,{\mathcal T}^{{\textbf{p}}}_{n,m;\,m}$ with $|{\mathcal T}^{{\textbf{p}}}_{n,m;\,1}|+\dots+|{\mathcal T}^{{\textbf{p}}}_{n,m;\,{m}}|=n$ . Conditioned on the sizes $|{\mathcal T}^{{\textbf{p}}}_{n,m;\,1}|,\dots,|{\mathcal T}^{{\textbf{p}}}_{n,m;\,m}|$ , the trees are independent conditioned Galton–Watson trees with the given sizes.

More generally, given any weight sequence $\boldsymbol{\phi}$ , a random simply generated forest ${\mathcal T}^{{\boldsymbol{\phi}}}_{n,m}$ is a random forest with m rooted trees and total size n, chosen with probability proportional to its weight, defined as in (26). Again, conditioned on their sizes, the trees are independent simply generated trees.

Thus, the distribution of the sizes of the trees in the forest is of major importance. Consider the Galton–Watson case, and let ${\mathcal T}^{{\textbf{p}}}_{n,m;\,(1)}, \dots,{\mathcal T}^{{\textbf{p}}}_{n,m;\,(m)}$ denote the trees arranged in decreasing order: $|{\mathcal T}^{{\textbf{p}}}_{n,m;\,(1)}|\geqslant\dots\geqslant|{\mathcal T}^{{\textbf{p}}}_{n,m;\,(m)}|$ . (Ties are resolved randomly, say; this applies tacitly to all similar situations.) We have the following general result, which was proved by Marzouk [38, Lemma 5.7(iii)] under an additional regularity hypothesis.

Lemma 3.1. Let $m\geqslant1$ be fixed, and consider the conditioned Galton–Watson forest ${\mathcal T}^{{{\textbf{p}}}}_{n,m}$ as ${{n\to\infty}}$ . Then

(30) \begin{align}|{\mathcal T}^{{{\textbf{p}}}}_{n,m;\,(i)}|= \begin{cases}n-O_{\textrm{p}}(1), & i=1\\O_{\textrm{p}}(1), & i=2,\dots,m. \end{cases}\end{align}

Proof. Suppose first that $\mu(\textbf{p})=1$ . Suppose also, for simplicity, that $p_m>0$ . Consider the conditioned Galton–Watson tree ${\mathcal T}^{{\textbf{p}}}_{n+1}$ and condition on the event $\mathcal E_m$ that the root degree is m. Conditioned on $\mathcal E_m$ , there are m branches, which form a conditioned Galton–Watson forest ${\mathcal T}^{{\textbf{p}}}_{n,m}$ .

As ${{n\to\infty}}$ , the random tree ${\mathcal T}^{{\textbf{p}}}_{n+1}$ converges in distribution to an infinite random tree $\widehat{{\mathcal T}}$ (the size-biased Galton–Watson tree defined by Kesten [35]), see [30, Theorem 7.1]. Moreover, ${\mathbb P{}}(\mathcal E_m)\to mp_m>0$ by [30, Theorem 7.10]. Hence, ${\mathcal T}^{{\textbf{p}}}_{n+1}$ conditioned on $\mathcal E_m$ converges in distribution to $\widehat{{\mathcal T}}$ conditioned on $\mathcal E_m$ . In other words, the forest ${\mathcal T}^{{\textbf{p}}}_{n,m}$ converges in distribution to the branches of $\widehat{{\mathcal T}}$ conditioned on having exactly m branches; denote this random limit by $({\mathcal T}_1,\dots,{\mathcal T}_m)$ . By the Skorohod coupling theorem [32, Theorem 4.30], we may (for simplicity) assume that this convergence is a.s. The convergence here is in the local topology used in [30], which means [30, Lemma 6.2] that for any fixed $\ell\geqslant1$ , if $T^{[\ell]}$ denotes the tree T truncated at height $\ell$ , then a.s., for sufficiently large n, ${\mathcal T}^{{\textbf{p},[\ell]}}_{n,m;\,i}={\mathcal T}_i^{[\ell]}$ .

The infinite tree $\widehat{{\mathcal T}}$ has exactly one infinite branch; thus, there exists a (random) $j\leqslant m$ such that ${\mathcal T}_j$ is infinite but ${\mathcal T}_i$ is finite for $i\neq j$ . Truncating the trees at an $\ell$ chosen larger than the heights $H({\mathcal T}_i)$ for all $i\neq j$ , we see that for large n, ${\mathcal T}^{{\textbf{p}}}_{n,m;\,i}={\mathcal T}_i$ . Thus, $|{\mathcal T}^{{\textbf{p}}}_{n,m;\,i}|=O(1)$ for $i\neq j$ , and necessarily the remaining branch ${\mathcal T}^{{\textbf{p}}}_{n,m;\,j}$ has size $n-O(1)$ . Hence, for large enough n, ${\mathcal T}^{{\textbf{p}}}_{n,m;\,(1)}={\mathcal T}^{{\textbf{p}}}_{n,m;\,j}$ .

Consequently, ${\mathcal T}^{{\textbf{p}}}_{n,m;\,(2)},\dots,{\mathcal T}^{{\textbf{p}}}_{n,m;\,(m)}$ converge a.s., and thus in distribution, to the $m-1$ finite branches of $\widehat{{\mathcal T}}$ , arranged in decreasing order and conditioned on $\mathcal E_m$ . In particular, their sizes converge in distribution and are thus $O_{\textrm{p}}(1)$ .

We assumed for simplicity $p_m>0$ . In general, we may select a rooted tree T with $\geqslant m$ leaves, such that $p_{d^+(v)}>0$ for every $v\in T$ . Fix m leaves $v_1,\dots,v_m$ in T, and consider the conditioned Galton–Watson tree ${\mathcal T}^{{\textbf{p}}}_{n+|T|-m}$ conditioned on the event $\mathcal E_T$ that it consists of T with subtrees added at $v_1,\dots,v_m$ . Then these subtrees form a conditioned Galton–Watson forest ${\mathcal T}^{{\textbf{p}}}_{n,m}$ , and we can argue as above, conditioning $\widehat{{\mathcal T}}$ on $\mathcal E_T$ .

This completes the proof when $\mu(\textbf{p})=1$. If $\mu(\textbf{p})>1$, there always exists an equivalent probability weight sequence $\tilde{\textbf{p}}$ with $\mu(\tilde{\textbf{p}})=1$, and the result follows. If $\mu(\textbf{p})<1$, the same may hold, and if it does not hold, then there is a similar infinite limit tree $\widehat{{\mathcal T}}$ [30, Theorem 7.1]; in this case, $\widehat{{\mathcal T}}$ has one vertex of infinite degree, but the proof above holds with minor modifications.

Remark 3.2. The proof shows that in the case $\mu(\textbf{p})=1$ , the small trees ${\mathcal T}^{{\textbf{p}}}_{n,m;\,(2)},\dots,{\mathcal T}^{{\textbf{p}}}_{n,m;\,(m)}$ in the forest converge in distribution to $m-1$ independent copies of the unconditioned Galton–Watson tree ${\mathcal T}^{{\textbf{p}}}$ , arranged in decreasing order. More generally, the small trees converge in distribution to independent Galton–Watson trees for a probability distribution equivalent to $\textbf{p}$ . (This too was shown in [38, Lemma 5.7(iii)] under stronger assumptions.)

Remark 3.3. In the standard case $\mu(\textbf{p})=1$ , $\sigma^2(\textbf{p})<\infty$ , it is also easy to show Lemma 3.1 using the fact ${\mathbb P{}}(|{\mathcal T}^{{\textbf{p}}}|=n)\sim c n^{-3/2}$ , for some $c>0$ , which is a well-known consequence of the local limit theorem, cf. (36)–(37).
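
Dwass's formula can be verified numerically: with $k=1$ it reads ${\mathbb P{}}(|{\mathcal T}^{{\textbf{p}}}|=n)=\tfrac1n{\mathbb P{}}(S_n=n-1)$, and both sides can be computed exactly for small n by direct convolution. The offspring law below is a made-up critical example; the left-hand side is computed independently by conditioning on the root degree.

```python
def convolve(u, v, N):
    """Truncated convolution: exact for indices 0..N of nonnegative sequences."""
    out = [0.0] * (N + 1)
    for i, ui in enumerate(u):
        if ui:
            for j, vj in enumerate(v):
                if i + j <= N:
                    out[i + j] += ui * vj
    return out

N = 15
p = [0.25, 0.5, 0.25]              # a critical offspring law: mean 1
# q[n] = P(|T^p| = n), by conditioning on the root degree k:
# the k branches form a forest whose total size must be n - 1
q = [0.0] * (N + 1)
for n in range(1, N + 1):
    total = p[0] if n == 1 else 0.0
    branch = [1.0] + [0.0] * N     # size distribution of an empty forest
    for k in range(1, len(p)):
        branch = convolve(branch, q, N)
        total += p[k] * branch[n - 1]
    q[n] = total

# Dwass's formula with k = 1: P(|T^p| = n) = (1/n) P(S_n = n - 1)
dwass = []
for n in range(1, N + 1):
    s = [1.0] + [0.0] * N
    for _ in range(n):
        s = convolve(s, p, N)
    dwass.append(s[n - 1] / n)
```

The two computations agree to machine precision; combining the right-hand side with the local limit theorem gives the $cn^{-3/2}$ asymptotics of the remark.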

Problem 3.4. A simply generated forest ${\mathcal T}^{{\boldsymbol{\phi}}}_{n,m}$ is covered by Lemma 3.1 when the generating function (29) has positive radius of convergence, since then it is equivalent to a conditioned Galton–Watson forest. We conjecture that Lemma 3.1 holds for simply generated forests also in the case when the generating function has radius of convergence 0, but we leave this as an open problem.

4. Modified simply generated trees

One frequently meets random trees where the root has a special distribution, see, for example [38, 41]. Thus, let $\boldsymbol{\phi}$ and $\boldsymbol{\phi}^0$ be two weight sequences, where $\boldsymbol{\phi}$ is as above, and $\boldsymbol{\phi}^0=(\phi^0_k)_0^\infty$ satisfies $\phi^0_k\geqslant0$ , with strict inequality for at least one k. We modify (26) and now define the weight of a tree $T\in\mathfrak{T}$ as

(31) \begin{align} \phi^*(T)\,:\!=\,\phi^0_{d^+(o)}\prod_{v\neq o} \phi_{d^+(v)}.\end{align}

The random modified simply generated tree ${\mathcal T}^{{\boldsymbol{\phi}},{\boldsymbol{\phi}^0}}_n$ is defined as in (27), using the modified weight (31).

We say that a pair $(\boldsymbol{\phi}',\boldsymbol{\phi}^{0\prime})$ is equivalent to $(\boldsymbol{\phi},\boldsymbol{\phi}^0)$ if (28) holds and similarly

(32) \begin{align} \phi^{0\prime}_k=a_0 b^k\phi^0_k,\qquad k\geqslant0.\end{align}

It is important that the same b is used in (28) and (32), while a and $a_0$ may be different. It is easy to see that equivalent pairs of weight sequences define the same modified simply generated tree.

Similarly, given two probability sequences $\textbf{p}=(p_k)_0^\infty$ and $\textbf{p}^0=({p}^0_k)_0^\infty$ , we define the modified Galton–Watson tree ${\mathcal T}^{{\textbf{p},\textbf{p}^0}}$ and conditioned modified Galton–Watson tree ${\mathcal T}^{{\textbf{p},\textbf{p}^0}}_n$ as in Section 3.2, but now giving children to the root with distribution $\textbf{p}^0$ , and to everyone else with distribution $\textbf{p}$ .

Again, as indicated by our notation, we have an equality: the conditioned modified Galton–Watson tree ${\mathcal T}^{{\textbf{p},\textbf{p}^0}}_n$ equals the modified simply generated tree with weight sequences $\textbf{p}$ and $\textbf{p}^0$ . Conversely, if two weight sequences $\boldsymbol{\phi}$ and $\boldsymbol{\phi}^0$ both have positive radius of convergence, then it is possible (by taking b small enough) to find equivalent weight sequences $\boldsymbol{\phi}'$ and $\boldsymbol{\phi}^{0\prime}$ that are probability sequences, and thus ${\mathcal T}^{{\boldsymbol{\phi},\boldsymbol{\phi}^0}}_n= {\mathcal T}^{{\boldsymbol{\phi}',\boldsymbol{\phi}^{0\prime}}}_n$ can be interpreted as a conditioned modified Galton–Watson tree.

Lemma 4.1. Consider a modified simply generated tree ${\mathcal T}^{{\boldsymbol{\phi},\boldsymbol{\phi}^0}}_n$ and denote its branches by $T_1,\dots,T_{d(o)}$ .

  (i) Conditioned on the root degree d(o), the branches form a simply generated forest ${\mathcal T}^{{\boldsymbol{\phi}}}_{n-1,d(o)}$ .

  (ii) Conditioned on the root degree d(o) and the sizes $|T_i|$ of the branches, the branches are independent simply generated trees ${\mathcal T}^{{\boldsymbol{\phi}}}_{|T_i|}$ .

Proof. Exercise.

Note that Lemma 4.1 applies also to the simply generated tree ${\mathcal T}^{{\boldsymbol{\phi}}}_n$ (by taking $\boldsymbol{\phi}^0=\boldsymbol{\phi}$ ). Thus, conditioned on the root degree the branches have the same distribution for ${\mathcal T}^{{\boldsymbol{\phi}}}_n$ and ${\mathcal T}^{{\boldsymbol{\phi},\boldsymbol{\phi}^0}}_n$ . Hence, the distribution of the root degree is of central importance. The following lemma is partly shown by [38, Proposition 5.6] in greater generality (the stable case), although we add the estimate (35).

Lemma 4.2. (mainly Kortchemski and Marzouk [38]) Suppose that ${\textbf{p}}$ is a probability sequence with mean $\mu({\textbf{p}})=1$ and variance $\sigma^2({\textbf{p}})\in(0,\infty)$ and that ${\textbf{p}}^0$ is a probability sequence with finite mean $\mu({\textbf{p}}^0)$ . Then the root degree d(o) in the conditioned modified Galton–Watson tree ${\mathcal T}_n^{{{\textbf{p}},{\textbf{p}}^0}}$ converges in distribution to a random variable $\widetilde D$ with distribution

(33) \begin{align} {\mathbb P{}}(\widetilde D=k)= \frac{k{p}^0_k}{\sum_{j=1}^\infty j{p}^0_j}= \frac{k{p}^0_k}{\mu({\textbf{p}}^0)}.\end{align}

In other words, for every fixed $k\geqslant0$ ,

(34) \begin{align} {\mathbb P{}}(d(o)=k)\to{\mathbb P{}}(\widetilde D=k),\qquad{\rm{as}}\,n \to \infty.\end{align}

Moreover, if n is large enough, we have uniformly

(35) \begin{align} {\mathbb P{}}(d(o)=k)\leqslant 2{\mathbb P{}}(\widetilde D=k),\qquad k\geqslant1.\end{align}

As a consequence, ${\mathbb E{}}\widetilde D<\infty$ if and only if $\sigma^2(\textbf{p}^0)<\infty$ .

Proof. This uses well-known standard arguments, but we give a full proof for completeness; see also [38]. Let D be the root degree in the modified Galton–Watson tree ${\mathcal T}^{{\textbf{p},\textbf{p}^0}}$ . If $D=k$ , then the rest of the tree consists of k independent copies of ${\mathcal T}^{{\textbf{p}}}$ . Thus, the conditional probability ${\mathbb P{}}\bigl({|{\mathcal T}^{{\textbf{p},\textbf{p}^0}}|=n\mid D=k}\bigr)$ equals the probability that a Galton–Watson process started with k individuals has in total $n-1$ individuals; hence, by a formula of Dwass [24], see e.g. [30, Section 15] and the further references there,

(36) \begin{align} {\mathbb P{}}\bigl({|{\mathcal T}^{{\textbf{p},\textbf{p}^0}}|=n\mid D=k}\bigr)=\frac{k}{n-1}{\mathbb P{}}\bigl({S_{n-1}=n-k-1}\bigr),\end{align}

where $S_{n-1}$ denotes the sum of $n-1$ independent random variables with distribution $\textbf{p}$ .

Suppose for simplicity that the distribution $\textbf{p}$ is aperiodic, i.e., not supported on any subgroup $d\mathbb N$ with $d\geqslant2$. (The general case follows similarly using standard modifications.) It then follows by the local limit theorem, see e.g. [36, Theorem 1.4.2] or [46, Theorem VII.1], that, as ${{n\to\infty}}$ ,

(37) \begin{align} {\mathbb P{}}\bigl({S_{n-1}=n-k-1}\bigr)= \frac{1}{\sqrt{2\pi\sigma^2 n}}\bigl({e^{-k^2/(2n\sigma^2)}+o(1)}\bigr),\end{align}

uniformly in k. Consequently, combining (36) and (37) with ${\mathbb P{}}(D=k)={p}^0_k$ ,

(38) \begin{align} {\mathbb P{}}\bigl({|{\mathcal T}^{{\textbf{p},\textbf{p}^0}}|=n \text{ and } D=k}\bigr)&=\frac{k{p}^0_k}{n-1}{\mathbb P{}}\bigl({S_{n-1}=n-k-1}\bigr)\notag\\&= c\frac{k{p}^0_k}{n^{3/2}}\bigl({e^{-k^2/(2n\sigma^2)}+o(1)}\bigr),\end{align}

uniformly in k, where $c\,:\!=\,(2\pi\sigma^2)^{-1/2}$ .

Summing over k we find as ${{n\to\infty}}$ , using $\sum k{p}^0_k<\infty$ and monotone convergence,

(39) \begin{align} {\mathbb P{}}\bigl({|{\mathcal T}^{{\textbf{p},\textbf{p}^0}}|=n}\bigr)&=\frac{c}{n^{3/2}}\Bigl({\sum_{k=1}^\infty {k{p}^0_k}e^{-k^2/(2n\sigma^2)}+o(1)}\Bigr)\sim \frac{c}{n^{3/2}}\sum_{k=1}^\infty {k{p}^0_k}.\end{align}

Thus, combining (38) and (39), for any fixed $k\geqslant1$ , as ${{n\to\infty}}$ ,

(40) \begin{align} {\mathbb P{}}\bigl({D=k\mid|{\mathcal T}^{{\textbf{p},\textbf{p}^0}}|=n}\bigr)&=\frac{ {\mathbb P{}}\bigl({|{\mathcal T}^{{\textbf{p},\textbf{p}^0}}|=n\text{ and } D=k}\bigr)}{ {\mathbb P{}}\bigl({|{\mathcal T}^{{\textbf{p},\textbf{p}^0}}|=n}\bigr)}\notag\\&\to \frac{k{p}^0_k}{\sum_{j=1}^\infty j{p}^0_j}.\end{align}

The limits on the right-hand side sum to 1, and thus the result (33) follows.

Moreover, (38) and (39) also yield

(41) \begin{align} {\mathbb P{}}\bigl({D=k\mid|{\mathcal T}^{{\textbf{p},\textbf{p}^0}}|=n}\bigr)&=\frac{ {\mathbb P{}}\bigl({|{\mathcal T}^{{\textbf{p},\textbf{p}^0}}|=n\text{ and } D=k}\bigr)}{ {\mathbb P{}}\bigl({|{\mathcal T}^{{\textbf{p},\textbf{p}^0}}|=n}\bigr)}\notag\\&\leqslant \frac{k{p}^0_k}{\sum_{j=1}^\infty j{p}^0_j}\bigl({1+o(1)}\bigr),\end{align}

uniformly in k. In particular, (35) holds for all k if n is large enough.

Finally, by (33), ${\mathbb E{}}\widetilde D=\sum k{\mathbb P{}}(\widetilde D=k)<\infty$ if and only if $\sum_k k^2{p}^0_k<\infty$ .
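
Since (38) is exact, the convergence (40) and the bound (35) can be checked numerically for a moderate n by computing the law of $S_{n-1}$ by direct convolution. The offspring laws below are made-up examples (the same $\textbf{p}$ as in Remark 3.3, and an arbitrary $\textbf{p}^0$).

```python
def add_summand(dist, p):
    """Law of X + Y from the law dist of X and the law p of Y, truncated."""
    out = [0.0] * len(dist)
    for i, di in enumerate(dist):
        if di:
            for k, pk in enumerate(p):
                if i + k < len(out):
                    out[i + k] += di * pk
    return out

p  = [0.25, 0.5, 0.25]             # offspring law: mean 1, variance 1/2
p0 = [0.1, 0.3, 0.4, 0.2]          # root offspring law
n = 400

sn = [1.0] + [0.0] * n             # law of S_{n-1} = sum of n-1 copies of p
for _ in range(n - 1):
    sn = add_summand(sn, p)

# exact joint law (38): P(|T| = n and D = k) = p0_k (k/(n-1)) P(S_{n-1} = n-k-1)
joint = {k: p0[k] * k / (n - 1) * sn[n - k - 1] for k in range(1, len(p0))}
Z = sum(joint.values())
cond  = {k: v / Z for k, v in joint.items()}              # P(D = k | |T| = n)
mu0   = sum(k * w for k, w in enumerate(p0))
limit = {k: k * p0[k] / mu0 for k in range(1, len(p0))}   # the limit (33)
```

Already at $n=400$ the conditional law of the root degree is within about 1% of the size-biased limit (33), and well inside the factor-2 bound (35).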

It follows from Lemma 4.2 that the tree is overwhelmingly dominated by one branch. (Again, this was shown by Kortchemski and Marzouk [38] in greater generality.)

Lemma 4.3. (essentially Kortchemski and Marzouk [38, Proposition 5.6]) Suppose that ${\textbf{p}}$ is a probability sequence with mean $\mu({\textbf{p}})=1$ and variance $\sigma^2({\textbf{p}})\in(0,\infty)$ and that ${\textbf{p}}^0$ is a probability sequence with finite mean $\mu({\textbf{p}}^0)$ . Let ${\mathcal T}_{n,(1)},\dots,{\mathcal T}_{n,({d(o)})}$ be the branches of ${\mathcal T}^{{{\textbf{p}},{\textbf{p}}^0}}_n$ arranged in decreasing order. Then

(42) \begin{align} |{\mathcal T}_{n,(1)}|=n-O_{{p}}(1).\end{align}

Proof. Let $D_n=d(o)$ and condition on $D_n=m$ , for a fixed m. Then, by Lemmas 4.1 and 3.1, (42) holds. In other words, for every $\varepsilon>0$ , there exists $C_{m,\varepsilon}$ such that

(43) \begin{align} {\mathbb P{}}\bigl({n-|{\mathcal T}_{n,(1)}|>C_{m,\varepsilon}\mid D_n=m}\bigr) <\varepsilon.\end{align}

By Lemma 4.2, $D_n\overset{\textrm{d}}{\longrightarrow}\widetilde D$ , and thus $(D_n)_n$ is tight, i.e., $O_{\textrm{p}}(1)$ , so there exists M such that ${\mathbb P{}}(D_n>M)<\varepsilon$ for all n. Consequently, if $C_{\varepsilon}\,:\!=\,\max_{m\leqslant M}C_{m,\varepsilon}$ ,

(44) \begin{align} {\mathbb P{}}\bigl({n-|{\mathcal T}_{n,(1)}|>C_{\varepsilon}}\bigr)&={\mathbb E{}} {\mathbb P{}}\bigl({n-|{\mathcal T}_{n,(1)}|>C_{\varepsilon}\mid D_n}\bigr)\leqslant \varepsilon + {\mathbb P{}}(D_n>M)\notag\\&\leqslant 2\varepsilon,\end{align}

which completes the proof.

Lemmas 4.1–4.3 make it possible to transfer many results that are known for simply generated trees (conditioned Galton–Watson trees) to the modified version. See Section 9 for a few examples.

Problem 4.4. Does Lemma 4.2 (and thus Lemma 4.3) hold without assuming finite variance $\sigma^2(\textbf{p})<\infty$, i.e., assuming only $\mu(\textbf{p})=1$ and $\mu(\textbf{p}^0)<\infty$? As said above, (33) was shown also when the variance is infinite by Kortchemski and Marzouk [38], but they then assume that $\textbf{p}$ is in the domain of attraction of a stable distribution. What happens without this regularity assumption?

Remark 4.5. We assume in Lemma 4.2 that $\mu(\textbf{p}^0)<\infty$ . We claim that if $\mu(\textbf{p}^0)=\infty$ , then $d(o)\overset{\textrm{p}}{\longrightarrow}\infty$ ; in other words, ${\mathbb P{}}(d(o)=k)\to0$ for every fixed k, which can be seen as the natural interpretation of (33)–(34) in this case.

We sketch a proof. First, from (38) and Fatou’s lemma (for sums),

(45) \begin{align} \liminf_{{{n\to\infty}}}n^{3/2} {\mathbb P{}}\bigl({|{\mathcal T}^{{\textbf{p},\textbf{p}^0}}|=n}\bigr)&\geqslant\sum_{k=0}^\infty \liminf_{{{n\to\infty}}}n^{3/2} {\mathbb P{}}\bigl({|{\mathcal T}^{{\textbf{p},\textbf{p}^0}}|=n \text{ and } D=k}\bigr)\notag\\&=\sum_{k=0}^\infty c k{p}^0_k=\infty.\end{align}

In other words, $ n^{3/2} {\mathbb P{}}\bigl({|{\mathcal T}^{{\textbf{p},\textbf{p}^0}}|=n}\bigr)\to \infty$ . Then, (38) and (45) yield, for any fixed $k\geqslant0$ ,

(46) \begin{align} {\mathbb P{}}\bigl({D =k \mid|{\mathcal T}^{{\textbf{p},\textbf{p}^0}}|=n}\bigr)&=\frac{ {\mathbb P{}}\bigl({|{\mathcal T}^{{\textbf{p},\textbf{p}^0}}|=n\text{ and } D=k}\bigr)}{ {\mathbb P{}}\bigl({|{\mathcal T}^{{\textbf{p},\textbf{p}^0}}|=n}\bigr)} \to 0.\end{align}

This proves our claim.

5. Unrooted simply generated trees

We make definitions corresponding to Section 3 for unrooted trees. In this case, we consider labelled trees, so that we can distinguish the vertices. (This is not needed for ordered trees, since their vertices can be labelled canonically as described in Section 2.) Of course, we may then ignore the labelling when we want.

Let $(w_k)_0^\infty$ be a given sequence of non-negative weights, with $w_1>0$ and $w_k>0$ for some $k\geqslant3$ . (The weight $w_0$ is needed only for the trivial case $n=1$ and might be ignored. We may take $w_0=0$ without essential loss of generality.)

For any labelled tree $T\in\mathfrak{L}_n$ , now define the weight of T as

(47) \begin{align} w(T)\,:\!=\,\prod_{v\in T} w_{d(v)}.\end{align}

Given $n\geqslant1$ , we define the random unrooted simply generated tree ${\mathcal T}^\circ_{n}={\mathcal T}^{\textbf{w},\circ}_{n}$ as a labelled tree in $\mathfrak{L}_n$ , chosen randomly with probability proportional to the weight (47). (We consider only n such that at least one tree of positive weight exists.)
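
For small n, the distribution of ${\mathcal T}^{\textbf{w},\circ}_{n}$ can be tabulated exhaustively, since labelled trees on $\{0,\dots,n-1\}$ are encoded by Prüfer sequences and the degree of v is one plus the number of occurrences of v in the sequence. A toy sketch with an arbitrary made-up weight sequence:

```python
from itertools import product
from math import prod

def degrees_from_prufer(seq, n):
    """Degree sequence of the labelled tree on {0,...,n-1} with Prufer code seq:
    deg(v) = 1 + number of occurrences of v in seq."""
    deg = [1] * n
    for v in seq:
        deg[v] += 1
    return deg

n = 4
w = [0.0, 1.0, 0.5, 2.0]           # arbitrary weights w_0..w_3; w_0 unused here
weight = {}
for seq in product(range(n), repeat=n - 2):
    deg = degrees_from_prufer(seq, n)
    weight[seq] = prod(w[d] for d in deg)   # the weight (47)
Z = sum(weight.values())
probs = {seq: wt / Z for seq, wt in weight.items()}
```

By Cayley's formula there are $n^{n-2}=16$ labelled trees on 4 vertices, and normalising the weights gives the law of ${\mathcal T}^{\textbf{w},\circ}_{4}$.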

Remark 5.1. Just as in the rooted case, replacing the weight sequence by an equivalent one (still defined as in (28)) gives the same random tree ${\mathcal T}^\circ_{n}$ .

In the following sections, we give three (related but different) relations with the more well-known rooted simply generated trees.

6. Mark a vertex

Let ${\mathcal T}^{\textbf{w},\circ}_{n}$ be a random unrooted simply generated tree as in Section 5 and mark one of its n vertices, chosen uniformly at random. Regard the marked vertex as a root and denote the resulting rooted tree by ${\mathcal T}^{\textbf{w},\bullet}_n$ .

Thus, ${\mathcal T}^{\textbf{w},\bullet}_n$ is a random unordered rooted tree, where an unordered rooted tree T has probability proportional to its weight given by (47).

We make ${\mathcal T}^{\textbf{w},\bullet}_n$ ordered by ordering the children of each vertex uniformly at random; denote the resulting random labelled ordered rooted tree by ${\mathcal T}^{\textbf{w},*}_n$ . Since each vertex v has $d^+(v)!$ possible orders, the probability that ${\mathcal T}^{\textbf{w},*}_n$ equals a given ordered tree T is proportional to the weight

(48) \begin{align} w^{*}(T) \,:\!=\,\frac{w(T)}{\prod_{v\in T}d^+(v)!}=\frac{w_{d(o)}}{d(o)!}\prod_{v\neq o}\frac{w_{d(v)}}{d^+(v)!}=\frac{w_{d(o)}}{d(o)!}\prod_{v\neq o}\frac{w_{d^+(v)+1}}{d^+(v)!}.\end{align}

The tree ${\mathcal T}^{\textbf{w},*}_n$ is constructed as a labelled tree, but each ordered rooted tree $T\in\mathfrak{T}_n$ has the same number $n!$ of labellings, and they have the same weight (48) and thus appear with the same probability. Hence, we may forget the labelling and regard ${\mathcal T}^{\textbf{w},*}_n$ as a random ordered tree in $\mathfrak{T}_n$ , with probabilities proportional to the weight (48). This is the same as the weight (31) with

(49) \begin{align}\phi_{k}&\,:\!=\,\frac{w_{k+1}}{k!},\qquad k\geqslant0,\end{align}
(50) \begin{align} \phi^0_{k}&\,:\!=\,\frac{w_{k}}{k!},\qquad k\geqslant0.\end{align}

Thus, ${\mathcal T}^{\textbf{w},*}_n={\mathcal T}^{{\boldsymbol{\phi},\boldsymbol{\phi}^0}}_n$ , the modified simply generated tree defined in Section 4.

We recover ${\mathcal T}^{\textbf{w},\circ}_{n}$ from ${\mathcal T}^{\textbf{w},*}_n={\mathcal T}^{{\boldsymbol{\phi},\boldsymbol{\phi}^0}}_n$ by ignoring the root (and adding a uniformly random labelling). This thus yields a method to construct ${\mathcal T}^{\textbf{w},\circ}_{n}$ .

Example 6.1. Marckert and Panholzer [41] studied uniformly random non-crossing trees of a given size n and found that if they are regarded as ordered rooted trees, then they have the same distribution as the conditioned modified Galton–Watson tree ${\mathcal T}^{{\textbf{p},\textbf{p}^0}}_n$ , where

(51) \begin{align} p_k&=4(k+1)3^{-k-2},\qquad k\geqslant0,\end{align}
(52) \begin{align} {p}^0_k&=2\cdot3^{-k},\qquad k\geqslant1.\end{align}

These weights are equivalent to $\phi_k=k+1$ and $\phi^0_k=1$ , which are given by (49)–(50) with $w_k=k!$ . We may thus reformulate the result by Marckert and Panholzer [41] as: A uniformly random non-crossing tree is the same as a random unrooted simply generated tree with weights $w_k=k!$ .
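
One can check numerically that (51)–(52) are probability sequences, that $\textbf{p}$ is critical, and that they are equivalent to $\phi_k=k+1$, $\phi^0_k=1$ with the same $b=1/3$ in (28) and (32):

```python
K = 300                                  # truncation level; the tails are geometric
p  = [4 * (k + 1) * 3.0 ** (-k - 2) for k in range(K)]
p0 = [0.0] + [2 * 3.0 ** (-k) for k in range(1, K)]

total_p  = sum(p)                        # should be 1
mean_p   = sum(k * pk for k, pk in enumerate(p))   # should be 1 (criticality)
total_p0 = sum(p0)                       # should be 1

# equivalence with phi_k = k + 1 and phi0_k = 1: the same b in (28) and (32)
phi = [k + 1 for k in range(K)]
b_from_p  = {round(p[k + 1] * phi[k] / (p[k] * phi[k + 1]), 10) for k in range(30)}
b_from_p0 = {round(p0[k + 1] / p0[k], 10) for k in range(1, 30)}
```

Both ratio computations return the single value $b=1/3$ (so $a=4/9$ and $a_0=2$).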

More generally, Kortchemski and Marzouk [38] studied simply generated non-crossing trees, which are random non-crossing trees with probability proportional to the weight (47) for some weight sequence $\textbf{w}=(w_k)_k$ , and showed that they (under a condition) are equivalent to conditioned modified Galton–Watson trees. In fact, for any weight sequence $\textbf{w}$ , the proofs in [41, in particular Lemma 2] and [38, in particular Proposition 2.1] show that the simply generated non-crossing tree, regarded as an ordered rooted tree, is the same as ${\mathcal T}^{{\boldsymbol{\phi},\boldsymbol{\phi}^0}}_n$ with

(53) \begin{align} \phi_k&\,:\!=\,(k+1)w_{k+1},\quad k\geqslant0,\end{align}
(54) \begin{align} \phi^0_k&\,:\!=\,w_k,\quad k\geqslant0.\end{align}

Thus, comparing with (49)–(50), it follows that the simply generated non-crossing tree is an unrooted simply generated tree, with weight sequence $\overline{w}_k\,:\!=\,w_kk!$ .

Note that non-crossing trees are naturally defined as unrooted trees. A root is introduced in [38, 41] for the analysis, which as said above makes the trees conditioned modified Galton–Watson trees (or, more generally, modified simply generated trees). This is precisely the marking of an unrooted simply generated tree discussed in the present section.

Remark 6.2. The constructions in this and the next section lead to simple relations of generating functions (not used here); see [10, Appendix B].

7. Mark an edge

In the random unrooted tree ${\mathcal T}^{\textbf{w},\circ}_{n}$ , mark a (uniformly) random edge, and give it a direction; i.e., mark two adjacent vertices, say $o_+$ and $o_-$ . Since each tree ${\mathcal T}^{\textbf{w},\circ}_{n}$ has the same number $n-1$ of edges, the resulting marked tree ${\mathcal T}^{\textbf{w},\bullet\bullet}_n$ is distributed over all labelled trees on [n] with a marked and directed edge with probabilities proportional to the weight (47).

Now ignore the marked edge, and regard the tree ${\mathcal T}^{\textbf{w},\bullet\bullet}_n$ as two rooted trees ${\mathcal T}_{n,1}$ and ${\mathcal T}_{n,2}$ with roots $o_+$ and $o_-$ , respectively. Furthermore, order randomly the children of each vertex in each of these rooted trees; this makes ${\mathcal T}_{n,1}$ and ${\mathcal T}_{n,2}$ a pair of ordered trees, and each pair $(T_+,T_-)$ of labelled ordered rooted trees with $|T_+|+|T_-|=n$ and the labels $1,\dots,n$ appears with probability proportional to

(55) \begin{align} \widehat{w}(T_+,T_-)&\,:\!=\,\widehat{w}(T_+)\widehat{w}(T_-),\end{align}

where, for a rooted tree T,

(56) \begin{align}\widehat{w}(T)&\,:\!=\,\prod_{v\in T}\frac{w_{d^+(v)+1}}{d^+(v)!}.\end{align}

Using again the definition (49), we have by (26),

(57) \begin{align} \widehat{w}(T)=\prod_{v\in T}\phi_{d^+(v)}=\phi(T).\end{align}

Moreover, since we now have ordered rooted trees, the vertices are distinguishable, and each pair $(T_+,T_-)$ of ordered trees with $|T_+|+|T_-|=n$ has the same number $n!$ of labellings. Hence, we may ignore the labelling and regard the marked tree ${\mathcal T}^{\textbf{w},\bullet\bullet}_n$ as a pair of ordered trees $({\mathcal T}_{n,1},{\mathcal T}_{n,2})$ with $|{\mathcal T}_{n,1}|+|{\mathcal T}_{n,2}|=n$ and probabilities proportional to the weight given by (55) and (57). This means that ${\mathcal T}_{n,1}$ and ${\mathcal T}_{n,2}$ , conditioned on their sizes, are two independent random rooted simply generated trees, with the weight sequence $\boldsymbol{\phi}$ given by (49); in other words, $({\mathcal T}_{n,1},{\mathcal T}_{n,2})$ is a simply generated forest ${\mathcal T}^{{\boldsymbol{\phi}}}_{n,2}$ .

Consequently, we can construct the random unrooted simply generated tree ${\mathcal T}^{\textbf{w},\circ}_{n}$ by taking two random rooted simply generated trees ${\mathcal T}_{n,1}$ and ${\mathcal T}_{n,2}$ constructed in this way, with the right distribution of their sizes, and joining their roots.

Note that $|{\mathcal T}_{n,1}|=n-|{\mathcal T}_{n,2}|$ is random, with a distribution given by the construction above. More precisely, if $a_n$ is the total weight (57) summed over all ordered trees of order n, then

(58) \begin{align} {\mathbb P{}}\bigl({|{\mathcal T}_{n,1}|=m}\bigr)=\frac{a_ma_{n-m}}{\sum_{k=1}^{n-1}a_ka_{n-k}}.\end{align}
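
The split distribution (58) is easy to evaluate once the total weights $a_n$ are computed by the root-degree recursion. As an illustration (our own choice, not from the paper), take $\phi_k\equiv1$, so every ordered tree has weight 1 and $a_n$ is the Catalan number $C_{n-1}$:

```python
N = 12
a = [0.0] * (N + 1)
a[1] = 1.0                          # the one-vertex tree; phi_0 = 1
for n in range(2, N + 1):
    total = 0.0
    forest = [0.0] * (N + 1)        # total phi-weight of forests of k trees, by size
    forest[0] = 1.0                 # the empty forest
    for k in range(1, n):           # root degree k; each branch has >= 1 vertex
        nxt = [0.0] * (N + 1)
        for i, fi in enumerate(forest):
            if fi:
                for j in range(1, N + 1 - i):
                    nxt[i + j] += fi * a[j]
        forest = nxt
        total += forest[n - 1]      # phi_k = 1 for every k
    a[n] = total

n = 8
Z = sum(a[m] * a[n - m] for m in range(1, n))
split = [a[m] * a[n - m] / Z for m in range(1, n)]   # the law (58) of |T_{n,1}|
```

The law (58) is symmetric in m and $n-m$, reflecting the symmetry between ${\mathcal T}_{n,1}$ and ${\mathcal T}_{n,2}$, and it is concentrated near the two endpoints, in line with Lemma 7.3 below.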

Remark 7.1. If $\phi_2>0$ , we can alternatively describe the result as follows: Use the weight sequence $(\phi_k)_0^\infty$ given by (49) and take a random rooted simply generated tree ${\mathcal T}^{{\boldsymbol{\phi}}}_{n+1}$ of order $n+1$ , conditioned on the root degree $=2$ ; remove the root and join its two neighbours to each other (this is the marked edge).

If $\phi_2=0$ , we can instead take any $k>2$ with $\phi_k>0$ , and take a random rooted simply generated tree ${\mathcal T}^{{\boldsymbol{\phi}}}_{n+k-1}$ of order $n+k-1$ , conditioned on the event that the root degree is k, and the $k-2$ last children of the root are leaves; we remove the root and these children, and join the first two children.

Remark 7.2. Suppose that the weight sequence $(\phi_j)_0^\infty$ given by (49) satisfies $\sum_{j=1}^\infty\phi_j=1$ , so $(\phi_j)_0^\infty$ is a probability distribution. (Note that a large class of examples can be expressed with such weights, see Remark 5.1.) Then the construction above can be stated as follows:

Consider a Galton–Watson process with offspring distribution $(\phi_k)_0^\infty$ , starting with two individuals, and conditioned on the total progeny being n. This creates a forest with two trees; join their roots to obtain ${\mathcal T}^{\textbf{w},\circ}_{n}$ .

Note that it follows from the arguments above that if we mark the edge joining the two roots, then the marked edge will be distributed uniformly over all edges in the tree ${\mathcal T}^{\textbf{w},\circ}_{n}$ .
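
The construction in Remark 7.2 is straightforward to simulate. The sketch below encodes each tree by its depth-first out-degree sequence and uses naive rejection sampling, so it is only practical for small n; the critical geometric offspring law is an arbitrary made-up choice.

```python
import random

def gw_outdegrees(p, rng, cap):
    """Out-degree sequence of T^p in depth-first order;
    None if the tree would have more than cap vertices."""
    out, pending = [], 1
    while pending:
        d = rng.choices(range(len(p)), weights=p)[0]
        out.append(d)
        pending += d - 1
        if len(out) > cap:
            return None
    return out

def unrooted_tree(p, n, rng):
    """Remark 7.2: two independent T^p conditioned on total progeny n;
    joining the two roots gives T^{w,o}_n."""
    while True:
        t1 = gw_outdegrees(p, rng, n)
        t2 = gw_outdegrees(p, rng, n)
        if t1 is not None and t2 is not None and len(t1) + len(t2) == n:
            return t1, t2

rng = random.Random(7)
p = [2.0 ** -(k + 1) for k in range(30)]   # critical geometric offspring law
t1, t2 = unrooted_tree(p, 12, rng)
```

Joining the roots of the two sampled trees by an edge yields the unrooted tree, and that edge is uniform over all its edges.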

In the construction above, ${\mathcal T}_{n,1}$ and ${\mathcal T}_{n,2}$ have the same distribution by symmetry. Now define ${\mathcal T}_{n,+}$ as the largest and ${\mathcal T}_{n,-}$ as the smallest of ${\mathcal T}_{n,1}$ and ${\mathcal T}_{n,2}$ . The next lemma shows that (at least under a weak condition), ${\mathcal T}_{n,-}$ is stochastically bounded, so ${\mathcal T}^{\textbf{w},\circ}_{n}$ is dominated by the subtree ${\mathcal T}_{n,+}$ .

Lemma 7.3. Suppose that the generating function $\Phi(z)$ in (29) has a positive radius of convergence. Then, as ${{n\to\infty}}$ , ${\mathcal T}_{n,-}\overset{{d}}{\longrightarrow}{\mathcal T}^{{{\textbf{p}}}}$ , an unconditioned Galton–Watson tree with offspring distribution ${\textbf{p}}$ equivalent to $(\phi_k)_0^\infty$ . In particular, $|{\mathcal T}_{n,-}|=O_{{p}}(1)$ , and thus $|{\mathcal T}_{n,+}|=n-O_{{p}}(1)$ .

Proof. This is a special case of Lemma 3.1, see also Remark 3.2.

As remarked in Problem 3.4, we conjecture that $|{\mathcal T}_{n,-}|=O_{\textrm{p}}(1)$ also when the generating function has radius of convergence 0, but we leave this as an open problem.

8. Mark a leaf

This differs from the preceding two sections in that we do not recover the distribution of ${\mathcal T}^{\textbf{w},\circ}_{n}$ exactly, but only asymptotically.

Let $N_0(T)$ be the number of leaves in an unrooted tree T. Let $\widehat{{\mathcal T}}^{\textbf{w},\circ}_{n}$ be a random unrooted labelled tree with probability proportional to $N_0(T)w(T)$ ; in other words, we bias the distribution of ${\mathcal T}^{\textbf{w},\circ}_{n}$ by the factor $N_0(T)$ .

Let $\widehat{{\mathcal T}}^{\textbf{w},\bullet}_{n}$ be the random rooted tree obtained by marking a uniformly random leaf in $\widehat{{\mathcal T}}^{\textbf{w},\circ}_{n}$ , regarding the marked leaf as the root. Then, any pair (T, o) with $T\in\mathfrak{L}_n$ and $o\in T$ with $d(o)=1$ will be chosen as $\widehat{{\mathcal T}}^{\textbf{w},\bullet}_{n}$ and its root with probability proportional to the weight (47). We order the children of each vertex at random as in Sections 6 and 7, and obtain an ordered rooted tree $\widehat{{\mathcal T}}^{\textbf{w},*}_{n}$ . Then each tree with root degree 1 appears with probability proportional to (48).

Consequently, if we ignore the labelling, $\widehat{{\mathcal T}}^{\textbf{w},*}_{n}={\mathcal T}^{{\boldsymbol{\phi}},{\boldsymbol{\phi}^0}}_n$ , where $\boldsymbol{\phi}$ is given by (49), and $\phi^0_k\,:\!=\,\delta_{k1}$ (with a Kronecker delta). Equivalently, $\widehat{{\mathcal T}}^{\textbf{w},*}_{n}$ has a root of degree 1, and its single branch is a ${\mathcal T}^{{\boldsymbol{\phi}}}_{n-1}$ .

Conversely, we may obtain $\widehat{{\mathcal T}}^{\textbf{w},\circ}_{n}$ from ${\mathcal T}^{{\boldsymbol{\phi}}}_{n-1}$ by adding a new root under the old one and then adding a random labelling.

Remark 8.1. The construction above can also be regarded as a variant of the one in Section 7, where we mark an edge such that one endpoint is a leaf. Then, in the notation there, $|{\mathcal T}_{n,-}|=1$ and ${\mathcal T}_{n,+}={\mathcal T}^{{\boldsymbol{\phi}}}_{n-1}$ .

As said above, $\widehat{{\mathcal T}}^{\textbf{w},\circ}_{n}$ does not have the distribution of ${\mathcal T}^{\textbf{w},\circ}_{n}$ , but it is not far from it.

Lemma 8.2. Let ${\textbf{w}}$ be any weight sequence. As ${{n\to\infty}}$ , the total variation distance $d_{{TV}}(\widehat{{\mathcal T}}^{{\textbf{w}},\circ}_{n},{\mathcal T}^{{\textbf{w}},\circ}_{n})\to0$ . In other words, there exists a coupling such that ${\mathbb P{}}\bigl({\widehat{{\mathcal T}}^{{\textbf{w}},\circ}_{n}\neq{\mathcal T}^{{\textbf{w}},\circ}_{n}}\bigr)\to0$ .

Proof. We may construct ${\mathcal T}^{\textbf{w},\circ}_{n}$ as in Section 7 from two random ordered trees ${\mathcal T}_{n,+}$ and ${\mathcal T}_{n,-}$ , where $|{\mathcal T}_{n,-}|=O_{\textrm{p}}(1)$ . Conditioned on $|{\mathcal T}_{n,-}|=\ell$ , for any fixed $\ell$ , we have ${\mathcal T}_{n,+}\overset{\textrm{d}}{=} {\mathcal T}^{{\boldsymbol{\phi}}}_{n-\ell}$ , where $\boldsymbol{\phi}$ is given by (49). Thus, by [Reference Janson30, Theorem 7.11] (see comments there for earlier references to special cases, and to further results), as ${{n\to\infty}}$ , conditioned on $|{\mathcal T}_{n,-}|=\ell$ for any fixed $\ell$ ,

(59) \begin{align} \frac{N_0({\mathcal T}_{n,+})}{n}\overset{\textrm{d}}{=}\frac{N_0({\mathcal T}^{{\boldsymbol{\phi}}}_{n-\ell})}{n}\overset{\textrm{p}}{\longrightarrow} \pi_0,\end{align}

for some constant $\pi_0>0$. (If $\boldsymbol{\phi}$ is a probability sequence, then $\pi_0=\phi_0$.) Furthermore, $N_0({\mathcal T}^{\textbf{w},\circ}_{n})=N_0({\mathcal T}_{n,+})+N_0({\mathcal T}_{n,-})=N_0({\mathcal T}_{n,+})+O(1)$, since $N_0({\mathcal T}_{n,-})\leqslant|{\mathcal T}_{n,-}|=\ell$. Consequently, still conditioned on $|{\mathcal T}_{n,-}|=\ell$ for any fixed $\ell$,

(60) \begin{align} \frac{N_0({\mathcal T}^{\textbf{w},\circ}_{n})}{n}\overset{\textrm{p}}{\longrightarrow} \pi_0>0.\end{align}

Since $ |{\mathcal T}_{n,-}|=O_{\textrm{p}}(1)$ , it follows that (60) holds also unconditionally.

Since ${N_0({\mathcal T}^{\textbf{w},\circ}_{n})}/{n}\leqslant1$ , dominated convergence yields

(61) \begin{align} \frac{{\mathbb E{}} N_0({\mathcal T}^{\textbf{w},\circ}_{n})}{n}={\mathbb E{}} \frac{N_0({\mathcal T}^{\textbf{w},\circ}_{n})}{n}\to \pi_0.\end{align}

By (60) and (61),

(62) \begin{align}\frac{N_0({\mathcal T}^{\textbf{w},\circ}_{n})}{{\mathbb E{}} N_0({\mathcal T}^{\textbf{w},\circ}_{n})}\overset{\textrm{p}}{\longrightarrow}1,\end{align}

and thus, by dominated convergence again,

(63) \begin{align}{\mathbb E{}}\Bigl\lvert{ \frac{N_0({\mathcal T}^{\textbf{w},\circ}_{n})}{{\mathbb E{}} N_0({\mathcal T}^{\textbf{w},\circ}_{n})}-1}\Bigr\rvert\to0.\end{align}

The definition of $\widehat{{\mathcal T}}^{\textbf{w},\circ}_{n}$ by biasing means that for any bounded (or non-negative) function $f\,:\,\mathfrak{L}_n\to\mathbb R$ ,

(64) \begin{align} {\mathbb E{}} f(\widehat{{\mathcal T}}^{\textbf{w},\circ}_{n}) =\frac{{\mathbb E{}}\bigl[{f({\mathcal T}^{\textbf{w},\circ}_{n})N_0({\mathcal T}^{\textbf{w},\circ}_{n})}\bigr]}{{\mathbb E{}} N_0({\mathcal T}^{\textbf{w},\circ}_{n})},\end{align}

and thus, for any indicator function f,

(65) \begin{align}\bigl\lvert{{\mathbb E{}} f(\widehat{{\mathcal T}}^{\textbf{w},\circ}_{n})-{\mathbb E{}} f({\mathcal T}^{\textbf{w},\circ}_{n})}\bigr\rvert&=\Bigl\lvert{{\mathbb E{}} \Bigl[{f({\mathcal T}^{\textbf{w},\circ}_{n})\Bigl({\frac{{N_0({\mathcal T}^{\textbf{w},\circ}_{n})}}{{\mathbb E{}} N_0({\mathcal T}^{\textbf{w},\circ}_{n})}-1}\Bigr)}\Bigr]}\Bigr\rvert\notag\\&\leqslant{\mathbb E{}} \Bigl\lvert{\frac{{N_0({\mathcal T}^{\textbf{w},\circ}_{n})}}{{\mathbb E{}} N_0({\mathcal T}^{\textbf{w},\circ}_{n})}-1}\Bigr\rvert.\end{align}

Hence, taking the supremum over all $f=\boldsymbol1_{A}$ ,

(66) \begin{align}d_{\textrm{TV}}(\widehat{{\mathcal T}}^{\textbf{w},\circ}_{n},{\mathcal T}^{\textbf{w},\circ}_{n})\leqslant{\mathbb E{}} \Bigl\lvert{\frac{{N_0({\mathcal T}^{\textbf{w},\circ}_{n})}}{{\mathbb E{}} N_0({\mathcal T}^{\textbf{w},\circ}_{n})}-1}\Bigr\rvert,\end{align}

and the result follows by (63).
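The biasing identity (64) and the resulting bound (66) hold for any finite weighted family, and it may be instructive to check them numerically. The following sketch (with invented weights and leaf counts, purely for illustration) computes both sides of (66) exactly for a toy family of five "trees":

```python
from fractions import Fraction

# Toy finite family: "trees" T with weights w(T) and leaf counts N0(T)
# (the numbers are invented, purely for illustration).
weights = [Fraction(x) for x in (3, 1, 4, 1, 5)]
leaves = [2, 3, 1, 4, 2]

Z = sum(weights)
p = [w / Z for w in weights]                      # law with P(T) proportional to w(T)
EN0 = sum(pi * n0 for pi, n0 in zip(p, leaves))   # E N_0
q = [pi * n0 / EN0 for pi, n0 in zip(p, leaves)]  # biased law, Q(T) proportional to N0(T) w(T)

# d_TV as the supremum over events, and the right-hand side of (66).
d_tv = sum(abs(qi - pi) for pi, qi in zip(p, q)) / 2
bound = sum(pi * abs(Fraction(n0) / EN0 - 1) for pi, n0 in zip(p, leaves))

assert d_tv <= bound                              # the inequality (66)
```

In this discrete setting the right-hand side of (66) is exactly $\sum_T|Q(T)-P(T)|$, i.e. twice the total variation distance, so the bound is tight up to a factor 2.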

Lemma 8.2 implies that any result on convergence in probability or distribution for one of $\widehat{{\mathcal T}}^{\textbf{w},\circ}_{n}$ and ${\mathcal T}^{\textbf{w},\circ}_{n}$ also holds for the other.

9. Profile of conditioned modified Galton–Watson trees

We will use the following extension of Theorem 1.2 to conditioned modified Galton–Watson trees.

Theorem 9.1. Let $L_n$ be the profile of a conditioned modified Galton–Watson tree ${\mathcal T}^{{{\textbf{p}},{\textbf{p}}^0}}_n$ of order n and assume that $\mu({\textbf{p}})=1$ , $\sigma^2({\textbf{p}})<\infty$ and $\mu({\textbf{p}}^0)<\infty$ . Then, as ${{n\to\infty}}$ ,

(67) \begin{align} n^{-1/2} L_n(x n^{1/2}) \overset{{d}}{\longrightarrow} \frac{\sigma}2L_{{\textbf{e}}}\Bigl({\frac{\sigma}2 x}\Bigr),\end{align}

in the space $C[0,\infty]$ , where $L_{{\textbf{e}}}$ is, as in Theorem 1.2, the local time of a standard Brownian excursion ${\textbf{e}}$ .

Proof. Denote the branches of ${\mathcal T}^{{\textbf{p},\textbf{p}^0}}_n$ by ${\mathcal T}_1,\dots,{\mathcal T}_{d(o)}$ and let ${\mathcal T}_0$ be a single root. Then, regarding the branches as rooted trees, which means that their vertices have their depths shifted by 1 from the original tree,

(68) \begin{align}L_n(x)=\sum_{i=1}^{d(o)} L_{{\mathcal T}_i}(x-1)+L_{{\mathcal T}_0}(x).\end{align}
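The decomposition (68) is an exact combinatorial identity for any finite rooted tree: a vertex at depth $k\geqslant1$ sits at depth $k-1$ inside its branch. A quick sketch on a small hand-built tree (hypothetical example, not from the paper):

```python
from collections import Counter

# Hypothetical rooted tree as a parent map; vertex 0 is the root o.
parent = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 5}

def depth(v):
    k = 0
    while v != 0:
        v, k = parent[v], k + 1
    return k

def root_child(v):                  # the child of o on the path from v to o
    while parent[v] != 0:
        v = parent[v]
    return v

# Left-hand side of (68): the profile of the whole tree.
L = Counter(depth(v) for v in [0, *parent])

# Right-hand side: profiles of the branches (depths measured inside each
# branch), evaluated at x - 1, plus the single-root tree T_0.
children_of_root = [v for v in parent if parent[v] == 0]
branch_profile = {c: Counter() for c in children_of_root}
for v in parent:                    # every non-root vertex lies in one branch
    branch_profile[root_child(v)][depth(v) - 1] += 1

rhs = Counter({0: 1})               # L_{T_0}: the root alone, at depth 0
for c in children_of_root:
    for k, cnt in branch_profile[c].items():
        rhs[k + 1] += cnt           # the shift x -> x - 1 in (68)

assert L == rhs
```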

Let ${\mathcal T}_{(1)},\dots,{\mathcal T}_{({d(o)})}$ be the branches arranged in decreasing order. Lemma 4.3 shows that $|{\mathcal T}_{(1)}|=n-O_{\textrm{p}}(1)$ . Hence, (68) and the trivial estimate $0\leqslant L_T(x)\leqslant |T|$ for any T and x yield

(69) \begin{align}\bigl\lvert{L_n(x)-L_{{\mathcal T}_{(1)}}(x-1)}\bigr\rvert \leqslant\sum_{i=2}^{d(o)}|{\mathcal T}_{({i})}|+1=n-|{\mathcal T}_{(1)}|=O_{\textrm{p}}(1).\end{align}

Furthermore, conditioned on $|{\mathcal T}_{(1)}|=n-\ell$ , for any fixed $\ell$ , ${\mathcal T}_{(1)}$ has the same distribution as ${\mathcal T}^{{\textbf{p}}}_{n-\ell}$ , and thus Theorem 1.2 shows that

(70) \begin{align} (n-\ell)^{-1/2} L_{{\mathcal T}_{(1)}}(x (n-\ell)^{1/2})\overset{\textrm{d}}{\longrightarrow} \frac{\sigma}2L_\textbf{e}\Bigl({\frac{\sigma}2 x}\Bigr),\qquad\text{in }C[0,\infty],\end{align}

and it follows easily that, still conditioned,

(71) \begin{align} n^{-1/2} L_{{\mathcal T}_{(1)}}(x n^{1/2} -1)\overset{\textrm{d}}{\longrightarrow} \frac{\sigma}2L_\textbf{e}\Bigl({\frac{\sigma}2 x}\Bigr),\qquad\text{in }C[0,\infty].\end{align}

Together with (69), this shows that for every fixed $\ell$ ,

(72) \begin{align} \bigl({L_n(x)\mid |{\mathcal T}_{(1)}|=n-\ell}\bigr)\overset{\textrm{d}}{\longrightarrow} \frac{\sigma}2L_\textbf{e}\Bigl({\frac{\sigma}2 x}\Bigr),\qquad\text{in }C[0,\infty]. \end{align}

It follows that (72) holds also if we condition on $n-|{\mathcal T}_{(1)}|\leqslant K$ , for any fixed K, and then (67) follows easily from $n-|{\mathcal T}_{(1)}|=O_{\textrm{p}}(1)$ .

Recall that for conditioned Galton–Watson trees ${\mathcal T}^{{\textbf{p}}}_n$ with $\mu(\textbf{p})=1$ and $\sigma^2(\textbf{p})<\infty$ , the width divided by $\sqrt n$ converges in distribution: we have

(73) \begin{align} n^{-1/2} W({\mathcal T}^{{\textbf{p}}}_n) \overset{\textrm{d}}{\longrightarrow} \sigma W,\end{align}

for some random variable W (not depending on $\textbf{p}$ ). In fact, as noted by Drmota and Gittenberger [Reference Drmota and Gittenberger18], this is an immediate consequence of (13) and (1), with

(74) \begin{align} W\,:\!=\,\tfrac12\max_{x\geqslant0} L_\textbf{e}(x).\end{align}

It is also known that all moments converge, see [Reference Drmota and Gittenberger19] (assuming an exponential moment) and [Reference Addario-Berry, Devroye and Janson1] (in general).

The next theorem records that (73) extends to conditioned modified Galton–Watson trees, together with two partial results on moments.

Theorem 9.2. Consider a conditioned modified Galton–Watson tree ${\mathcal T}^{{{\textbf{p}},{\textbf{p}}^0}}_n$ where $\mu({\textbf{p}})=1$ , $\sigma^2({\textbf{p}})<\infty$ and $\sigma^2({\textbf{p}}^0)<\infty$ . Then, as ${{n\to\infty}}$ ,

(75) \begin{align} n^{-1/2} W({\mathcal T}^{{{\textbf{p}},{\textbf{p}}^0}}_n) &\overset{\textrm{d}}{\longrightarrow} \sigma W,\end{align}
(76) \begin{align} n^{-1/2} {\mathbb E{}} W({\mathcal T}^{{{\textbf{p}},{\textbf{p}}^0}}_n) &\to \sigma {\mathbb E{}} W =\sigma\sqrt{\pi/2},\end{align}
(77) \begin{align}{\mathbb E{}} \bigl[{W({\mathcal T}^{{{\textbf{p}},{\textbf{p}}^0}}_n)^{2}}\bigr]& = O(n). \end{align}

Proof. First, (75) follows as in [Reference Drmota and Gittenberger18]: $f\to\sup f$ is a continuous functional on $C[0,\infty]$ , and thus (75) follows from (67), (13) and (74).

We next prove (77). Denote the branches of ${\mathcal T}^{{\textbf{p},\textbf{p}^0}}_n$ by ${\mathcal T}_1,\dots,{\mathcal T}_{d(o)}$. Assume $n>1$; then the width is attained above the root, and we have, for every $i\leqslant d(o)$,

(78) \begin{align} W({\mathcal T}_i) \leqslant W({\mathcal T}^{{\textbf{p},\textbf{p}^0}}_n) \leqslant \sum_{i=1}^{d(o)}W({\mathcal T}_i).\end{align}
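The two-sided bound (78) is elementary: each level of a branch sits inside a single level of the whole tree, while each level of the tree above the root is split across the branches. It can be checked directly on a small example (the tree below is hypothetical; the width is the maximal level size):

```python
from collections import Counter

# Hypothetical rooted tree (root 0) as a parent map.
parent = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 5, 7: 5}

def depth(v):
    k = 0
    while v != 0:
        v, k = parent[v], k + 1
    return k

def root_child(v):                   # child of the root above v
    while parent[v] != 0:
        v = parent[v]
    return v

def width(depths):                   # largest number of vertices on one level
    return max(Counter(depths).values())

W_total = width(depth(v) for v in [0, *parent])
children = [v for v in parent if parent[v] == 0]
W_branch = [width(depth(v) - 1 for v in parent if root_child(v) == c)
            for c in children]

# The bound (78): termwise on the left, subadditively on the right (n > 1).
assert all(w <= W_total for w in W_branch)
assert W_total <= sum(W_branch)
```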

Condition on d(o) and $|{\mathcal T}_1|,\dots,|{\mathcal T}_{d(o)}|$ as in Lemma 4.1. For a random variable X, denote its conditioned $L^2$ norm by

(79) \begin{align}\lVert{{X}}\rVert^{\prime}_2\,:\!=\,\bigl({{\mathbb E{}}\bigl[{ X^2\mid d(o),|{\mathcal T}_1|,\dots,|{\mathcal T}_{d(o)}|}\bigr]}\bigr)^{1/2}.\end{align}

By (78) and Minkowski’s inequality, we have

(80) \begin{align} \lVert{{W({\mathcal T}^{{\textbf{p},\textbf{p}^0}}_n)}}\rVert^{\prime}_2\leqslant \sum_{i=1}^{d(o)}\lVert{W({\mathcal T}_i)}\rVert^{\prime}_2.\end{align}

Furthermore, by Lemma 4.1 and [Reference Addario-Berry, Devroye and Janson1, Corollary 1.3], if $|{\mathcal T}_i|=n_i$ ,

(81) \begin{align} {\mathbb E{}} \bigl({W({\mathcal T}_i)^2 \mid d(o),|{\mathcal T}_1|,\dots,|{\mathcal T}_{d(o)}|}\bigr) = {\mathbb E{}} \bigl[{W({\mathcal T}^{{\textbf{p}}}_{n_i})^2}\bigr]\leqslant C n_i,\end{align}

and thus $\lVert{{W({\mathcal T}_i)}}\rVert^{\prime}_2 \leqslant C n_i^{1/2} = C|{\mathcal T}_i|^{1/2}$ . Hence, by (80),

(82) \begin{align} \lVert{{W({\mathcal T}^{{\textbf{p},\textbf{p}^0}}_n)}}\rVert^{\prime}_2\leqslant \sum_{i=1}^{d(o)} C |{\mathcal T}_i|^{1/2}\end{align}

and thus, by the Cauchy–Schwarz inequality,

(83) \begin{align}&{{\mathbb E{}}\bigl[{ W({\mathcal T}^{{\textbf{p},\textbf{p}^0}}_n)^2\mid d(o),|{\mathcal T}_1|,\dots,|{\mathcal T}_{d(o)}|}\bigr]}=\bigl({ \lVert{{W({\mathcal T}^{{\textbf{p},\textbf{p}^0}}_n)}}\rVert^{\prime}_2}\bigr)^2\notag\\&\qquad\leqslant C\Bigl({ \sum_{i=1}^{d(o)} |{\mathcal T}_i|^{1/2}}\Bigr)^2\leqslant C d(o) \sum_{i=1}^{d(o)} |{\mathcal T}_i|\leqslant C d(o) n.\end{align}

Taking the expectation yields

(84) \begin{align}{\mathbb E{}}\bigl[{ W({\mathcal T}^{{\textbf{p},\textbf{p}^0}}_n)^2}\bigr]\leqslant C n {\mathbb E{}}\bigl[{ d(o)}\bigr].\end{align}

Furthermore, (35) implies that for large n, ${\mathbb E{}}[{ d(o)}] \leqslant 2{\mathbb E{}}{ \widetilde D}$ , where ${\mathbb E{}}\widetilde D<\infty $ by Lemma 4.2. Thus, ${\mathbb E{}} [{d(o)}]\leqslant C$ , and (84) yields

(85) \begin{align}{\mathbb E{}}\bigl[{ W({\mathcal T}^{{\textbf{p},\textbf{p}^0}}_n)^2}\bigr]\leqslant C n,\end{align}

showing (77).

Finally, (77) implies that the variables on the left-hand side of (75) are uniformly integrable [Reference Gut26, Theorem 5.4.2], and thus (76) follows from (75). The value ${\mathbb E{}} W=\sqrt{\pi/2}$ is well known; see e.g. [Reference Biane, Pitman and Yor12].

Problem 9.3. We conjecture that under the assumptions of Theorem 9.2, ${\mathbb E{}}\bigl[{ W({\mathcal T}^{{\textbf{p},\textbf{p}^0}}_n)^r}\bigr]=O\bigl({n^{r/2}}\bigr)$ for any $r>0$ , which implies convergence of all moments in (75), as shown for the case $\textbf{p}^0=\textbf{p}$ in [Reference Addario-Berry, Devroye and Janson1]. The proof above is easily generalized if ${\mathbb E{}} \widetilde D^{r/2}=O(1)$ , which is equivalent to $\sum_k k^{1+r/2}{p}^0_k<\infty$ , but we leave the general case as an open problem.

Problem 9.4. Is $\sigma^2(\textbf{p}^0)<\infty$ really needed in Theorem 9.2?

10. Distance profile, first step

We now turn to distance profiles. We begin with a weak version of Theorem 1.4; recall the pseudometric $\textsf{d}$ defined in (20), and (21).

Lemma 10.1. Consider a conditioned Galton–Watson tree ${\mathcal T}^{{{\textbf{p}}}}_n$ where $\mu({\textbf{p}})=1$ and $\sigma^2=\sigma^2({\textbf{p}})<\infty$ . Then, as ${{n\to\infty}}$ , for any continuous function with compact support $f\,:\,[0,\infty)\to\mathbb R$ ,

(86) \begin{align}\int_0^\infty n^{-3/2} \Lambda_{{\mathcal T}^{{{\textbf{p}}}}_n}\bigl({x n^{1/2}}\bigr)f(x) \,{d} x\overset{{d}}{\longrightarrow}\int_0^1\int_0^1 f\Bigl({\frac{2}{\sigma}{\textsf{d}}\bigl({s,t;\,{\textbf{e}}}\bigr)}\Bigr)\,{d} s\,{d} t. \end{align}

Proof. The function f is bounded, and also uniformly continuous, i.e., its modulus of continuity $\omega(\delta;\,f)$ , defined in (9), satisfies $\omega(\delta;\,f)\to0$ as $\delta\to0$ . Thus, for any rooted tree $T\in\mathfrak{T}_n$ , noting that $\Lambda_T(x)\leqslant n$ on $[\!-\!1,0]$ and using the analogue of (11) for $\Lambda$ ,

(87) \begin{align}\hskip2em&\hskip-2em\int_0^\infty n^{-3/2} \Lambda_T\bigl({x n^{1/2}}\bigr)f(x) \,\textrm{d} x= n^{-2}\int_0^\infty \Lambda_T({x})f\bigl({n^{-1/2} x}\bigr) \,\textrm{d} x\notag\\&= n^{-2}\int_{-1}^\infty f\bigl({n^{-1/2} x}\bigr)\Lambda_T({x}) \,\textrm{d} x+O\bigl({n^{-1}}\bigr)\notag\\&= n^{-2}\sum_{i=0}^\infty \int_{i-1}^{i+1}f\bigl({n^{-1/2} x}\bigr)\Lambda_T(i)\tau(x-i) \,\textrm{d} x +O\bigl({n^{-1}}\bigr)\notag\\&= n^{-2}\sum_{i=0}^\infty f\bigl({n^{-1/2} i}\bigr)\Lambda_T(i)+ O\bigl({\omega(n^{-1/2};\,f)}\bigr)+O\bigl({n^{-1}}\bigr)\notag\\&= n^{-2}\sum_{v,w\in T} f\bigl({n^{-1/2}\textsf{d}(v,w)}\bigr)+ o(1),\end{align}

where (as throughout the proof) o(1) tends to 0 as ${{n\to\infty}}$ , uniformly in $T\in\mathfrak{T}_n$ . Recall that the contour process $C_T(x)$ of T is a continuous function $C_T\,:\,[0,2n-2]\to[0,\infty)$ that describes the distance from the root to a particle that travels with speed 1 on the ‘outside’ of the tree. (Equivalently, it performs a depth first walk at integer times $0,1,\dots,2n-2$ .) For each vertex $v\neq o$ , the particle travels through the edge leading from v towards the root during two time intervals of unit length (once in each direction). Thus, as is well known,

(88) \begin{align} \int_0^{2n-2} f\bigl({n^{-1/2} C_T(x)}\bigr)\,\textrm{d} x= 2\sum_{v\neq o} f\bigl({n^{-1/2} \textsf{d}(v,o)}\bigr)+O\bigl({n\omega(n^{-1/2};\, f)}\bigr).\end{align}

We will use a bivariate version of this. It is also well known that if v(i) is the vertex visited by the particle at time i, then, for any integers $i,j\in[0,2n-2]$ ,

(89) \begin{align} \textsf{d}\bigl({v(i),v(j)}\bigr)=\textsf{d}(i,j;\,C_T),\end{align}

where the first $\textsf{d}$ is the graph distance in T, and the second is the pseudometric defined by (20) (now on the interval $[0,2n-2]$ ). Hence, the argument yielding (88) also yields

(90) \begin{align}\int_0^{2n-2}\int_0^{2n-2}f\bigl({n^{-1/2}\textsf{d}(x,y;\,C_T)}\bigr)\,\textrm{d} x\,\textrm{d} y =4 \sum_{v,w\neq o} f\bigl({n^{-1/2}\textsf{d}(v,w)}\bigr) + O\bigl({n^2\omega(n^{-1/2};\,f)}\bigr).\end{align}
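The identity (89) is purely combinatorial and can be verified exhaustively on a small example. The sketch below (a hypothetical ordered rooted tree) takes the pseudometric of (20) to be the standard excursion pseudometric $\textsf{d}(s,t;\,g)=g(s)+g(t)-2\min_{[s\wedge t,\,s\vee t]}g$, which is an assumption here since (20) lies outside this excerpt:

```python
# A direct check of (89) on a small hypothetical ordered rooted tree.
children = {0: [1, 2], 1: [3, 4], 2: [5], 3: [], 4: [], 5: [6], 6: []}
parent = {c: v for v in children for c in children[v]}

# Contour walk: C[i] is the depth of the particle at time i, visit[i] the
# vertex it sits on; each edge is traversed once in each direction.
C, visit = [0], [0]
def walk(v, h):
    for c in children[v]:
        C.append(h + 1); visit.append(c)
        walk(c, h + 1)
        C.append(h); visit.append(v)
walk(0, 0)
assert len(C) == 2 * len(children) - 1          # times 0, 1, ..., 2n - 2

def ancestors(v):                               # path from v up to the root
    a = [v]
    while a[-1] != 0:
        a.append(parent[a[-1]])
    return a

def tree_dist(u, v):                            # graph distance in the tree
    au, av = ancestors(u), ancestors(v)
    lca = next(x for x in au if x in av)        # lowest common ancestor
    return au.index(lca) + av.index(lca)

# (89): graph distance equals the contour pseudometric at integer times.
for i in range(len(C)):
    for j in range(i, len(C)):
        assert C[i] + C[j] - 2 * min(C[i:j + 1]) == tree_dist(visit[i], visit[j])
```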

We use the standard rescaling of the contour process

(91) \begin{align} \widetilde C_T(t)\,:\!=\,n^{-1/2} C_T\bigl({(2n-2)t}\bigr), \qquad t \in [0,1],\end{align}

and note that for any $g\,:\,[0,1] \rightarrow [0, \infty)$ with $g(0) = g(1) = 0$ and $c>0$ ,

(92) \begin{align} \textsf{d}(s,t;\,cg)=c\textsf{d}(s,t;\,g), \qquad s,t \in [0,1].\end{align}

Thus, by (90) and a change of variables,

(93) \begin{align}\hskip4em&\hskip-4em\int_0^{1}\int_0^{1}f\bigl({\textsf{d}(s,t;\,\widetilde C_T)}\bigr)\,\textrm{d} s\,\textrm{d} t\notag\\&=\frac{1}{(2n-2)^2}\int_0^{2n-2}\int_0^{2n-2}f\bigl({n^{-1/2}\textsf{d}(x,y;\,C_T)}\bigr)\,\textrm{d} x\,\textrm{d} y\notag\\&=\frac{1}{(n-1)^2} \sum_{v,w\neq o} f\bigl({n^{-1/2}\textsf{d}(v,w)}\bigr)+ O\bigl({\omega(n^{-1/2};\,f)}\bigr)\notag\\&=\frac{1}{n^2} \sum_{v,w\neq o} f\bigl({n^{-1/2}\textsf{d}(v,w)}\bigr) +o(1).\end{align}

Combining (87) and (93), we find

(94) \begin{align}\hskip2em&\hskip-2em\int_0^\infty n^{-3/2} \Lambda_T\bigl({x n^{1/2}}\bigr)f(x) \,\textrm{d} x=\int_0^{1}\int_0^{1}f\bigl({\textsf{d}(s,t;\,\widetilde C_T)}\bigr)\,\textrm{d} s\,\textrm{d} t+o(1).\end{align}

We apply this to $T={\mathcal T}^{{\textbf{p}}}_n$ and use the result by Aldous [Reference Aldous3, Reference Aldous4],

(95) \begin{align} \widetilde C_{{\mathcal T}^{{\textbf{p}}}_n}(t)\overset{\textrm{d}}{\longrightarrow} \frac{2}{\sigma}\textbf{e}(t),\qquad \text{in $C{[0,1]}$}.\end{align}

The functional $g\to\iint f\bigl({\textsf{d}(s,t;\,g)}\bigr)\,\textrm{d} s\,\textrm{d} t$ is continuous on $C{[0,1]}$, and the result (86) follows from (94) and (95) by the continuous mapping theorem, using also (92).

11. Distance profile of unrooted trees

We continue with the distance profile, now turning to unrooted simply generated trees for a while. Throughout this section, we assume that $\textbf{w}$ is a weight sequence and that $\boldsymbol{\phi}$ and $\boldsymbol{\phi}^0$ are the weight sequences given by (49) and (50). We assume that the exponential generating function of $\textbf{w}$ has positive radius of convergence; this means that the generating function $\Phi(z)$ in (29) has positive radius of convergence, which in turn implies that there exists a probability weight sequence $\textbf{p}$ equivalent to $\boldsymbol{\phi}$ . We assume furthermore that it is possible to choose $\textbf{p}$ such that $\mu(\textbf{p})=1$ ; $\textbf{p}$ will denote this choice. (For algebraic conditions on $\Phi$ for such a $\textbf{p}$ to exist, see e.g. [Reference Janson30].)

We note that by (49)–(50), $\phi^0_k\leqslant\phi_{k-1}$ , $k\geqslant1$ . Hence, if $p_k=ab^k\phi_k$ , then $\sum_k b^k\phi^0_k<\infty$ , and it is possible to find $a_0>0$ such that $\textbf{p}^0\,:\!=\,\boldsymbol{\phi}^{0\prime}$ given by (32) also is a probability sequence; hence ${\mathcal T}^{{\boldsymbol{\phi},\boldsymbol{\phi}^0}}_n={\mathcal T}^{{\textbf{p},\textbf{p}^0}}_n$ is a modified Galton–Watson tree. Furthermore, ${p}^0_k\leqslant (a_0/a)p_k$ , and thus if $\sigma^2(\textbf{p})<\infty$ , then $\sigma^2(\textbf{p}^0)<\infty$ .

We begin with an unrooted version of Lemma 10.1.

Lemma 11.1. Let ${\textbf{w}}$ , $\boldsymbol{\phi}$ and ${\textbf{p}}$ be as above and assume $\sigma^2\,:\!=\,\sigma^2({\textbf{p}})<\infty$ . Let $\Lambda_n$ be the distance profile of the unrooted simply generated tree ${\mathcal T}^{{\textbf{w}},\circ}_{n}$ . Then, as ${{n\to\infty}}$ , for any continuous function with compact support $f\,:\,[0,\infty)\to\mathbb R$ ,

(96) \begin{align}\int_0^\infty n^{-3/2} \Lambda_{n}\bigl({x n^{1/2}}\bigr)f(x) \,{d} x\overset{{d}}{\longrightarrow}\int_0^1\int_0^1 f\Bigl({\frac{2}{\sigma}{\textsf{d}}\bigl({s,t;\,{\textbf{e}}}\bigr)}\Bigr)\,{d} s\,{d} t. \end{align}

Proof. Consider the leaf-biased random tree $\widehat{{\mathcal T}}^{\textbf{w},\circ}_{n}$ defined in Section 8. By Lemma 8.2, we may assume ${\mathbb P{}}(\widehat{{\mathcal T}}^{\textbf{w},\circ}_{n}\neq{\mathcal T}^{\textbf{w},\circ}_{n})\to0$ and thus it suffices to show (96) with $\Lambda_{\widehat{{\mathcal T}}^{\textbf{w},\circ}_{n}}$ instead of $\Lambda_n$ . If ${\mathcal T}_{n,+}$ denotes the unique branch of $\widehat{{\mathcal T}}^{\textbf{w},\circ}_{n}$ , then, trivially,

(97) \begin{align}0\leqslant \Lambda_{\widehat{{\mathcal T}}^{\textbf{w},\circ}_{n}}(x)-\Lambda_{{\mathcal T}_{n,+}}(x) \leqslant 2n-1, \qquad x\geqslant0,\end{align}

and thus we may further reduce and replace $\Lambda_n$ in (96) by $\Lambda_{{\mathcal T}_{n,+}}$. As shown in Section 8, ${\mathcal T}_{n,+}\overset{\textrm{d}}{=}{\mathcal T}^{{\boldsymbol{\phi}}}_{n-1}={\mathcal T}^{{\textbf{p}}}_{n-1}$, and the result now follows from Lemma 10.1, replacing $n$ there by $n-1$ and $x$ by $(n/(n-1))^{1/2} x$, noting that $\sup_x|f(x)-f\bigl({(n/(n-1))^{1/2} x}\bigr)|\to0$ as ${{n\to\infty}}$.

Theorem 11.2. Let ${\textbf{w}}$ , $\boldsymbol{\phi}$ and ${\textbf{p}}$ be as above, and assume $\sigma^2\,:\!=\,\sigma^2({\textbf{p}})<\infty$ . Let $\Lambda_n$ be the distance profile of the unrooted simply generated tree ${\mathcal T}^{{\textbf{w}},\circ}_{n}$ . Then, as ${{n\to\infty}}$ ,

(98) \begin{align} n^{-3/2}\Lambda_n\bigl({x n^{1/2}}\bigr)\overset{{d}}{\longrightarrow}\frac{\sigma}2\Lambda_{{\textbf{e}}}\Bigl({\frac{\sigma}2 x}\Bigr),\end{align}

in the space $C[0,\infty]$ , where $\Lambda_{{\textbf{e}}}(x)$ is as in Theorem 1.4.

Proof. Let

(99) \begin{align} Y_n(x)\,:\!=\,n^{-3/2}\Lambda_n\bigl({x n^{1/2}}\bigr)=n^{-3/2} \Lambda_{{\mathcal T}^{\textbf{w},\circ}_{n}}\bigl({xn^{1/2}}\bigr).\end{align}

Regard $Y_n$ as a random element of $C[0,\infty]$ . Define also the mapping $\psi\,:\,C[0,\infty]\to\mathcal M([0,\infty))$ , the space of all locally finite Borel measures on $[0,\infty)$ , defined by $\psi(h)\,:\!=\,h(x)\,\textrm{d} x$ ; i.e., for $h\in C[0,\infty]$ and $f\in C[0,\infty)$ with compact support,

(100) \begin{align} \int_0^\infty f(x) \,\textrm{d} \psi(h)\,:\!=\,\int_0^\infty f(x)h(x)\,\textrm{d} x.\end{align}

In other words, $\psi(h)$ has density $h$.

We give $\mathcal M([0,\infty))$ the vague topology, i.e., $\nu_n\to\nu$ in $\mathcal M([0,\infty))$ if $\int f\,\textrm{d}\nu_n\to\int f\,\textrm{d}\nu$ for every $f\in C[0,\infty)$ with compact support, and note that $\mathcal M([0,\infty))$ is a Polish space, see e.g. [Reference Kallenberg32, Theorem A2.3]. Clearly, the separable Banach space $C[0,\infty]$ is also a Polish space. (Recall that a Polish space has a topology that can be defined by a complete separable metric.) It follows from the definition (100) that $\psi$ is continuous $C[0,\infty]\to\mathcal M([0,\infty))$ . Furthermore, $\psi$ is injective, since the density of a measure is a.e. uniquely determined.

We will use the alternative method of proof in [Reference Drmota17, p. 123–125], and show the following two properties:

Claim 1. The sequence $Y_n$ is tight in $C[0,\infty]$ .

Claim 2. The sequence of random measures $\psi(Y_n)$ converges in distribution in $\mathcal M([0,\infty))$ to some random measure $\zeta$ .

It then follows from [Reference Bousquet-Mélou and Janson14, Lemma 7.1] (see also [Reference Drmota17, Theorem 4.17]) that

(101) \begin{align} Y_n\overset{\textrm{d}}{\longrightarrow} Z,\qquad \text{in }C[0,\infty],\end{align}

for some random $Z\in C[0,\infty]$ such that

(102) \begin{align}\psi(Z)\overset{\textrm{d}}{=}\zeta.\end{align}

It will then be easy to complete the proof.

Proof of Claim 1: For $i=1,\dots,n$, let ${\mathcal T}(i)$ be ${\mathcal T}^{\textbf{w},\circ}_{n}$ rooted at $i$. By symmetry, all ${\mathcal T}(i)$ have the same distribution; moreover, they equal in distribution ${\mathcal T}^{\textbf{w},\bullet}_{n}$ defined in Section 6 (which has a random root). Hence, if we order each ${\mathcal T}(i)$ randomly, we have by Section 6

(103) \begin{align} {\mathcal T}(i)\overset{\textrm{d}}{=} {\mathcal T}^{\textbf{w},*}_n ={\mathcal T}^{{\boldsymbol{\phi},\boldsymbol{\phi}^0}}_n={\mathcal T}^{{\textbf{p},\textbf{p}^0}}_n.\end{align}

By (16),

(104) \begin{align}Y_n(x)=n^{-3/2} \Lambda_{{\mathcal T}^{\textbf{w},\circ}_{n}}\bigl({xn^{1/2}}\bigr)=\frac{1}{n}\sum_{i=1}^n n^{-1/2} L_{{\mathcal T}(i)}\bigl({xn^{1/2}}\bigr).\end{align}
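The discrete identity behind (16) and (104) is exact: the distance profile (counting ordered pairs, including $v=w$) equals the sum over all vertices $i$ of the height profile of the tree re-rooted at $i$. A sketch on a small hypothetical labelled tree:

```python
from collections import Counter

# Hypothetical unrooted labelled tree on 6 vertices, given by its edges.
edges = [(1, 2), (2, 3), (2, 4), (4, 5), (1, 6)]
V = range(1, 7)
adj = {v: [] for v in V}
for u, v in edges:
    adj[u].append(v); adj[v].append(u)

def bfs(r):                          # graph distances from r
    d, queue = {r: 0}, [r]
    for u in queue:
        for w in adj[u]:
            if w not in d:
                d[w] = d[u] + 1; queue.append(w)
    return d

dist = {v: bfs(v) for v in V}

# Distance profile: Lambda(k) = number of ordered pairs (v, w) at distance k.
Lambda = Counter(dist[v][w] for v in V for w in V)

# Sum over i of the height profile of T(i), the tree re-rooted at i.
profile_sum = Counter()
for i in V:
    profile_sum += Counter(dist[i].values())

assert Lambda == profile_sum         # discrete counterpart of (104)
```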

Since the sequence