
Analysis of $d$-ary tree algorithms with successive interference cancellation

Published online by Cambridge University Press:  26 February 2024

Quirin Vogel*
Affiliation:
Technical University of Munich
Yash Deshpande*
Affiliation:
Technical University of Munich
Cedomir Stefanović*
Affiliation:
Aalborg University
Wolfgang Kellerer*
Affiliation:
Technical University of Munich
*
*Postal address: Department of Mathematics, School of Computation, Information and Technology, Technical University of Munich, Germany. Email address: quirin.vogel@tum.de
**Postal address: Lehrstuhl für Kommunikationsnetze, School of Computation, Information and Technology, Technical University of Munich, Germany.
***Postal address: Department of Electronic Systems, Aalborg University, 2450 København SV, Denmark. Email address: cs@es.aau.dk

Abstract

We calculate the mean throughput, number of collisions, successes, and idle slots for random tree algorithms with successive interference cancellation. Except for the case of the throughput for the binary tree, all the results are new. We furthermore disprove the claim that only the binary tree maximizes throughput. Our method works with many observables and can be used as a blueprint for further analysis.

Type
Original Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

An important aspect of wireless communication involves multiple users utilizing a shared resource such as a wireless channel. If more than one user transmits on the same channel at a given time, their signals interfere with each other and their messages cannot be decoded. Hence, we need intelligent channel access schemes to facilitate efficient use of the wireless resources. Recently, the idea of massive Internet of things (IoT) for smart cities and smart factories has become popular [Reference Navarro-Ortiz21]. In a massive IoT, a large number of users transmit short packets to a single receiver. Moreover, the users become active randomly and hence their transmissions cannot be governed by a predetermined schedule. For massive IoT scenarios, distributed random access (RA) schemes are better suited as they provide minimal signalling and control overhead [Reference Wu26].

Tree algorithms are a class of distributed RA schemes. If the receiver cannot decode a message from a user due to interference from other users, then all interfering users retransmit their message at a random time in the future as selected by the tree algorithm until each user can transmit in a unique time period without interfering with any other user. Note that users can only communicate with the receiver and not among themselves.

Tree algorithms solve the problem by iteratively splitting users into different groups until each group has only one user. This repeated splitting can be described by a tree, hence the name tree algorithm. Each group then transmits in a time slot determined by the algorithm. A metric of a tree algorithm’s efficiency is the ratio between the number of users n and the time it takes until all users have successfully transmitted their packet, called the throughput. Yu and Giannakis introduced the binary tree algorithm with successive interference cancellation (SICTA) in [Reference Yu and Giannakis27]. SICTA extends previous tree algorithms and offers high throughput. The key idea of SICTA is to successively remove user packets along the tree once they become decoded; this way, some of the previous groups may become reduced to having just a single user, propelling a new round of decoding and successive interference cancellation (SIC). In this work, we analyze the throughput of SICTA, as well as the mean number of collisions, successes, and idle slots, for the general version of the algorithm in which the users randomly split into d groups, $d \geq 2$ .

The rest of the paper is organized as follows. In Section 1.1, we give a brief overview of the novel aspects of our work. In Section 2, we first give a brief mathematical description of the model, for readers unfamiliar with SICTA. We then state the main results. In Section 3 we provide background on tree-splitting algorithms and mention related work (see Section 3.3). In Section 4, we prove our results, i.e. we derive the correct expression for the collision resolution interval (CRI) length conditioned on the number n of initially collided users for the d-ary SICTA. We then give asymptotic expressions for the throughput, number of collision slots, and number of immediately decodable slots (henceforth referred to as successes) when the number of users n tends to infinity. We also derive results on the mean delay experienced by a user.

1.1. Overview of our contributions

Compared to other tree algorithms such as standard tree algorithm (STA) and modified tree algorithm (MTA), studying the properties of SICTA requires a more careful approach. Indeed, SIC (see Section 3) introduces further dependencies into the model that are non-trivial, especially in the case $d\ge 3$ . These subtle dependencies have caused errors in the literature [Reference Yu and Giannakis27], which were identified in [Reference Deshpande, Stefanović, Gürsu and Kellerer6]. However, [Reference Deshpande, Stefanović, Gürsu and Kellerer6] does not include the formal analysis but provides simulation results indicating the value of the final result. In this work, by adding another coordinate (the split number $\textrm{M}\in\{1,\ldots,d\}$ ), we are able to reformulate the model as a Markovian branching process and prove the correct results.

The analysis of d-ary tree algorithms is mainly (despite a number of graph-theoretic approaches having been carried out; see [Reference Evseev and Turlikov8] and references therein) done by combining generating functions with tools from complex analysis; see [Reference Fayolle, Flajolet and Hofri9, Reference Massey17, Reference Mathys and Flajolet19, Reference Molle and Shih20, Reference Yu and Giannakis27], for example. With the Markovian structure at hand, we can use the aforementioned tools to derive closed-form expressions for many observables of the process. Arguably the most important characteristic of a tree algorithm is the CRI length, denoted by $(l_n)_{n\ge 0}$ , conditioned on the number of packets in the initial collision n. We analyze the law of $l_n$ by deriving a functional equation, which the moment-generating function for $l_n$ solves, see Proposition 1. To obtain an explicit formula for the mean $L_n=\mathbb{E}[l_n]$ , we differentiate the moment-generating function and solve the ensuing functional equation. This method also works for the variance of $l_n$ , as well as for the higher moments. We also give the functional relations for the moment-generating functions of the number of collisions and successes occurring during SICTA and derive explicit formulas for their means. We stress that our method works for a large class of observables, although the solution of the functional equations must be checked on a case-by-case basis; see the proof of Corollary 1.

Using the explicit formula for $L_n$ , we leverage asymptotic analysis to extract the leading term. Contrary to many tree models (see [Reference Drmota5] for an overview), the mean CRI length $L_n$ does not converge when divided by n but instead has small, non-vanishing fluctuations. Asymptotic analysis was done in detail for STA in the case of equal splitting in [Reference Mathys and Flajolet19], and for $d=2$ and biased splitting in [Reference Fayolle, Flajolet and Hofri9]. To derive the leading term from the explicit formulas for the mean throughput, number of collisions, and successes, we developed a more robust expansion that works for both fair and biased splitting and any $d\ge 2$ . We achieve this by using some explicit identities derived from the binomial series together with the geometric sum formula. This can then be combined with use of the residue theorem as in [Reference Mathys and Flajolet19].

We furthermore calculate the extremal points for the leading order of the observables. We verify the conjecture that the maximal throughput of $\log\!(2)$ can be achieved for any d-ary SICTA (with $d\ge 2$ ), given suitable splitting probabilities. This conjecture was formulated in [Reference Deshpande, Stefanović, Gürsu and Kellerer6], based on numerical simulations.

We also numerically simulate the minimal collision rate subject to a minimum-throughput constraint. As the number of collisions corresponds to the number of signals stored in the access point, this result helps to gauge memory requirements. We show that a small reduction in throughput allows for a (relatively) large reduction in collisions. This is of interest when the arrival rate is not too close to the critical stability threshold, as one is able to reduce collisions without affecting the mean throughput much.

In the final section of the paper, we give a recursive relation which allows for the calculation of the moment-generating function of $l_n$ up to arbitrary degree: see Proposition 3. We also solve a functional equation for the mean delay for SICTA in steady state: see Proposition 4.

2. Results

2.1. A mathematical model for SICTA

In this section we give an abridged description of SICTA from the mathematical point of view, in order to be able to state the main results rigorously. For readers familiar with SICTA this can be skipped on first reading. For a full description of the algorithm we refer the reader to Section 3.

The underlying objects of our study are d-ary ( $d\ge 2$ ) labelled trees with random, integer-valued labels. The label of the root is a fixed, non-random number in $\mathbb{N}_0$ . The nodes with label $n\in\{0,1\}$ have no children. Nodes with label $n>1$ have children $(c_1,\ldots,c_d)$ . The labels of the children are distributed according to the multinomial distribution $\textrm{Mult}(n,p)$ , where $p\in (0,1)^d$ is a vector of splitting probabilities, i.e. $\sum_{j=1}^dp_j=1$ . We keep p fixed throughout the entire tree. As $0<p_j<1$ for all $j\in\{1,\ldots,d\}$ , the resulting tree will almost surely be finite. Labelling only depends on the parent node, and hence the resulting tree has a Markovian structure.

For the STA, the CRI length $l_n$ is defined as the total number of nodes in the tree. However, for SICTA certain nodes are skipped: if the sum of the labels to the left of a node is greater than or equal to the label of the parent minus 1, this node will not be counted; see Section 3 for more details and a justification of this. We also refer the reader to Figure 1 for an example. A definition of the CRI length $l_n$ for SICTA is as follows: given a fixed node with label $n\ge 2$ , we denote the last non-skipped slot by M, which is defined as

(1) \begin{equation} \textrm{M} = \inf\Bigg\{k\in\{1,\ldots,d\} \colon \sum_{j=1}^k {I_j} \ge n-1\Bigg\} ,\end{equation}

where $\{I_j\}_{j\in\{1,\ldots,d\}}$ are the labels of the children $(c_1,\ldots,c_d)$ of the node. As $\{I_j\}_{j\in\{1,\ldots,d\}}$ are $\textrm{Mult}(n,p)$ distributed, their sum equals n and hence M is well defined. A recursive definition of the $l_n$ is then given by

(2) \begin{equation} l_n = \begin{cases} 1 & \text{if } n=0,1 , \\[5pt] \mathbf{1}\{\textrm{M}<d\} + \sum_{j=1}^\textrm{M} l_{I_j} & \text{if } n\ge 2 ; \end{cases}\end{equation}

see Section 3.2 for a derivation.
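For concreteness, the recursion (1)–(2) can be sampled directly. The following Monte Carlo sketch is ours, not part of the paper (function and variable names are our own); it assumes the multinomial splitting defined above and estimates $\mathbb{E}[l_n]$ by averaging independent samples.

```python
import random

def cri_length(n, p, rng):
    """Sample the SICTA CRI length l_n via the recursion (1)-(2)."""
    if n <= 1:
        return 1  # l_0 = l_1 = 1
    d = len(p)
    # Multinomial split: each of the n users independently joins
    # group j with probability p[j].
    counts = [0] * d
    for _ in range(n):
        u, acc = rng.random(), 0.0
        for j in range(d):
            acc += p[j]
            if u < acc:
                counts[j] += 1
                break
        else:
            counts[d - 1] += 1
    # M = first k with I_1 + ... + I_k >= n - 1; slots after M are skipped.
    running, M = 0, d
    for k in range(d):
        running += counts[k]
        if running >= n - 1:
            M = k + 1
            break
    # l_n = 1{M < d} + sum_{j=1}^{M} l_{I_j}
    return (1 if M < d else 0) + sum(cri_length(counts[j], p, rng)
                                     for j in range(M))

rng = random.Random(2024)
estimate = sum(cri_length(2, [0.5, 0.5], rng) for _ in range(4000)) / 4000
# For n = 2, d = 2 and fair splitting, E[l_2] = 3 (cf. Theorem 1(i)),
# so the estimate should be close to 3.
```

Solving the recursion by hand for $n=2$, $d=2$ and fair splitting gives $L_2 = \frac14(1+L_2) + \frac12\cdot 2 + \frac14(1+L_2)$, i.e. $L_2=3$, which the simulation reproduces.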

Figure 1. Illustration of the ternary ( $d=3$ ) tree algorithm. The number outside each node represents the slot number. The number inside each node in the tree represents the number of users transmitting in that slot. Slots 5, 8, 9, and 10 will be skipped in the SICTA.

2.2. Main results

For $k\in\{0,\ldots,d-1\}$ , we write $\overline{\textrm{F}}(k)=\sum_{j=k+1}^dp_j$ .

Theorem 1. For any $d\ge 2$ and any probability vector $p\in (0,1)^d$ with $\sum_{j=1}^d p_j=1$ :

  1. (i) For $n\ge 1$ ,

    (3) \begin{equation} \mathbb{E}[l_n] = L_n = 1 + \sum_{i=2}^n\binom{n}{i} \frac{(-1)^{i}(i-1)\sum_{k=0}^{d-2}\overline{\textrm{F}}(k)^i}{1-\sum_{j=1}^dp_j^i} . \end{equation}
  2. (ii) As $n\to\infty$ ,

    (4) \begin{equation} \frac{L_n}{n} = \frac{\sum_{k=0}^{d-2}\overline{\textrm{F}}(k)}{-\sum_{j=1}^dp_j\log p_j} + g_1(n) + o(1) , \end{equation}
    where $g_1(n)$ is given in (39). Furthermore, if (28) has no positive integer solution, then $g_1(n)=0$ . The term $g_1(n)$ is usually very small but has a lengthy expression; we have therefore deferred its definition until later.
  3. (iii) The first term on the right-hand side of (4) is minimized for $p=p^\textrm{bi}\in (0,1)^d$ with $p_j=2^{-\min\{j,d-1\}}$ , $j\in\{1,\ldots,d\}$ . For $p^\textrm{bi}$ ,

    \begin{equation*} \frac{\sum_{k=0}^{d-2}\overline{\textrm{F}}(k)}{-\sum_{j=1}^dp_j^{\textrm{bi}}\log p_j^{\textrm{bi}}} = \frac{1}{\log\!(2)} . \end{equation*}
    Furthermore, for $p^\textrm{bi}$ , $g_1(n)$ is bounded in absolute value between $10^{-6}$ and $10^{-3}$ .

The proof of the first statement in Theorem 1 is given in Corollary 1, that of the second statement in Proposition 2, and the last statement in Lemma 3.
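Formula (3) can be checked numerically. The alternating binomial sum suffers catastrophic cancellation in floating point already for moderate n, so the sketch below (our own, not from the paper) evaluates it with exact rationals; for $d=2$ and fair splitting it reproduces $L_2=3$ and $L_3=13/3$, and $L_{100}/100$ is already close to $1/\log\!(2)\approx 1.4427$.

```python
from fractions import Fraction
from math import comb

def mean_cri(n, p):
    """Evaluate L_n from (3) exactly; p is a list of Fractions summing to 1."""
    d = len(p)
    # F-bar(k) = p_{k+1} + ... + p_d for k = 0, ..., d-2
    # (p[k:] in 0-indexed Python).
    Fbar = [sum(p[k:], Fraction(0)) for k in range(d - 1)]
    total = Fraction(1)
    for i in range(2, n + 1):
        num = (-1) ** i * (i - 1) * sum(f ** i for f in Fbar)
        den = 1 - sum(pj ** i for pj in p)
        total += comb(n, i) * num / den
    return total

p_fair = [Fraction(1, 2), Fraction(1, 2)]
```

Here `mean_cri(2, p_fair)` returns 3 and `mean_cri(3, p_fair)` returns 13/3, matching the values obtained from the recursion (2) directly.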

Remark 1. In applications, the important characteristic of a collision-resolution protocol (CRP) is given by its asymptotic throughput, which is given by $\lim_{n\to\infty}{n}/{L_n}$ . However, it is more convenient to work with $L_n/n$ from a mathematical perspective. Result-wise, there is no difference, as $L_n>0$ for all n.

We summarize the results for other important observables of the SICTA process in Table 1. We refer the reader to (22) and (24) for the formulas for the collisions and successes; see also Section 4. The proofs are very similar to those for the throughput, apart from the asymptotic leading term for the number of successes; see Lemma 2. Furthermore, we obtain a number of results regarding the mean delay of SICTA in steady state. We state them in Section 4.5, as they require more technical background.

Table 1. Summary of the results for the different observables of SICTA. See Section 4 for more details.

3. Background

3.1. Tree algorithms

The first tree algorithm was introduced in [Reference Capetanakis3]; it is known as the Capetanakis–Tsybakov–Mikhailov-type CRP, or the STA. The protocol addresses the classical RA problem where several users must transmit packets to an access point (AP) over a time-slotted shared multiple access channel with broadcast feedback. The most basic form of the algorithm is STA, which proceeds as follows. Assume that n packets are transmitted by n different users in a given slot.

  • If $n = 0$ , then the slot is idle.

  • If $n = 1$ , then there is only one packet in the slot (also called a singleton), and the packet can be successfully decoded.

  • If $n > 1$ , the signals of n different transmissions interfere with each other, and no packet can be decoded. This scenario is called a collision. The users must retransmit their packets according to the CRP.

Collision resolution protocol

At the end of every slot, the AP broadcasts the outcome of the slot, i.e., idle (0), success (1), or collision (e, where e stands for error), to all the users in the network. If the feedback is a collision, the n users independently split into d groups. The probability that a user joins group j is $p_j$ where $j \in \{1,\ldots,d\}$ , $d\ge 2$ , $p_j \in (0,1)$ , and $\sum_{j=1}^d p_j=1$ . In the next slot, all the users who chose the first group ( $j=1$ ) retransmit their packets. If this results in a collision once again, then the process continues recursively. Users who have chosen the $(j>1)$ th group observe the feedback. They wait until all users in the $(j-1)$ th group successfully transmit their packets to the AP. We can represent the progression of the CRP in terms of d-ary trees as shown in Figure 1. Here, we show an example with the initial number of collided users $n=4$ and $d=3$ . Each node on the tree represents a slot. The number inside the node shows the number of users that transmit in a given slot. The slot number is shown outside the node. After a collision node, the first group branches to the left of the tree, the second group branches in the middle, and the third group branches to the right. The number of slots needed from the first collision until the CRP is complete is known as the CRI.

The main performance parameter for tree algorithms is conditional throughput, defined as the ratio $n/L_n$ of the number of users n and the expected total number of slots in a CRI $L_n$ . In the example from Figure 1, it is $0.4$ packets/slot. Furthermore, the asymptotic throughput (as $n \rightarrow \infty$ ) is important for knowing the algorithm’s maximum stable throughput (MST). The MST gives the stability of the RA scheme for a given arrival rate of users, $\lambda\in \mathbb{R}^{+}$ . Users arrive according to a Poisson process with intensity $\lambda$ .

For example, the stability condition of the gated RA scheme is given by

(5) \begin{equation} \text{MST} = \frac{1}{\limsup_{n \rightarrow \infty} L_n / n} \geq \lambda . \end{equation}

We analyze the case of SICTA with a Poisson arrival rate $\lambda>0$ of new packets in Section 4.5 for $\lambda<\text{MST}$ .

3.2. Successive interference cancellation

In STA, collision signals are discarded at the receiver. A new method where the receiver saves collision signals and tries to resolve more packets per slot was introduced in [Reference Yu and Giannakis27]. Here, the receiver subtracts the signals of successfully decoded packets from the stored collision signals. This process is known as SIC. Let $Y_{s}$ be the signal of slot number s, and $X_{i}$ be the signal of the packet of user i. In the example from Figure 1, the receiver will save $Y_{1}$ and $Y_{2}$ . In an interference-limited channel, i.e. one for which noise can be effectively neglected, the received signal is the sum of all packets transmitted in that slot, so $Y_{1} = X_{1} + X_{2} + X_{3} + X_{4}$ and $Y_{2} = X_{1} + X_{2} + X_{3}$ . We keep the same slot indices as for STA in the diagram for legibility. Since all the users are treated the same, we assume that the first user to be resolved is user 1, then user 2, and so on until user 4. In slot 6, the receiver gets $Y_{6} = X_{1}$ . Since there is no interference in this slot, the receiver can decode the packet. It is then able to remove $X_{1}$ from $Y_{1}$ and $Y_{2}$ . Similarly, after slot 7 the receiver can remove $X_{2}$ from $Y_{1}$ and $Y_{2}$ . After removing $X_{1}$ and $X_{2}$ , only $X_{3}$ remains in $Y_{2}$ . Thus the packet from user 3 can be decoded. The receiver can then proceed to remove $X_{3}$ from $Y_{1}$ and decode $X_{4}$ without the need for user 4 to have transmitted a packet after the first slot. In this manner, the receiver can decode two packets after slot 7, resulting in a shorter CRI. We can easily see from the diagram that if all the signals from a particular node in the tree are removed, then all the remaining children of that node can be skipped. Another advantage of SICTA is that the rightmost branch of a node can always be skipped.
If the algorithm reaches the rightmost branch of a node and still has not decoded all the signals from that node, this rightmost branch will be a definite collision and can hence be skipped.
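The bookkeeping in this example can be made concrete with a toy sketch of our own, in which the physical signals are stood in for by hypothetical integers (an assumption for illustration only); in the interference-limited model above the received signal is the sum of the transmitted packets either way.

```python
# Toy model of the SIC bookkeeping for the example of Figure 1.
X = {1: 11, 2: 23, 3: 37, 4: 51}    # hypothetical packet "signals"
Y1 = X[1] + X[2] + X[3] + X[4]      # slot 1: all four users collide
Y2 = X[1] + X[2] + X[3]             # slot 2: users 1, 2, 3 collide

decoded = {1: X[1], 2: X[2]}        # slots 6 and 7 resolve users 1 and 2

# SIC: subtract the decoded packets from the stored collision signals.
decoded[3] = Y2 - decoded[1] - decoded[2]               # only X_3 is left in Y_2
decoded[4] = Y1 - decoded[1] - decoded[2] - decoded[3]  # then X_4 is left in Y_1

assert decoded[3] == X[3] and decoded[4] == X[4]
```

All four packets are recovered even though users 3 and 4 never retransmit after the first two slots, which is exactly the source of the shorter CRI.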

The asymptotic throughput of SICTA was shown (incorrectly) to be $({\ln d})/({d-1})$ in [Reference Yu and Giannakis27], achieved for fair splitting. Thus, SICTA with fair splitting was thought to be the only configuration that achieves the optimal asymptotic throughput of $\ln 2$ packets/slot. However, a premise in their analysis for $d > 2$ was shown to be wrong in [Reference Deshpande, Stefanović, Gürsu and Kellerer6]: the analysis in [Reference Yu and Giannakis27] assumed that only the rightmost branch can be skipped, and fails to consider a scenario for $d > 2$ where more than one child node can be skipped once all the signals in the parent node are resolved. In the example from Figure 1, [Reference Yu and Giannakis27] failed to consider that slot 9 would be skipped after all the signals in $Y_{1}$ are decoded after slot 7. The correction paper [Reference Deshpande, Stefanović, Gürsu and Kellerer6] did not provide the formal analysis but merely pointed out the mistake from [Reference Yu and Giannakis27]. However, it did provide simulation results indicating that a special biased distribution of splitting probabilities, where

\begin{equation*} p_{j} = \begin{cases} 0.5^{j} & j \in \{1,\ldots,d-1\}, \\[5pt] 0.5^{d-1} & j = d, \end{cases}\end{equation*}

achieved a throughput of $\log\!(2)\approx0.693$ packets/slot for all values of d. In this work, we formally prove this indication to be correct.
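As a quick numerical sanity check (a sketch of our own, not part of the paper's analysis), one can evaluate the leading term of $L_n/n$ from (4) for this biased distribution and confirm that its reciprocal equals $\log\!(2)$ for every d, whereas fair splitting with $d=3$ falls short:

```python
import math

def p_biased(d):
    # p_j = 2^{-j} for j < d and p_d = 2^{-(d-1)}, as displayed above.
    return [2.0 ** -min(j, d - 1) for j in range(1, d + 1)]

def leading_term(p):
    """Leading term of L_n/n from (4):
    sum_{k=0}^{d-2} F-bar(k) / (-sum_j p_j log p_j)."""
    d = len(p)
    num = sum(sum(p[k:]) for k in range(d - 1))  # F-bar(k) = p_{k+1}+...+p_d
    den = -sum(pj * math.log(pj) for pj in p)
    return num / den

for d in range(2, 7):
    assert abs(leading_term(p_biased(d)) - 1 / math.log(2)) < 1e-12
# Fair splitting with d = 3 gives a strictly larger leading term of
# L_n/n, i.e. a strictly smaller asymptotic throughput than log(2):
assert leading_term([1 / 3] * 3) > 1 / math.log(2)
```

For $d=3$ and fair splitting the leading term is $5/(3\ln 3)\approx 1.517$, i.e. a throughput of about $0.659$ packets/slot, strictly below $\log\!(2)\approx 0.693$.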

3.3. Related work

As mentioned before, tree algorithms were introduced by [Reference Capetanakis3]. A number of analytical results are due to Flajolet and Mathys, see [Reference Fayolle, Flajolet and Hofri9, Reference Mathys18, Reference Mathys and Flajolet19]. Delay analysis was done in [Reference Molle and Shih20]. Yu and Giannakis introduced SICTA in [Reference Yu and Giannakis27]. There have been several publications regarding SICTA; for example, in [Reference Andreev, Pustovalov and Turlikov1, Reference Peeters and Van Houdt23] variants of SICTA are considered and in [Reference Stefanović, Deshpande, Gürsu and Kellerer24, Reference Stefanović, Gürsu, Deshpande and Kellerer25] the case where $K>1$ packets can be decoded in each step (multi-packet reception) is examined. The case of windowed and free access was studied in [Reference Peeters and Van Houdt22]. Analysis of the depth of the resulting tree was carried out in [Reference Holmgren11, Reference Janson and Szpankowski12]. Recently, large-deviation analysis was applied to random access algorithms, see [Reference König and Kwofie13, Reference König and Shafigh15]. In these articles, the authors estimate the probability of rare events, such as large throughput deviations, from their expected mean.

4. Analysis

4.1. Derivation of the functional equations

Recall that given a vector of probabilities $p=(p_1,\ldots,p_d)$ , at each collision each user independently chooses a slot $j\in\{1,\ldots,d\}$ with probability $p_j$ . Let $I_j$ denote the number of users who have chosen the jth slot.

For a collision of n packets, we recall that the last non-skipped slot M for SICTA is defined in (1). The evolution of the CRI length $l_n$ of n collided users is then given by (2), which can be seen as follows: since the remaining slots $\{{I_{\textrm{M}+1}},\ldots,{I_d}\}$ hold at most one packet, they can be decoded from the original signal minus the decoded signals; see also [Reference Deshpande, Stefanović, Gürsu and Kellerer6]. Furthermore, the last slot can always be skipped, as it is the difference between the initial signal and the signals to the left.

Our first result is a functional equation for the moment-generating function for $l_n$ . For this, recall that

\begin{equation*} \mathbb{E}[z^{l_n}] = \sum_{k\ge 0}z^k\mathbb{P}(l_n=k) ,\end{equation*}

where we interpret this as a formal power series (see [Reference Drmota5, Chapter 2]) outside its region of convergence. It satisfies

\begin{equation*} \frac{\textrm{d}}{\textrm{d}z}\mathbb{E}[z^{l_n}] = \sum_{k\ge 1}kz^{k-1}\mathbb{P}(l_n=k) .\end{equation*}

Evaluating the derivative at $z=1$ gives the mean,

(6) \begin{equation} \frac{\textrm{d}}{\textrm{d}z}\mathbb{E}[z^{l_n}]\bigg|_{z=1} = \sum_{k\ge 1}k\mathbb{P}(l_n=k) = \mathbb{E}[l_n] .\end{equation}

Proposition 1. Define for $x,z\in \mathbb{C}$ the formal power series

\begin{equation*} \widetilde{Q}(x,z) = \sum_{n\ge 0}\frac{x^n}{n!}\mathbb{E}[z^{l_n}] . \end{equation*}

Then

(7) \begin{equation} \widetilde{Q}(x,z) = \prod_{i=1}^d \widetilde{Q}(xp_i,z) + (z-z^2)\sum_{k=0}^{d-2}(1+\overline{\textrm{F}}(k)x)\prod_{i=1}^k\widetilde{Q}(xp_i,z) , \end{equation}

where $\overline{\textrm{F}}(k)=\sum_{j=k+1}^d p_j$ . Furthermore, there exists $\delta>0$ such that $\widetilde{Q}(x,z)$ converges absolutely for all $x,z\in\mathbb{C}$ with $|z| < 1 + \delta$ .

Before embarking on the proof, we remark that in the literature, see for example [Reference Yu and Giannakis27], researchers work with $Q(x,z)=\textrm{e}^{-x}\widetilde{Q}(x,z)$ . As it simplifies the notation, we work with $\widetilde{Q}(x,z)$ for now and use Q(x, z) in the later parts of the article. From Proposition 1, we can obtain closed formulas for the mean, variance, and higher-order terms. We apply this for the mean. The proof of Proposition 1 is given after the proof of Corollary 1.

Corollary 1. For all $n\ge 0$ ,

(8) \begin{equation} L_n=\mathbb{E}[l_n] = 1 + \sum_{i=2}^n\binom{n}{i} \frac{(-1)^i(i-1)\sum_{k=0}^{d-2}\overline{\textrm{F}}(k)^i}{1-\sum_{j=1}^dp_j^i} . \end{equation}

Proof. Note that, by definition, $\widetilde{Q}(x,1)=\textrm{e}^x$ . Set $K(x)= ({\textrm{d}\widetilde{Q}}/{\textrm{d}z})(x,1)$ , which exists as $z=1$ is an inner point in the region of convergence of $\widetilde{Q}(x,z)$ , and is analytic. By (taking the derivative of) (7), we have

(9) \begin{equation} K(x) = \sum_{i=1}^d K(p_ix)\textrm{e}^{(1-p_i)x} - \sum_{k=0}^{d-2}(1+\overline{\textrm{F}}(k)x)\textrm{e}^{x\sum_{j=1}^kp_j} . \end{equation}

Set $Q(x,z)=\textrm{e}^{-x}\widetilde{Q}(x,z)$ . Define the Poisson generating function L(x) of $L_n$ as

(10) \begin{equation} L(x) = \textrm{e}^{-x}\sum_{n\ge 0}\frac{x^n}{n!}L_n =\frac{\textrm{d}Q}{\textrm{d}z}(x,1) = \textrm{e}^{-x}K(x), \end{equation}

where the penultimate equality holds on account of (6). Equation (9) now yields

\begin{equation*} L(x) = \sum_{i=1}^d L(p_ix) - \sum_{k=0}^{d-2}(1+\overline{\textrm{F}}(k)x)\textrm{e}^{-\overline{\textrm{F}}(k)x} . \end{equation*}

Using the expansion $L(x)=\sum_{n\ge 0}\alpha_n x^n$ gives, by comparing coefficients, for $n \ge 2$ ,

\begin{equation*} \alpha_n = \alpha_n\sum_{i=1}^dp_i^n + \frac{1}{n!}{(-1)^n(n-1)\sum_{k=0}^{d-2}\overline{\textrm{F}}(k)^n}. \end{equation*}

Hence,

(11) \begin{equation} \alpha_n = \frac{1}{n!}\frac{(-1)^n(n-1)\sum_{k=0}^{d-2}\overline{\textrm{F}}(k)^n}{1-\sum_{i=1}^dp_i^n}. \end{equation}

Noting that by (10) we have $\textrm{e}^{-x}\sum_{n\ge 0}x^n L_n/n! = \sum_{n\ge 0}x^n\alpha_n$ , by comparing coefficients again we then have

\begin{equation*} L_n = \sum_{i=0}^n\frac{n!}{(n-i)!}\alpha_i . \end{equation*}

By (2), we have $L_0=L_1=1$ , which gives $\alpha_0=1$ and $\alpha_1=0$ . Hence, we obtain (8).
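The coefficient extraction in this proof is easy to check mechanically. The following sketch (our own, with our own function names) builds $L_n$ from the $\alpha_i$ of (11) via $L_n=\sum_{i=0}^n \frac{n!}{(n-i)!}\alpha_i$ , using exact rationals, and reproduces the values implied by (8), e.g. $L_2=3$ for $d=2$ with fair splitting.

```python
from fractions import Fraction
from math import factorial

def alpha(i, p):
    """Coefficient alpha_i from (11), i >= 2; p is a list of Fractions
    summing to 1."""
    d = len(p)
    # F-bar(k) for k = 0, ..., d-2 (p[k:] in 0-indexed Python).
    Fbar = [sum(p[k:], Fraction(0)) for k in range(d - 1)]
    num = (-1) ** i * (i - 1) * sum(f ** i for f in Fbar)
    den = 1 - sum(pj ** i for pj in p)
    return num / den / factorial(i)

def L_via_coefficients(n, p):
    """L_n = sum_{i=0}^n n!/(n-i)! * alpha_i, with alpha_0 = 1, alpha_1 = 0."""
    coeffs = [Fraction(1), Fraction(0)] + [alpha(i, p) for i in range(2, n + 1)]
    return sum(Fraction(factorial(n), factorial(n - i)) * coeffs[i]
               for i in range(n + 1))

p_fair = [Fraction(1, 2), Fraction(1, 2)]
```

For instance, `L_via_coefficients(3, p_fair)` returns 13/3: here $\alpha_2=1$ and $\alpha_3=-4/9$ , so $L_3 = 1 + 6\cdot 1 + 6\cdot(-4/9) = 13/3$ .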

Remark 2. Note that for $d=2$ we obtain the same result as [Reference Yu and Giannakis27, (30)], as $\overline{\textrm{F}}(0)=1$ . Also note that by taking higher-order derivatives (with respect to z) in (7) we can obtain closed formulas for the variance of $l_n$ as well as higher moments. We leave that to the reader. The proof of Corollary 1 establishes the first claim in Theorem 1.

We now proceed to the proof of Proposition 1.

Proof of Proposition 1. The proof is split into three parts. We first use Lemma 4 to determine the radius of convergence. The main part of the proof consists of deriving a recursive formula for $\mathbb{E}[z^{l_n}]$ , where we need to do a case distinction for the value of M. Using the recursive expressions in (14) and (16) for $Q_n(z)$ , we then sum over all n, to obtain a functional equation for $\widetilde{Q}$ , making use of cancellation effects along the way.

By Lemma 4, there exist $C,\delta>0$ such that, for all z with $|z| < 1 + \delta$ , we have $|\mathbb{E}[z^{l_n}]| \le C^n|z|^n$ . This allows us to bound the moment-generating function

\begin{equation*} |\widetilde{Q}(x,z)| \le \sum_{n\ge 0}\frac{(C|xz|)^n}{n!} , \end{equation*}

which converges. The moment-generating function $Q_n(z)$ is defined as

\begin{equation*} Q_n(z) = \mathbb{E}[z^{l_n}] = \sum_{k=1}^d\mathbb{E}[z^{l_n}, \textrm{M}=k] \qquad \text{for } z \in \mathbb{C}. \end{equation*}

Note that for $k<d$ we can split

\begin{equation*} \mathbb{E}[z^{l_n},\textrm{M}=k] = \mathbb{E}\Bigg[z^{l_n},\textrm{M}=k,\sum_{j=1}^\textrm{M}{I_j}=n-1\Bigg] + \mathbb{E}\Bigg[z^{l_n},\textrm{M}=k,\sum_{j=1}^\textrm{M}{I_j}=n\Bigg] , \end{equation*}

while for $k=d$ we have $\{\textrm{M}=d\}=\big\{\textrm{M}=d,\sum_{j=1}^\textrm{M}{I_j}=n\big\}$ , and hence

\begin{equation*} \mathbb{E}[z^{l_n},\textrm{M}=d] = \mathbb{E}\Bigg[z^{l_n},\textrm{M}=d,\sum_{j=1}^\textrm{M}{I_j}=n\Bigg]. \end{equation*}

In order to facilitate the analysis, we set

\begin{equation*} \mathfrak{P}_n^{(d)} = \Bigg\{\mu\in\mathbb{N}^d_0 \colon \sum_{k=1}^d\mu_k=n\Bigg\}, \qquad \binom{n}{\mu}=\binom{n}{\mu_1,\ldots,\mu_d}=\frac{n!}{\mu_1!\cdots\mu_d!} . \end{equation*}

Given a probability vector $p=(p_1,\ldots,p_d)$ , we also introduce

(12) \begin{equation} p(\mu) = \prod_{j=1}^d p_j^{\mu_j} \quad \text{for }\mu\in\mathbb{N}^d . \end{equation}

We now do a case distinction with respect to the value of $\sum_{j=1}^\textrm{M}{I_j}$ .

For the case $\sum_{j=1}^\textrm{M}{I_j}=n$ : if $\textrm{M}=k$ and $\sum_{j=1}^k {I_j}=n$ , then ${I_k}$ cannot be zero or one, because otherwise M would be at most $k-1$ . Hence, for $k\le d$ , we expand using (2) and the Markov property:

(13) \begin{equation} \mathbb{E}\Bigg[z^{l_n}, \textrm{M}=k,\sum_{j=1}^\textrm{M} {I_j}=n\Bigg] = \sum_{\mu\in\mathfrak{P}_n^{(k)}}\binom{n}{\mu}p(\mu)\mathbf{1}\{\mu_k>1\}z^{\mathbf{1}\{k<d\}}\prod_{j=1}^kQ_{\mu_j}(z). \end{equation}

Note that $\mathbf{1}\{\mu_k>1\}$ can be written as $1-\mathbf{1}\{\mu_k=0\}-\mathbf{1}\{\mu_k=1\}$ , and furthermore that $\big\{\mu\in\mathfrak{P}_{n}^{(k)} \colon \mu_k=0\big\}$ is isomorphic to $\mathfrak{P}_{n}^{(k-1)}$ , and similarly $\big\{\mu\in\mathfrak{P}_{n}^{(k)} \colon \mu_k=1\big\}$ is isomorphic to $\mathfrak{P}_{n-1}^{(k-1)}$ . With this in mind, we rewrite (13) as

(14) \begin{align} & \sum_{\mu\in\mathfrak{P}_n^{(k)}}\binom{n}{\mu}p(\mu)\mathbf{1}\{\mu_k>1\}z^{\mathbf{1}\{k<d\}} \prod_{j=1}^kQ_{\mu_j}(z) \nonumber \\[5pt] & \quad = \sum_{\mu\in\mathfrak{P}_n^{(k)}}\binom{n}{\mu}p(\mu)z^{\mathbf{1}\{k<d\}}\prod_{j=1}^kQ_{\mu_j}(z) - \sum_{\mu\in\mathfrak{P}_n^{(k-1)}}\binom{n}{\mu}p(\mu)z^{1+\mathbf{1}\{k<d\}}\prod_{j=1}^{k-1}Q_{\mu_j}(z) \nonumber \\[5pt] & \qquad - p_k\sum_{\mu\in\mathfrak{P}_{n-1}^{(k-1)}}\binom{n}{\mu}p(\mu)z^{1+\mathbf{1}\{k<d\}} \prod_{j=1}^{k-1}Q_{\mu_j}(z). \end{align}

Note that the cases $\{\mu_k=0\}$ and $\{\mu_k=1\}$ give us an extra factor of z, as $Q_0(z)=Q_1(z)=z$ . The case $\{\mu_k=1\}$ gives an extra factor of $p_k^{\mu_k}=p_k$ .

For the case $\sum_{j=1}^\textrm{M}{I_j}=n-1$ , note that this implies $k<d$ given $\textrm{M}=k$ . Write $\textrm{F}(k)=\sum_{j=1}^k p_j$ and $\overline{\textrm{F}}(k)=1-\textrm{F}(k)$ for the cumulative distribution function induced by p. We obtain

(15) \begin{equation} \mathbb{E}\Bigg[z^{l_n}, \textrm{M}=k,\sum_{j=1}^\textrm{M} {I_j}=n-1\Bigg] = nz\overline{\textrm{F}}(k)\sum_{\mu\in\mathfrak{P}_{n-1}^{(k)}}\binom{n-1}{\mu}p(\mu)\mathbf{1}\{\mu_k>0\} \prod_{j=1}^kQ_{\mu_j}(z). \end{equation}

Indeed, if $\sum_{j=1}^\textrm{M} {I_j}=n-1$ , we have n choices to select the one packet that is placed in the slots $\{\textrm{M}+1,\ldots, d\}$ . Summing over all possible slots gives us a probability of $p_{k+1}+\cdots+p_d=\overline{\textrm{F}}(k)$ . We have $n-1$ packets left to distribute amongst the k slots, which gives the multinomial coefficient.

In the same way as (14), we expand (15) as

(16) \begin{multline} nz\overline{\textrm{F}}(k)\sum_{\mu\in\mathfrak{P}_{n-1}^{(k)}}\binom{n-1}{\mu}p(\mu)\mathbf{1}\{\mu_k>0\} \prod_{j=1}^kQ_{\mu_j}(z) \\[5pt] = nz\overline{\textrm{F}}(k)\Bigg(\sum_{\mu\in\mathfrak{P}_{n-1}^{(k)}}\binom{n-1}{\mu}p(\mu) \prod_{j=1}^kQ_{\mu_j}(z) - z\!\!\!\sum_{\mu\in\mathfrak{P}_{n-1}^{(k-1)}}\!\binom{n-1}{\mu}p(\mu)\prod_{j=1}^{k-1}Q_{\mu_j}(z)\Bigg), \end{multline}

where we have used $\mathbf{1}\{\mu_k>0\}=1-\mathbf{1}\{\mu_k=0\}$ and that $\big\{\mu\in\mathfrak{P}_{n-1}^{(k)}\colon\mu_k=0\big\}$ is isomorphic to $\mathfrak{P}_{n-1}^{(k-1)}$ .

Having the two recursive expressions (14) and (16) for $Q_n(z)$ at hand, the next step of the proof consists of summing over $n\ge 0$ and noticing cancellation.

On account of (2), by recalling that $l_0=l_1=1$ we have

\begin{equation*} \widetilde{Q}(x,z) = \sum_{n\ge 0}\frac{x^n}{n!}\mathbb{E}[z^{l_n}] = (1+x)z + \sum_{n\ge 2}\sum_{k=1}^d\frac{x^n}{n!}\mathbb{E}[z^{l_n},\textrm{M}=k]. \end{equation*}

We first examine the case $\textrm{M}=1$ , as it is a bit different from the rest. We have, for $n\ge 2$ ,

(17) \begin{equation} \mathbb{E}[z^{l_n},\textrm{M}=1] = p_1^nzQ_n(z) + np_1^{n-1}\overline{\textrm{F}}(1)zQ_{n-1}(z), \end{equation}

as for $\textrm{M}=1$ either n or $n-1$ packets must have picked the first slot. The first case has a probability of $p_1^n$ and the second of $np_1^{n-1}(1-p_1)=np_1^{n-1}\overline{\textrm{F}}(1)$ . Recall that $p_1+\overline{\textrm{F}}(1)=1$ . We hence get

\begin{align*} \sum_{n\ge 2}\frac{x^n}{n!}(p_1^nQ_n(z)+np_1^{n-1}\overline{\textrm{F}}(1)Q_{n-1}(z)) & = {\widetilde{Q}(p_1x,z) - z - zp_1x + x\overline{\textrm{F}}(1)[\widetilde{Q}(p_1x,z)-z]} \\[5pt] & = \widetilde{Q}(p_1x,z)(1+\overline{\textrm{F}}(1)x)-z(1+x) \end{align*}

by reindexing. Consider the first summand on the left-hand side of this equation. By employing an index shift, we obtain

\begin{equation*} \sum_{n\ge 2}\frac{x^n}{n!}p_1^nQ_n(z) = \Bigg(\sum_{n\ge 0}\frac{(xp_1)^n}{n!}Q_n(z)\Bigg) - z - zp_1x = \widetilde{Q}(p_1x,z)-z(1+p_1x). \end{equation*}

The second summand follows in the same fashion.

Now fix $k\in\{2,\ldots,d\}$ and consider the case $\textrm{M}=k$ . We begin with the case $\sum_{j=1}^k {I_j}=n$ , which yields, for $k\le d$ ,

\begin{equation*} \sum_{n\ge 2}\frac{x^n}{n!}\mathbb{E}\Bigg[z^{l_n},\textrm{M}=k,\sum_{j=1}^k{I_j}=n\Bigg] = \sum_{n\ge 0}\frac{x^n}{n!}\mathbb{E}\Bigg[z^{l_n},\textrm{M}=k,\sum_{j=1}^k{I_j}=n\Bigg], \end{equation*}

as, for $\textrm{M}\ge 2$ , we need to have $n\ge 2$ . By (13) and (14),

(18) \begin{align} \sum_{n\ge 2}\frac{x^n}{n!}\mathbb{E}\Bigg[z^{l_n}, \textrm{M}=k,\sum_{j=1}^k {I_j}=n\Bigg] & = z^{\mathbf{1}\{k<d\}}\prod_{j=1}^k\widetilde{Q}(p_jx,z) \nonumber \\[5pt] & \quad - z^{1+\mathbf{1}\{k<d\}}(1+p_kx)\prod_{j=1}^{k-1}\widetilde{Q}(p_jx,z), \end{align}

which we illustrate with the last term in (14): recall that for non-negative sequences $\{a_n^{(i)}\}_{i,n\ge 0}$ ,

(19) \begin{equation} \prod_{i=1}^k\Bigg(\sum_{n\ge 0}a_n^{(i)}\Bigg) = \sum_{n\ge 0}\sum_{\mu\in\mathfrak{P}_n^{(k)}}\prod_{i=1}^k a_{\mu_i}^{(i)}. \end{equation}

Hence, employing an index shift, we obtain

\begin{equation*} \sum_{n\ge 2}\frac{x^n}{n!}\,p_k\sum_{\mu\in\mathfrak{P}_{n-1}^{(k-1)}}\binom{n}{\mu}p(\mu)\prod_{j=1}^{k-1}Q_{\mu_j}(z) = p_kx\prod_{j=1}^{k-1}\widetilde{Q}(p_jx,z) , \end{equation*}

which accounts for the $p_kx$ part of the second term in (18); the factor $z^{1+\mathbf{1}\{k<d\}}$ , being independent of n, carries over unchanged. The other terms in (18) follow similarly. Summing the right-hand side of (18) from $k=2$ to d gives

(20) \begin{equation} \prod_{j=1}^d\widetilde{Q}(p_jx,z) - z(1+p_dx)\prod_{j=1}^{d-1}\widetilde{Q}(p_jx,z) + \sum_{k=2}^{d-1}\bigg(z\prod_{j=1}^k\widetilde{Q}(p_jx,z) - z^{2}(1+p_kx)\prod_{j=1}^{k-1}\widetilde{Q}(p_jx,z)\bigg). \end{equation}

Using (16), the case $2\le k<d$ and $\sum_{j=1}^k {I_j}=n-1$ gives, in a similar fashion,

\begin{equation*} \sum_{n\ge 2}\frac{x^n}{n!}\mathbb{E}\Bigg[z^{l_n},\textrm{M}=k,\sum_{j=1}^k{I_j}=n-1\Bigg] = xz\overline{\textrm{F}}(k)\prod_{j=1}^k\widetilde{Q}(p_jx,z) - xz^2\overline{\textrm{F}}(k)\prod_{j=1}^{k-1}\widetilde{Q}(p_jx,z). \end{equation*}

Summing the right-hand side of the above over all $k\in\{2,\ldots,d-1\}$ gives

(21) \begin{equation} \sum_{k=2}^{d-1}\bigg(xz\overline{\textrm{F}}(k)\prod_{j=1}^k\widetilde{Q}(p_jx,z) - xz^2\overline{\textrm{F}}(k)\prod_{j=1}^{k-1}\widetilde{Q}(p_jx,z)\bigg). \end{equation}

When adding (17), (20), and (21), we notice that the terms with $d-1$ products (of $\widetilde{Q}(\cdot)$ ) cancel. Hence, we obtain the functional relation

\begin{equation*} \widetilde{Q}(x,z) = \prod_{i=1}^d\widetilde{Q}(xp_i,z) + \sum_{k=0}^{d-2}(z-z^2)(1+\overline{\textrm{F}}(k)x)\prod_{i=1}^k\widetilde{Q}(xp_i,z). \end{equation*}

This concludes the proof of Proposition 1.

We remark that from (14) and (16) we could derive a recursive formula for $L_n$ such as [Reference Yu and Giannakis27, (18)]. However, this is of no use to our analysis as the closed-form expression is more direct.

We can also look at the number of collisions $c_n$ , which satisfies the recursion

(22) \begin{equation} c_n = \begin{cases} 0 & \text{if } n\in\{0,1\} , \\[5pt] \mathbf{1}\{M<d\} + \sum_{j=1}^M c_{I_j} & \text{if } n\ge 2 . \end{cases}\end{equation}

In this case, following similar steps as for the throughput, we obtain

(23) \begin{equation} C_n = \sum_{i=2}^n\binom{n}{i}\frac{(-1)^i(i-1)(1-p_d^i)}{1-\sum_{j=1}^d p_j^i}.\end{equation}

The result relies on the functional relation for the exponential moment-generating function

\begin{equation*} \widetilde{R}(x,z) = \sum_{n\ge 0}\frac{x^n}{n!}\mathbb{E}[z^{c_n}] ,\end{equation*}

given by

\begin{equation*} \widetilde{R}(x,z) = (1+x)(1-z) + (z-1)(xp_d+1)\prod_{i=1}^{d-1}\widetilde{R}(p_ix,z) + \prod_{j=1}^d \widetilde{R}(p_jx,z).\end{equation*}

We also give the formula for $S_n$ , the expected number of successes:

(24) \begin{equation} S_n = n + \sum_{i=2}^n\binom{n}{i} \frac{(-1)^{i-1}i\big(1-\sum_{k=1}^{d}p_{k}\overline{\textrm{F}}(k-1)^{i-1}\big)}{{1-\sum_{j=1}^dp_j^i}},\end{equation}

based on

\begin{equation*} s_n = \begin{cases} 0 & \text{if } n=0 , \\[5pt] 1 & \text{if } n=1 , \\[5pt] \sum_{j=1}^Ms_{I_j} & \text{if } n\ge 2 . \end{cases}\end{equation*}

Its exponential moment-generating function $ \widetilde{S}(x,z)$ satisfies

\begin{equation*} \widetilde{S}(x,z) = (1-p_1)x(z-1) + \prod_{i=1}^{d}\widetilde{S}(xp_i,z) + \sum_{k=1}^{d-1}xp_{k+1}(1-z)\prod_{i=1}^{k}\widetilde{S}(xp_i,z) .\end{equation*}

Finally, the number of idle slots,

\begin{equation*} i_n = \begin{cases} 1 & \text{if }n=0, \\[5pt] 0 & \text{if }n=1, \\[5pt] \sum_{j=1}^Mi_{I_j} & \text{if }n\ge 2 , \end{cases}\end{equation*}

satisfies $i_n=l_n-c_n-s_n$ and hence

(25) \begin{equation} I_n = 1 - n + \sum_{i=2}^n\binom{n}{i} \frac{(-1)^{i}\Big((i-1)\big(\sum_{k=1}^{d-2}\overline{\textrm{F}}(k)^i+p_d^i\big)+i\big(1-\sum_{k=1}^d p_k\overline{\textrm{F}}(k-1)^{i-1}\big)\Big)}{1-\sum_{j=1}^dp_j^i} .\end{equation}

Note that the above procedure can be carried out for any observable as long as it is additive as we move down the tree. This is a common occurrence in the analysis of random trees [Reference Drmota5]. A further example of such an additive observable would be the number of nodes with degree R or greater. This might be of interest in practice since the interference of many signals could be difficult to control in terms of noise.
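The closed-form expressions can be checked against the recursions for small n. The sketch below (plain Python, exact rational arithmetic) recomputes the four mean observables by conditioning on the allocation of the n packets to the d slots, and compares them with (23), with the closed form for $L_n$ recalled at the start of the proof of Proposition 2 below, and with the identity $i_n=l_n-c_n-s_n$. One caveat: the splitting rule for M is our reading of (13)–(15), namely that M is the first slot after which at most one packet remains (that packet being recovered by interference cancellation), so this is a sketch under that assumption.

```python
from fractions import Fraction
from math import comb, factorial

def compositions(n, d):
    # all ordered allocations (mu_1, ..., mu_d) of n packets to d slots
    if d == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in compositions(n - first, d - 1):
            yield (first,) + rest

def mean_observables(p, nmax):
    """Mean CRI length, collisions, successes, idle slots via the recursions."""
    d = len(p)
    L = [Fraction(1), Fraction(1)]   # l_0 = l_1 = 1
    C = [Fraction(0), Fraction(0)]   # cf. (22)
    S = [Fraction(0), Fraction(1)]
    I = [Fraction(1), Fraction(0)]
    for n in range(2, nmax + 1):
        lin = [Fraction(0)] * 4
        selfw = Fraction(0)          # weight of allocations that restart the problem
        for mu in compositions(n, d):
            prob = Fraction(factorial(n))
            for pj, m in zip(p, mu):
                prob *= Fraction(pj) ** m / factorial(m)
            # ASSUMED rule: M = first k with at most one packet left in slots k+1..d
            M = next(k for k in range(1, d + 1) if sum(mu[k:]) <= 1)
            skip = 1 if M < d else 0
            contrib = [Fraction(skip), Fraction(skip), Fraction(0), Fraction(0)]
            for j in range(M):
                if mu[j] == n:       # all packets in one slot: same subproblem again
                    selfw += prob
                else:
                    for t, arr in enumerate((L, C, S, I)):
                        contrib[t] += arr[mu[j]]
            for t in range(4):
                lin[t] += prob * contrib[t]
        for t, arr in enumerate((L, C, S, I)):
            arr.append(lin[t] / (1 - selfw))
    return L, C, S, I

def L_closed(p, n):   # closed form for L_n, recalled in the proof of Proposition 2
    d = len(p)
    out = Fraction(1)
    for k in range(d - 1):
        fbar = 1 - sum(p[:k], Fraction(0))
        out += sum(comb(n, i) * (-1) ** i * (i - 1) * fbar ** i
                   / (1 - sum(q ** i for q in p)) for i in range(2, n + 1))
    return out

def C_closed(p, n):   # (23)
    return sum(comb(n, i) * (-1) ** i * (i - 1) * (1 - p[-1] ** i)
               / (1 - sum(q ** i for q in p)) for i in range(2, n + 1))

for p in ([Fraction(1, 2)] * 2,
          [Fraction(1, 3), Fraction(2, 3)],
          [Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)]):
    L, C, S, I = mean_observables(p, 6)
    for n in range(2, 7):
        assert L[n] == L_closed(p, n)
        assert C[n] == C_closed(p, n)
        assert I[n] == L[n] - C[n] - S[n]
```

For instance, for $d=2$ and $p=(\frac12,\frac12)$ this gives $L_2=3$, $C_2=\frac32$, $S_2=1$, and $I_2=\frac12$.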

4.2. Asymptotic analysis

We extend the methods from [Reference Mathys and Flajolet19] to allow for asymptotic analysis in both the equal-split and biased cases. The first key identity for our method is

(26) \begin{equation} \frac{1}{1-\sum_{j=1}^d x_j} = \sum_{m\ge0}\sum_{\mu\in\mathfrak{P}_m^d}\binom{m}{\mu}\prod_{j=1}^d x_j^{\mu_j} \qquad \text{for } \sum_{j=1}^d|x_j|<1 ,\end{equation}

which follows from the geometric sum and the multinomial formula, as

\begin{equation*} \frac{1}{1-\sum_{j=1}^d x_j} = \sum_{m\ge 0}\Bigg(\sum_{j=1}^dx_j\Bigg)^m = \sum_{m\ge 0}\sum_{\genfrac{}{}{0pt}{}{\mu_1,\ldots,\mu_d\in\mathbb{N}_0}{\mu_1+\cdots+\mu_d=m}}m! \prod_{j=1}^d \frac{x_j^{\mu_j}}{\mu_j!} .\end{equation*}
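Identity (26) is easily probed numerically by truncating the sum over m; a short sketch (the truncation point and the test values of x are arbitrary choices):

```python
from math import factorial

def compositions(m, d):
    # all tuples (mu_1, ..., mu_d) of non-negative integers summing to m
    if d == 1:
        yield (m,)
        return
    for first in range(m + 1):
        for rest in compositions(m - first, d - 1):
            yield (first,) + rest

x = (0.2, 0.15, 0.1)                      # sum of |x_j| < 1, as required
lhs = 1.0 / (1.0 - sum(x))
rhs = 0.0
for m in range(41):                        # tail is O(0.45^41), negligible here
    for mu in compositions(m, len(x)):
        coeff = factorial(m)
        term = 1.0
        for xj, mj in zip(x, mu):
            coeff //= factorial(mj)        # builds the multinomial coefficient
            term *= xj ** mj
        rhs += coeff * term
assert abs(lhs - rhs) < 1e-10
```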

The other identity is stated in a separate lemma.

Lemma 1. For all $x\in\mathbb{C}$ and all $n\in\mathbb{N}$ ,

(27) \begin{equation} \sum_{i=2}^n\binom{n}{i}(-1)^i(i-1)x^i = 1 - (1-x)^{n-1}(1+(n-1)x) . \end{equation}

Proof. Recall that the binomial theorem gives

\begin{equation*} \sum_{i=1}^n\binom{n}{i}x^i = (1+x)^n - 1, \qquad (1+x)^{n-1}nx = \sum_{i=1}^n\binom{n}{i}ix^i , \end{equation*}

where the second formula follows from the first by taking the derivative and then multiplying both sides by x. Using these,

\begin{align*} \sum_{i=2}^n\binom{n}{i}(-1)^i(i-1)x^i & = \sum_{i=1}^n \binom{n}{i}(-1)^i(i-1)x^i \\[5pt] & = \sum_{i=1}^n \binom{n}{i}(-1)^iix^i-\sum_{i=1}^n \binom{n}{i}(-1)^{i}x^i \\[5pt] & = -(1-x)^{n-1}nx - (1-x)^n + 1 \\[5pt] & = 1 - (1-x)^{n-1}(1+(n-1)x) . \end{align*}
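Since (27) is a finite algebraic identity, it can be verified exactly, for instance with rational arithmetic:

```python
from fractions import Fraction
from math import comb

def lhs(n, x):
    # left-hand side of (27)
    return sum(comb(n, i) * (-1) ** i * (i - 1) * x ** i for i in range(2, n + 1))

def rhs(n, x):
    # right-hand side of (27)
    return 1 - (1 - x) ** (n - 1) * (1 + (n - 1) * x)

for n in range(1, 13):
    for x in (Fraction(1, 3), Fraction(-2, 5), Fraction(7, 2), Fraction(0)):
        assert lhs(n, x) == rhs(n, x)
```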

We now state the main result of this section.

Proposition 2.

  1. (i) If the equation

    (28) \begin{equation} p_1^{1/k_1} = \cdots = p_d^{1/k_d} \end{equation}
    has no positive integer solution, then
    \begin{equation*} \frac{L_n}{n} = \frac{\sum_{k=0}^{d-2}\overline{\textrm{F}}(k)}{-\sum_{j=1}^d p_j\log p_j} + o(1) . \end{equation*}
  2. (ii) If (28) does have a positive integer solution, then

    (29) \begin{equation} \frac{L_n}{n} = \frac{\sum_{k=0}^{d-2}\overline{\textrm{F}}(k)}{-\sum_{j=1}^d p_j\log p_j} + g_1(n) + o(1), \end{equation}
    where $g_1(n)$ is given in (39).

Note that Proposition 2 establishes the second claim in the proof of Theorem 1.

Proof. Recall that, due to (8),

\begin{equation*} {L_n} = 1 + \sum_{k=0}^{d-2}\sum_{i=2}^n\binom{n}{i} \frac{(-1)^i(i-1)\overline{\textrm{F}}(k)^i}{1-\sum_{j=1}^dp_j^i}. \end{equation*}

It suffices to calculate the asymptotic behavior for $k\in \{0,\ldots,d-2\}$ fixed in this equation and then sum over k. Hence, our goal is to calculate the asymptotic value of

(30) \begin{equation} \frac{1}{n}\sum_{i=2}^n\binom{n}{i}\frac{(-1)^i(i-1)\alpha^i}{1-\sum_{j=1}^dp_j^i} \qquad \text{for } \alpha = \overline{\textrm{F}}(k) . \end{equation}

We first apply (26) to rewrite the above as

\begin{equation*} \sum_{i=2}^n\binom{n}{i}\frac{(-1)^i(i-1)\alpha^i}{1-\sum_{j=1}^dp_j^i} = \sum_{m\ge 0}\sum_{\mu\in\mathfrak{P}_m^d}\binom{m}{\mu}\sum_{i=2}^n\binom{n}{i}(-1)^i(i-1)\alpha^ip(\mu)^i , \end{equation*}

where $p(\mu)$ was defined in (12). Now we can apply (27) to eliminate the sum over i; with a slight abuse of notation, we keep writing $L_n$ (and $L_{n+1}$ below) for the contribution of the fixed k, i.e. for n times the quantity in (30):

\begin{equation*} L_n = \sum_{m\ge 0}\sum_{\mu\in\mathfrak{P}_m^d}\binom{m}{\mu}(1-(1-\alpha p(\mu))^{n-1}[1+(n-1)p(\mu)\alpha]). \end{equation*}

In order to increase legibility, we switch from $n-1$ to n. We write $a_n\sim b_n$ if $a_n=b_n(1+o(1))$ as $n\to\infty$ for sequences $(a_n)_n$ , $(b_n)_n$ . We use the expansion $(1-x)^n = \textrm{e}^{-xn+{\mathcal O}(x^2n)}$ , which yields

\begin{equation*} {L_{n+1}} \sim \sum_{m\ge 0}\sum_{\mu\in\mathfrak{P}_m^d}\binom{m}{\mu}(1-\textrm{e}^{-\alpha n p(\mu)}[1+\alpha np(\mu)]), \end{equation*}

neglecting the $\textrm{e}^{{\mathcal O }(p(\mu)^2n)}$ term, which is negligible as $p(\mu)$ is mostly of order $n^{-1}$ ; see [Reference Knuth14, pp. 130–132] (or [Reference Mathys and Flajolet19, (3.51)]) for proof of this fact. We now write the above as

(31) \begin{equation} {L_{n+1}} \sim \sum_{m\ge 0}\sum_{\mu\in\mathfrak{P}_m^d}\binom{m}{\mu}f(\alpha n p(\mu)) \qquad \text{where } f(x)=1-\textrm{e}^{-x}(1+x). \end{equation}

For a function $f\colon (0,\infty)\to\mathbb{C}$ , its Mellin transform and the inverse are given by

(32) \begin{equation} {\mathcal M }[f;\;s] = \int_0^\infty x^{s-1}f(x)\,\textrm{d}x, \qquad f(x) = \frac{1}{2\pi\textrm{i}}\int_{c-\textrm{i}\infty}^{c+\textrm{i}\infty}x^{-s}{\mathcal M }[f;\;s]\textrm{d}s \end{equation}

for suitable $c\in\mathbb{R}$ [Reference Flajolet, Gourdon and Dumas10]. For $f(x)=1-\textrm{e}^{-x}(1+x)$ , $f(x)={\mathcal O }(x^2)$ as $x\to 0$ , and furthermore $f'(x)=x\textrm{e}^{-x}$ . Hence, using integration by parts in (32), we obtain

\begin{equation*} {\mathcal M }[f;\;s] = -\frac{1}{s}\int_0^\infty x^{s}f'(x)\,\textrm{d}x = -\frac{1}{s}\int_0^\infty x^{s+1}\textrm{e}^{-x}\,\textrm{d}x = -\frac{\Gamma(s+2)}{s} = -(s+1)\Gamma(s) \end{equation*}

as long as $0>\Re(s)>-2$ . Here, $\Gamma$ denotes the (complex) gamma function which satisfies $\Gamma(s+1)=s\Gamma(s)$ for all $s, s+1$ in its domain. We now apply (32) to (31) to obtain

(33) \begin{equation} L_{n+1} \sim \frac{-1}{2\pi\textrm{i}}\sum_{m\ge 0}\sum_{\mu\in\mathfrak{P}_m^d}\binom{m}{\mu} \int_{c-\textrm{i}\infty}^{c+\textrm{i}\infty}{p(\mu)^{-s}\alpha^{-s}n^{-s}(s+1)\Gamma(s)}\,\textrm{d}s \end{equation}

for some $c\in(-2,0)$ . Note that

\begin{equation*} \sum_{m\ge 0}\sum_{\mu\in\mathfrak{P}_m^d}\binom{m}{\mu}p(\mu)^{-s} = \sum_{m\ge 0}\sum_{\mu\in\mathfrak{P}_m^d}\binom{m}{\mu}\prod_{i=1}^d(p_i^{-s})^{\mu_i} = \frac{1}{1-\sum_{j=1}^d p_j^{-s}} , \end{equation*}

using (19) in the last step. If we could interchange integration and summation in (33), this would give

(34) \begin{equation} L_{n+1} \sim \frac{-1}{2\pi\textrm{i}}\int_{c-\textrm{i}\infty}^{c+\textrm{i}\infty} \frac{\alpha^{-s}n^{-s}(s+1)\Gamma(s)}{1-\sum_{j=1}^d p_j^{-s}}\,\textrm{d}s . \end{equation}
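Before making this rigorous, the Mellin transform computed above can be sanity-checked numerically: the sketch below evaluates ${\mathcal M}[f;s]$ by direct quadrature at the test point $s=-3/2$ inside the strip $-2<\Re(s)<0$ and compares it with $-(s+1)\Gamma(s)$. The substitution $x=\textrm{e}^u$ and the grid parameters are ad hoc choices.

```python
import math

def f(x):
    # f(x) = 1 - e^{-x}(1 + x), written via expm1 to avoid cancellation near 0
    return -math.expm1(-x) - x * math.exp(-x)

def mellin_f(s, lo=-40.0, hi=40.0, h=0.005):
    # int_0^inf x^{s-1} f(x) dx with x = e^u, i.e. int e^{su} f(e^u) du (trapezoid)
    steps = int(round((hi - lo) / h))
    total = 0.0
    for k in range(steps + 1):
        u = lo + k * h
        w = 0.5 if k in (0, steps) else 1.0
        total += w * math.exp(s * u) * f(math.exp(u))
    return total * h

s = -1.5
claimed = -(s + 1) * math.gamma(s)        # the value -(s+1)Gamma(s) derived above
assert abs(mellin_f(s) - claimed) < 1e-4 * abs(claimed)
```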

We now make the above rigorous. First, we need to choose $c>-2$ in (33). We first make sure that the function integrated in (34) has no poles on the line $c+\textrm{i}\mathbb{R}$ . Abbreviating $h(s)=\sum_{j=1}^d p_j^{-s}$ , we see that poles occur for $h(s)=1$ . Note that for $x,y\in\mathbb{R}$ , $h(x+\textrm{i}y)=1$ implies that $x\le -1$ , as $h(x+\textrm{i}y)=\sum_{j=1}^d p_j^{-x}\textrm{e}^{\textrm{i}y\log p_j}$ and thus $|h(x+\textrm{i}y)| \le \sum_{j=1}^d p_j^{-x}$ . We first examine the case $x=-1$ : any solution y of $h(-1+\textrm{i}y)=1$ satisfies

\begin{equation*} \sum_{j=1}^d p_j^{1}\textrm{e}^{\textrm{i}y\log p_j}=1 \quad \text{and} \quad \sum_{j=1}^d p_j^{1}|\textrm{e}^{\textrm{i}y\log p_j}| = 1 . \end{equation*}

Solutions other than $y=0$ only exist if (28) has a positive integer solution, see [Reference Mathys and Flajolet19, (3.67)]. Next, we show that there exists $\varepsilon>0$ such that there are no solutions $x+\textrm{i}y$ with $x\in [\!-\!1-\varepsilon,-1)$ . Suppose this was not the case; we could then choose two real sequences $(x_n)_n$ , $(y_n)_n$ such that $x_n\to -1$ and

(35) \begin{equation} h(x_n+\textrm{i}y_n) = \sum_{j=1}^d p_j^{-x_n}\textrm{e}^{\textrm{i}y_n\log p_j}=1 . \end{equation}

If (28) has a positive integer solution, then there must exist some $c\in (0,1)$ and some $k_1,\ldots,k_d\in\mathbb{N}$ such that $\log p_j = k_j\log c$ for all $j=1,\ldots, d$ . Hence, each map $y\mapsto \textrm{e}^{\textrm{i}y\log p_j}$ is periodic, with common period $2\pi/|\log c|$ . This implies that we can assume, without loss of generality, that $(y_n)_n$ is a bounded sequence. Thus, there exists a converging subsequence $y_{k_n}\to y_o$ . However, $s\mapsto\sum_{j=1}^d p_j^{-s}$ is holomorphic and non-constant in a neighborhood of $-1+\textrm{i}y_o$ and hence $\sum_{j=1}^dp_j^{-s}-1$ cannot have infinitely many zeros in that neighborhood [Reference Conway4, p. 79]. If (28) has no positive integer solution, we cannot assume that $(y_n)_n$ is bounded. However, as $\textrm{e}^{\textrm{i}y_n\log p_j}$ is bounded, we may assume that $(y_n)_n$ is such that, for each $j\in \{1,\ldots, d\}$ ,

\begin{equation*} \lim_{n\to\infty}\textrm{e}^{\textrm{i}y_n\log p_j} = a_j \end{equation*}

for some $a_j\in\mathbb{C}$ with $|a_j|=1$ [Reference Lawler and Limic16, p. 29]. By continuity, (35) gives $\sum_{j=1}^d p_ja_j=1$ , which implies that $a_j=1$ for all j. But this would imply that the function $\sum_{j=1}^dp_j^{-s}-1$ has infinitely many zeros in any neighborhood of $s=-1$ . This is a contradiction. Thus, we can choose an $\varepsilon>0$ such that $h(x+\textrm{i}y)\neq 1$ for all $x\in [\!-\!1-\varepsilon,-1)$ and $y\in\mathbb{R}$ .
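The location of these poles is easy to confirm numerically in the equal-split binary case $p_1=p_2=\frac12$, where (28) has the solution $k_1=k_2=1$ and $h(-1+\textrm{i}y)=\textrm{e}^{\textrm{i}y\log 2}$ equals 1 exactly for $y\in\frac{2\pi}{\log 2}\mathbb{Z}$. A minimal check:

```python
import math

p = [0.5, 0.5]                        # equal split; (28) holds with k_1 = k_2 = 1

def h(s):
    return sum(pj ** (-s) for pj in p)

for k in range(-5, 6):
    # pole of 1/(1 - h) on the line Re(s) = -1
    s = complex(-1.0, 2.0 * math.pi * k / math.log(2.0))
    assert abs(h(s) - 1.0) < 1e-9

for k in range(-5, 6):
    # between consecutive poles, h stays away from 1
    s = complex(-1.0, (2 * k + 1) * math.pi / math.log(2.0))
    assert abs(h(s) - 1.0) > 1.0
```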

Now choose $\varepsilon>0$ such that there are no poles of $(1-h(s))^{-1}$ in the set $[\!-\!1-\varepsilon,-1)\times\textrm{i}\mathbb{R}$ . Abbreviate $c=-1-\varepsilon$ from now on. Next, we show that we can interchange summation and integration, i.e.

\begin{equation*} \sum_{m\ge 0}\sum_{\mu\in\mathfrak{P}_m^d}\binom{m}{\mu} \int_{c-\textrm{i}\infty}^{c+\textrm{i}\infty}{p(\mu)^{-s}\alpha^{-s}n^{-s}(s+1)\Gamma(s)}\,\textrm{d}s = \int_{c-\textrm{i}\infty}^{c+\textrm{i}\infty}\frac{\alpha^{-s}n^{-s}(s+1)\Gamma(s)}{1-\sum_{j=1}^d p_j^{-s}}\,\textrm{d}s. \end{equation*}

For $s=c+\textrm{i}b=-1-\varepsilon+\textrm{i}b$ , we have

\begin{equation*} |{p(\mu)^{-s}\alpha^{-s}n^{-s}(s+1)\Gamma(s)}| \le {p(\mu)^{1+\varepsilon}\alpha^{1+\varepsilon}n^{1+\varepsilon}}|s+1||\Gamma(s)| . \end{equation*}

Recall the Stirling approximation [Reference Erdélyi and Bateman7, Section 1.18, (2)],

\begin{equation*} \Gamma(z) = (2\pi)^{1/2}\textrm{e}^{-z}z^{z-1/2}(1+{\mathcal O}(|z|^{-1})) , \end{equation*}

which is valid uniformly as $|z|\to\infty$ as long as $|\arg(z)|\le\pi-\delta$ for some arbitrary but fixed $\delta>0$ . Hence, we can bound $|\Gamma(x+\textrm{i}y)| = {\mathcal O}(|y|^{x-1/2}\textrm{e}^{-\pi|y|/2})$ (see also [Reference Erdélyi and Bateman7, Section 1.18, (6)]), which gives, for $s=c+\textrm{i}b$ , as $|b| \to \infty$ ,

\begin{equation*} |{p(\mu)^{-s}\alpha^{-s}n^{-s}(s+1)\Gamma(s)}| \le {p(\mu)^{1+\varepsilon}\alpha^{1+\varepsilon}n^{1+\varepsilon}}{\mathcal O}(|b|^{-\varepsilon-1/2}\textrm{e}^{-\pi|b|/2}) , \end{equation*}

which is integrable. Note that $|\Gamma(c+\textrm{i}b)|$ is bounded for b in a neighborhood around the origin. Furthermore, using (26),

\begin{equation*} \sum_{m\ge 0}\sum_{\mu\in\mathfrak{P}_m^d}\binom{m}{\mu}p(\mu)^{1+\varepsilon} = \frac{1}{1-\sum_{j=1}^d p_j^{1+\varepsilon}} < \infty . \end{equation*}

Hence, by the dominated convergence theorem, we can interchange summation and integration.

We now want to analyze

$$ L_{n+1} \sim \frac{-1}{2\pi\textrm{i}}\int_{c-\textrm{i}\infty}^{c+\textrm{i}\infty } \frac{\alpha^{-s}n^{-s}(s+1)\Gamma(s)}{1-\sum_{j=1}^d p_j^{-s}}\,\textrm{d}s $$

using the residue theorem. For this, take $(\beta_N)_N$ such that $\beta_N\in\mathbb{R}$ , $|\beta_N|\to\infty$ , and $(1-h(s))^{-1}$ has no poles at $\pm\textrm{i}\beta_N$ for all $N\in\mathbb{N}$ . We then choose the following contours $\gamma_N^{(i)}$ for $i=1,2,3,4$ :

\begin{align*} \gamma_N^{(1)} & = \{-1-\varepsilon-\textrm{i}\beta_N+\textrm{i}t,\, 0\le t\le 2\beta_N\} , \\[5pt] \gamma_N^{(2)} & = \{-1-\varepsilon+\textrm{i}\beta_N+t,\, 0\le t\le M\} , \\[5pt] \gamma_N^{(3)} & = \{-1-\varepsilon+M+\textrm{i}\beta_N-\textrm{i}t,\, 0\le t\le 2\beta_N\} , \\[5pt] \gamma_N^{(4)} & = \{-1-\varepsilon+M-\textrm{i}\beta_N-t,\, 0\le t\le M\} , \\[5pt] \gamma_N & = \gamma_N^{(1)}\cup \gamma_N^{(2)}\cup \gamma_N^{(3)}\cup\gamma_N^{(4)}; \end{align*}

this describes a rectangle with length M, height $2\beta_N$ , and lower-left corner at $-1-\varepsilon-\textrm{i}\beta_N$ , as in [Reference Knuth14, p. 132]. We now proceed analogously to that reference, and write

$$r(s)=\frac{\alpha^{-s}n^{-s}(s+1)\Gamma(s)}{1-\sum_{j=1}^d p_j^{-s}}$$

for the function we integrate. By the bounds on the Gamma function, the integral over $\gamma_N^{(2)}$ is at most ${\mathcal O}\big(\alpha^\varepsilon n^\varepsilon[\beta_N+1]\textrm{e}^{-\beta_N} \int_{-1-\varepsilon}^M|M+\textrm{i}\beta_N|^t\,\textrm{d}t\big)$ , and the integral over $\gamma_N^{(4)}$ can be bounded in the same way. For $\gamma_N^{(3)}$ , we use the fact that $\int_{-\infty}^\infty|\Gamma((-1-e)+M+\textrm{i}t)|t\,\textrm{d}t < \infty$ to bound

\begin{align*} \int_{\gamma_N^{(3)}}|r(s)|\,\textrm{d}s & \le {\mathcal O}\bigg((\alpha n)^{-M+1+\varepsilon} \int_{-\infty}^\infty|\Gamma((-1-\varepsilon)+M+\textrm{i}t)|(t+1)\,\textrm{d}t\bigg) \\[5pt] & \le {\mathcal O}(C_M(\alpha n)^{-M+1+\varepsilon}) \end{align*}

for $C_M$ some constant depending on $M>0$ . Hence, for $\gamma_N=\gamma_N^{(1)}\cup\gamma_N^{(2)}\cup\gamma_N^{(3)}\cup\gamma_N^{(4)}$ ,

(36) \begin{equation} \lim_{N\to\infty}\int_{\gamma_N}r(s)\,\textrm{d}s = {\mathcal O}(C_M(\alpha n)^{-M+1+\varepsilon}) + \lim_{N\to\infty}\int_{\gamma_N^{(1)}}r(s)\,\textrm{d}s . \end{equation}

Take $M>0$ such that $-M+1+\varepsilon<-1$ , which causes the first term on the right-hand side to be negligible as $n\to\infty$ . Let S be the set of poles of r(s) with real part $-1$ :

\begin{equation*} S = \Bigg\{z=-1+\textrm{i}y\text{ with }y\in\mathbb{R} \colon \sum_{j=1}^d p_j^{1}\textrm{e}^{\textrm{i}y\log p_j}=1\Bigg\}. \end{equation*}

Recall that all poles of r(s) have real part bounded from above by $-1$ and that there are no poles with real part in $[\!-\!1-\varepsilon,-1)$ . By the residue theorem,

\begin{equation*} \int_{\gamma_N}r(s)\,\textrm{d}s = \sum_{z\in S\colon|z|\le\beta_N}\textrm{Res}(r(s);\;s=z) , \end{equation*}

and hence, taking the limit $N\to\infty$ and using (34) and (36),

(37) \begin{equation} L_{n+1} \sim \sum_{z\in S}\textrm{Res}(r(s);\;s=z) . \end{equation}

Next, we calculate the residues. We first analyze the pole at $s=-1$ . We expand as $s\to -1$ :

\begin{equation*} 1 - \sum_{j=1}^d p_j^{-s} = 1 - \sum_{j=1}^d p_j\textrm{e}^{-(s+1)\log p_j} \sim 1 - \sum_{j=1}^d p_j(1-(s+1)\log p_j) = (s+1)\sum_{j=1}^d p_j\log p_j . \end{equation*}

Recall that $(s+1)\Gamma(s)\sim -1$ as $s\to -1$ [Reference Erdélyi and Bateman7, Section 1.1, after (8)]. Therefore, as $s\to -1$ ,

\begin{equation*} r(s) = \frac{\alpha^{-s}n^{-s}(s+1)\Gamma(s)}{1-\sum_{j=1}^d p_j^{-s}} \sim -\frac{\alpha n}{(s+1)\sum_{j=1}^d p_j\log p_j} . \end{equation*}

Hence, the residue of the integrand is given by

\begin{equation*} \textrm{Res}(r(s);\;s=-1) = n\frac{\alpha}{-\sum_{j=1}^d p_j\log p_j} . \end{equation*}

Recall that if (28) has no positive integer solution, $s=-1$ is the only pole [Reference Mathys and Flajolet19, (3.67)] and hence the residue theorem gives

\begin{equation*} \int_{\gamma_N}r(s)\,\textrm{d}s = n\frac{\alpha}{-\sum_{j=1}^d p_j\log p_j} . \end{equation*}

Letting $N\to\infty$ gives

\begin{equation*} \frac{-1}{2\pi\textrm{i}}\int_{c-\textrm{i}\infty}^{c+\textrm{i}\infty} \frac{\alpha^{-s}n^{-s}(s+1)\Gamma(s)}{1-\sum_{j=1}^d p_j^{-s}}\,\textrm{d}s = n\frac{\alpha}{-\sum_{j=1}^d p_j\log p_j} , \end{equation*}

which concludes the proof in that case.

If (28) has positive integer solutions, then there are countably many poles. Recall that the poles with real part $-1$ form the set S, so that the poles other than $s=-1$ are given by $S\setminus\{-1\}$ . We then have

\begin{equation*} \frac{-1}{2\pi\textrm{i}}\int_{c-\textrm{i}\infty}^{c+\textrm{i}\infty} \frac{\alpha^{-s}n^{-s}(s+1)\Gamma(s)}{1-\sum_{j=1}^d p_j^{-s}}\,\textrm{d}s = n\frac{\alpha}{-\sum_{j=1}^d p_j\log p_j} + nf_1(n,\alpha) , \end{equation*}

where, as before, the residue theorem in (37) gives

(38) \begin{equation} f_1(n,\alpha) = \sum_{-1+\textrm{i}y\in S\setminus\{-1\}} \frac{\alpha^{1-\textrm{i}y}n^{1-\textrm{i}y}\Gamma(-1+\textrm{i}y){\textrm{i}y}}{-\sum_{j=1}^d p_j\log p_j} . \end{equation}

By substituting $\alpha=\overline{\textrm{F}}(k)$ and taking the sum over k in (30), we get

\begin{equation*} \frac{L_n}{n} = \frac{\sum_{k=0}^{d-2}\overline{\textrm{F}}(k)}{-\sum_{j=1}^d p_j\log p_j} + g_1(n)+o(1) , \end{equation*}

with, recalling (38),

(39) \begin{equation} g_1(n) = \sum_{k=0}^{d-2}f_1(n,\overline{\textrm{F}}(k)). \end{equation}

This completes the proof of Proposition 2.
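Proposition 2 can be probed numerically. The sketch below evaluates the closed form for $L_n$ recalled above with exact rational arithmetic (the alternating binomial sum is numerically unstable in floating point) for $p^\textrm{bi}$ with $d=2,3,4$, checking both the limit $1/\log 2$ and the exact d-independence claimed via the branch-merging argument in Section 4.3 below; the tolerance at $n=200$ is an ad hoc choice.

```python
from fractions import Fraction
from math import comb, log

def L_n(p, n):
    # L_n = 1 + sum_{k=0}^{d-2} sum_{i=2}^n C(n,i)(-1)^i (i-1) Fbar(k)^i / (1 - sum_j p_j^i)
    d = len(p)
    out = Fraction(1)
    for k in range(d - 1):
        fbar = 1 - sum(p[:k], Fraction(0))
        for i in range(2, n + 1):
            out += (comb(n, i) * (-1) ** i * (i - 1) * fbar ** i
                    / (1 - sum(q ** i for q in p)))
    return out

def p_bi(d):
    return [Fraction(1, 2 ** min(j, d - 1)) for j in range(1, d + 1)]

# the mean CRI length agrees exactly across d for the split p_bi ...
for n in (2, 3, 5, 10, 25):
    vals = {d: L_n(p_bi(d), n) for d in (2, 3, 4)}
    assert vals[2] == vals[3] == vals[4]

# ... and L_n / n approaches 1/log(2), the optimal value from Theorem 1
approx = float(L_n(p_bi(2), 200)) / 200
assert abs(approx - 1 / log(2)) < 0.05
```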

For the bounds on $g_1(n)$ in the case $d=2$ and $p_1=p_2$ , we refer the reader to [Reference Mathys and Flajolet19, Table 1].

Similarly, one finds that

(40) \begin{equation} \frac{C_n}{n} = \frac{1-p_d}{-\sum_{j=1}^dp_j\log\!(p_j)} + g_2(n) + o(1) ,\end{equation}

where $g_2(n) = f_1(n,1-p_d)$ . To obtain the asymptotic success rate, more work is needed. We sketch the main steps and leave the rest to the reader.

Lemma 2. For $d\ge 2$ ,

(41) \begin{equation} \frac{S_n}{n} = 1 - \frac{\sum_{k=2}^d p_k\log\overline{\textrm{F}}(k-1)}{\sum_{j=1}^d p_j\log p_j} + g_3(n) + o(1) \qquad\textit{as } n\to \infty , \end{equation}

where $g_3$ is given in (42).

Proof. Starting from (24), we can write

\begin{equation*} p_{k}\overline{\textrm{F}}(k-1)^{i-1} = \frac{p_k}{\overline{\textrm{F}}(k-1)}\overline{\textrm{F}}(k-1)^i = \frac{p_k}{\overline{\textrm{F}}(k-1)}q_k^i , \end{equation*}

where we write $\overline{\textrm{F}}(k-1)=q_k$ to keep the ensuing formulas shorter.

Using the expansion as in the proof of Proposition 2, we get

\begin{equation*} S_n \sim n + \sum_{m\ge 0}\sum_{\mu\in\mathfrak{P}_m^d}\binom{m}{\mu}f_2(np(\mu)), \quad\text{where } f_2(x) = \sum_{k=1}^d p_kx(\textrm{e}^{-x}-\textrm{e}^{-xq_k}) . \end{equation*}

Note that $f_2(x) \sim -x^2\sum_{k=2}^d p_k \textrm{F}(k-1)$ as $x\to 0$ . Here, recall that $\textrm{F}(i)=\sum_{j=1}^i p_j$ . The above expansion gives that, for $\Re(s)>-2$ , the Mellin transform of $f_2$ is well defined and equals

\begin{equation*} {\mathcal M}[f_2;\;s] = \Gamma(s+1)\sum_{k=2}^d p_k(1-q_k^{-1-s}) . \end{equation*}

Furthermore, ${\mathcal M}[f_2;\;s]$ has a removable singularity at $s=-1$ :

\begin{equation*} {\mathcal M}[f_2;\;s] \sim \sum_{k=2}^d p_k\log\overline{\textrm{F}}(k-1)\qquad\text{as }s\to -1 , \end{equation*}

where we used that $\Gamma(s)s\sim 1$ as $s\to 0$ [Reference Erdélyi and Bateman7, Section 1.1, after (8)], as well as $\overline{\textrm{F}}(k-1)=q_k$ . Hence, using (32),

\begin{equation*} \sum_{m\ge 0}\sum_{\mu\in\mathfrak{P}_m^d}\binom{m}{\mu}f_2(np(\mu)) = \frac{1}{2\pi\textrm{i}}\int_{c-\textrm{i}\infty}^{c+\textrm{i}\infty} \frac{n^{-s}\Gamma(s+1)\sum_{k=2}^d p_k(1-q_k^{-1-s})}{1-\sum_{j=1}^d p_j^{-s}}\,\textrm{d}s . \end{equation*}

From this, using the residue theorem as in the proof of Proposition 2,

\begin{equation*} \frac{S_n}{n} = 1 - \frac{\sum_{k=2}^d p_k\log\overline{\textrm{F}}(k-1)}{\sum_{j=1}^d p_j\log p_j} + g_3(n) + o(1) , \end{equation*}

where

(42) \begin{equation} g_3(n) = \sum_{-1+\textrm{i}y\in S\setminus\{-1\}}\frac{{\mathcal M}[f_2;-\!1+\textrm{i}y]}{-\sum_{j=1}^d p_j\log p_j} , \end{equation}

unless (28) has no positive integer solution, in which case $g_3$ is equal to zero.
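For the split $p^\textrm{bi}$ in the binary case $d=2$, the success recursion can be iterated exactly, and the mean number of success slots matches the limit $\frac12$ from (45) below; in fact, in this computation $S_n$ equals $n/2$ exactly for $n\ge2$. As before, the rule for M — the first slot after which at most one packet remains — is our reading of (13)–(15), so this is a sketch under that assumption.

```python
from fractions import Fraction
from math import comb

S = [Fraction(0), Fraction(1)]            # s_0 = 0, s_1 = 1
half = Fraction(1, 2)
for n in range(2, 41):
    lin = Fraction(0)
    selfw = Fraction(0)
    for i in range(n + 1):                # i packets in slot 1, n - i in slot 2
        prob = comb(n, i) * half ** n
        if n - i <= 1:                    # M = 1: a slot-2 packet is SIC-recovered
            sub = [i]
        else:                             # M = 2: both slots are processed
            sub = [i, n - i]
        for m in sub:
            if m == n:
                selfw += prob             # all packets in one slot: same subproblem
            else:
                lin += prob * S[m]
    S.append(lin / (1 - selfw))
    assert S[n] == Fraction(n, 2)         # exactly n/2 here, consistent with (45)
```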

We can use the results for $L_n$ , $C_n$ , and $S_n$ to obtain

(43) \begin{equation} \frac{I_n}{n} = \frac{\sum_{k=1}^{d-2}\overline{\textrm{F}}(k)+p_d-\sum_{k=2}^d p_k\log \overline{\textrm{F}}(k-1)} {-\sum_{j=1}^d p_j\log p_j} - 1 + g_4(n) + o(1) ,\end{equation}

where $g_4(n)=g_1(n)-g_2(n)-g_3(n)$ ; see (39) for a definition of $g_1$ , (42) for a definition of $g_3$ , and note that $g_2$ is defined just after (40).

4.3. Minimization

In this section we calculate the value of p that maximizes the throughput, and we discuss the resulting collision, success, and idle rates.

Recall that $\overline{\textrm{F}}(k)=1-\sum_{j=1}^k p_j$ . To achieve the maximum throughput, we want to minimize the main term in (29), i.e.,

\begin{equation*} \frac{\sum_{k=0}^{d-2}\overline{\textrm{F}}(k)}{-\sum_{j=1}^d p_j\log p_j} .\end{equation*}

We do this in the next lemma.

Lemma 3. Let $p^\textrm{bi}\in(0,1)^d$ (first defined in Theorem 1) be given by $p^\textrm{bi}_j=2^{-\min\{j,d-1\}}$ for $j=1,\ldots,d$ . Then the function

\begin{equation*} p \mapsto \frac{\sum_{k=0}^{d-2}\overline{\textrm{F}}(k)}{-\sum_{j=1}^d p_j\log p_j} \end{equation*}

attains its minimum at $p=p^\textrm{bi}$ . Furthermore, at $p^\textrm{bi}$ , we have

\begin{equation*} \frac{\sum_{k=0}^{d-2}\overline{\textrm{F}}(k)}{-\sum_{j=1}^d p_j\log p_j}\Bigg|_{p=p^\textrm{bi}} = \frac{1}{\log\!(2)} . \end{equation*}

No other minima exist besides $p^\textrm{bi}$ .

Note that Lemma 3 establishes the final claim in Theorem 1, and also confirms the prediction from [Reference Deshpande, Stefanović, Gürsu and Kellerer6].

Proof. We write $N=\sum_{k=0}^{d-2}\overline{\textrm{F}}(k)$ and $D=-\sum_{j=1}^d p_j\log p_j$ , so that

\begin{equation*} \frac{\sum_{k=0}^{d-2}\overline{\textrm{F}}(k)}{-\sum_{j=1}^d p_j\log p_j} = \frac{N}{D} , \end{equation*}

in order to keep the ensuing equations shorter. Suppose that $\mu\in\mathbb{R}$ is our Lagrange multiplier from the Lagrange equation

\begin{equation*} \mu\frac{\textrm{d}}{\textrm{d}p_i}\Bigg({-}1+\sum_{j=1}^dp_j\Bigg) = \frac{\textrm{d}}{\textrm{d}p_i}\frac{N}{D} ,\qquad i=1,\ldots,d . \end{equation*}

We then obtain, for $i\le d-2$ ,

(44) \begin{equation} \mu = \frac{\textrm{d}}{\textrm{d}p_i}\frac{N}{D} = \frac{-D(d-1-i) + N(1+\log p_i)}{D^2} \end{equation}

as the parameter $p_i$ appears in $(d-1-i)$ summands in the sum $\sum_{k=0}^{d-2}\overline{\textrm{F}}(k)$ . However, the parameters $p_{d-1}$ and $p_d$ do not appear in the numerator and hence, for $i=d-1,d$ ,

\begin{equation*} \mu = \frac{\textrm{d}}{\textrm{d}p_i}\frac{N}{D} = \frac{N(1+\log p_i)}{D^2} . \end{equation*}

This implies that $p_{d-1}=p_d$ . Note that (44) also holds true for $d-1$ . Using the two equations for $\mu$ and multiplying by $D^2$ shows that, for $1\le i<j<d$ ,

\begin{equation*} D(j-i) = N\log\frac{p_i}{p_j} . \end{equation*}

Hence, for the above choice of i, j,

\begin{equation*} \frac{N}{D} = \frac{j-i}{\log\!({p_i}/{p_j})} . \end{equation*}

Choosing $j=i+1$ , we obtain that, for some $r>0$ , ${p_{j+1}}/{p_j} = r$ for all $j<d-1$ ; combined with $p_{d-1}=p_d$ , the normalization $\sum_jp_j=1$ , and the relation ${N}/{D}=1/\log\!(1/r)$ , this is only consistent with $r=\frac12$ , and hence $p_j=2^{-j}$ for all $j<d$ . Write $p^\textrm{bi}\in (0,1)^d$ for the above distribution, given by $p^\textrm{bi}_j=2^{-\min\{j,d-1\}}$ ; for example, $p^\textrm{bi}=\big(\frac{1}{2},\frac{1}{2}\big)$ (here $d=2$ ) and $p^\textrm{bi}=\big(\frac{1}{2},\frac{1}{4},\frac{1}{8},\frac{1}{8}\big)$ (here $d=4$ ).

We have, for $p=p^\textrm{bi}$ ,

\begin{equation*} N=\sum_{k=0}^{d-2}\overline{\textrm{F}}(k) = \sum_{k=0}^{d-2}2^{-k}=2-2^{-d+2}. \end{equation*}

For the denominator, we obtain

\begin{equation*} D = -\sum_{j=1}^d p_j\log p_j = \log\!(2)\Bigg(\sum_{j=1}^{d-1}j2^{-j}+(d-1)2^{-d+1}\Bigg) = \log\!(2)(2-2^{-d+2}), \end{equation*}

where we have used the formula $\sum_{j=1}^{m}j2^{-j}=2-(m+2)2^{-m}$ in the last step. The two equations above imply that, for the throughput-maximizing distribution,

\begin{equation*} \frac{L_n}{n} = \frac{1}{\log\!(2)} + g_1(n) + o(1) , \end{equation*}

i.e. the throughput $n/L_n$ has the leading term $\log\!(2)$ in its asymptotics. This concludes the proof.

Alternatively, we can use the following inductive argument for why $L_n$ remains constant for $p^\textrm{bi}$ as $d\ge 2$ varies: For $d=3$ , we can combine the two $\frac14$ -weighted branches into one. As the $\frac12$ -weighted branch has the same law as the one for $d=2$ , and $L_n$ is additive in the branches, this shows that $L_n$ is the same for $d=2$ and $d=3$ , given $p^\textrm{bi}$ as the splitting probability. We can then inductively carry this over to higher values of d.
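A quick floating-point check of Lemma 3, evaluating $N/D$ at $p^\textrm{bi}$ for several d and at randomly perturbed distributions (fixed seed; the perturbation size is an arbitrary choice):

```python
import math
import random

def ratio(p):
    # N/D = sum_{k=0}^{d-2} Fbar(k) / (-sum_j p_j log p_j)
    d = len(p)
    N = sum(1.0 - sum(p[:k]) for k in range(d - 1))
    D = -sum(q * math.log(q) for q in p)
    return N / D

def p_bi(d):
    return [2.0 ** -min(j, d - 1) for j in range(1, d + 1)]

target = 1.0 / math.log(2.0)
for d in range(2, 7):
    assert abs(ratio(p_bi(d)) - target) < 1e-12

rng = random.Random(0)
for d in (2, 3, 5):
    for _ in range(300):
        q = [x * math.exp(0.1 * rng.uniform(-1.0, 1.0)) for x in p_bi(d)]
        s = sum(q)
        q = [x / s for x in q]
        assert ratio(q) >= target - 1e-9   # p_bi minimizes N/D
```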

For $p^\textrm{bi}$ , we also obtain, using (40) and (41),

(45) \begin{equation} \frac{C_n}{n} \sim \frac{1}{2\log\!(2)} + g_2(n), \qquad \frac{S_n}{n} \sim \frac{1}{2} + g_3(n) ,\end{equation}

independently of d, the number of slots.
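The d-independence of the collision rate in (45) can be confirmed directly from the leading term in (40):

```python
import math

def p_bi(d):
    return [2.0 ** -min(j, d - 1) for j in range(1, d + 1)]

for d in range(2, 9):
    p = p_bi(d)
    # leading term of C_n / n from (40): (1 - p_d) / (-sum_j p_j log p_j)
    coll = (1.0 - p[-1]) / (-sum(q * math.log(q) for q in p))
    assert abs(coll - 1.0 / (2.0 * math.log(2.0))) < 1e-12
```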

Note that we can minimize $C_n$ by setting $p_i=0$ for $i<d$ and $p_d=1$ , which gives $C_n=0$ . However, this is not a sensible choice, as the algorithm will never terminate.

4.4. Collisions versus throughput

As we have seen in the previous section, the throughput-maximizing distribution does not minimize collisions for SICTA. We show (numerically) that a small reduction in throughput can lead to a large reduction in the number of collisions.

For $p=p^\textrm{bi}\in\mathbb{R}^d$ , the previous section showed that maximal throughput has the leading asymptotic term ${\log\!(2)}\approx 0.69$ . It was also shown that, for $p=p^\textrm{bi}$ , we have an average of $(2\log\!(2))^{-1}\approx 0.72$ collisions per packet. We now show that by choosing p different from $p^\textrm{bi}$ we can reduce the average number of collisions, while only suffering a small reduction in throughput. For example, a 20% reduction in optimal throughput (from ${\log\!(2)}\approx 0.69$ down to $0.8\times{\log\!(2)}\approx 0.55$ ) allows for a 39% reduction in the number of collisions, from $(2\log\!(2))^{-1}$ collisions per packet down to roughly $0.44$ collisions per packet. In Figure 2 we have plotted the minimal achievable collision rate, given a throughput reduction of at most x percent, where x ranges from 0% to 20%, i.e. a throughput ranging from ${\log\!(2)}$ to $0.8\log\!(2)$ .

Figure 2. The minimal obtainable collision rate, constrained by achieving a certain throughput rate. The figure was obtained numerically using a standard solver for constrained non-linear optimization problems. $p^\textrm{bi}$ was used as the initial value.

Our numerical method is constructed as follows. We use (40) for the asymptotic leading term of $C_n/n$ , given by

$$\frac{1-p_d}{-\sum_{j=1}^dp_j\log\!(p_j)} \quad\text{and}\quad\frac{-\sum_{j=1}^d p_j\log p_j}{\sum_{k=0}^{d-2}\overline{\textrm{F}}(k)}$$

the latter being the leading term of $n/L_n$ ; see (29). We then use a multivariable solver to minimize the function

$$p\mapsto \frac{1-p_d}{-\sum_{j=1}^dp_j\log\!(p_j)}$$

over $p\in (0,1)^d$ subject to $\sum_{j=1}^dp_j=1$ and

$$\frac{-\sum_{j=1}^d p_j\log p_j}{\sum_{k=0}^{d-2}\overline{\textrm{F}}(k)}\ge (1-x) \log\!(2),$$

which enforces a throughput reduction of at most $x\%$ . In Figure 2 we let x range from 0 to $0.2$ . The initial value of p for the multivariable solver was chosen as $p=p^\textrm{bi}$ .

The graph in Figure 2 does not change as we vary the number of branches d.
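As an illustration of this optimization (a sketch, not the constrained solver used for Figure 2), the case $d=2$ can be handled with a plain grid search: the feasible set is one-dimensional, $p=(p_1,1-p_1)$ , and, since $\overline{\textrm{F}}(0)=\sum_j p_j=1$ , the throughput term reduces to the entropy $-\sum_j p_j\log p_j$ .

```python
import math

def entropy(p):
    """-sum_j p_j log(p_j), natural logarithm."""
    return -sum(q * math.log(q) for q in p if q > 0)

def collision_rate(p):
    """Leading term of C_n/n from (40): (1 - p_d) / (-sum_j p_j log p_j)."""
    return (1 - p[-1]) / entropy(p)

# d = 2: the throughput leading term reduces to entropy(p), as F-bar(0) = 1.
target = 0.8 * math.log(2)   # allow at most a 20% throughput reduction
feasible = [
    (p1, 1 - p1)
    for p1 in (i / 10000 for i in range(1, 10000))
    if entropy((p1, 1 - p1)) >= target
]
best = min(collision_rate(p) for p in feasible)

print(round(best, 2), round(1 / (2 * math.log(2)), 2))  # ~0.44 vs ~0.72
```

The minimum is attained at the boundary of the throughput constraint, recovering the roughly $0.44$ collisions per packet quoted above.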

4.5. Delay analysis

In this section we look at SICTA with gated access. We give recursive formulas that allow for an approximation of the mean delay, as well as the transition matrix of the CRI lengths.

We now assume that packets arrive at random times according to a Poisson process with rate $\lambda>0$ . Packets wait and accumulate until the algorithm has resolved the previous collision. We define $\{c_{k}\}_{k=0}^\infty$ to be the (random) sequence where $c_k$ is the length of the kth CRI. Its randomness is twofold: it stems from the splitting decisions within the CRI itself and from the Poisson arrivals of new packets. Let $\{s_{k+1}\}_{k=0}^\infty$ be the sequence where $s_{k+1}$ is the number of packets arriving during the kth CRI. If we condition on $c_k=i$ , then $s_{k+1}$ is $\textrm{Poi}(\lambda i)$ distributed. Hence, $\{c_{k}\}_{k=0}^\infty$ is a Markov chain. Let $\pi=\{\pi_i\}_i$ be the invariant distribution, which exists for $\lambda<\textrm{MST}$ , since the drift $D_i$ is given by

\begin{equation*} D_i = \mathbb{E}[c_{k+1}-c_k\mid c_k=i] = \sum_{m\ge 0}\textrm{e}^{-\lambda i}\frac{(\lambda i)^m}{m!}(L_m-i) = i(\mathbb{E}_{\textrm{Poi}(\lambda i)}[L_{m}/i]-1)\end{equation*}

as the number of new arrivals is $\textrm{Poi}(\lambda i)$ distributed. As $\lambda<\textrm{MST}$ , we can find $\varepsilon>0$ such that, for all m large enough, $L_m/m\le 1/\lambda -\varepsilon$ ; see (5). Recall that $m/i$ converges almost surely and in distribution to $\lambda$ as $i\to\infty$ , if m is $\textrm{Poi}(\lambda i)$ distributed. Hence, $D_i$ is negative and bounded away from 0 for all large enough i. This implies the existence of a stationary distribution; see [Reference Bertsekas and Gallager2, Appendix 3A.5]. The probability that a tagged packet joins the system during a CRI of length n is given by

\begin{equation*} \widetilde{\pi}_n = \frac{n\pi_n}{\sum_{j=1}^\infty j\pi_j }; \end{equation*}

see also [Reference Yu and Giannakis27, (42)].
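The size-biasing in the display above can be illustrated with a toy stationary distribution (the values below are arbitrary and not derived from the SICTA chain); a minimal sketch:

```python
from fractions import Fraction

# Toy stationary distribution pi over CRI lengths 1..4 (illustrative values only).
pi = {1: Fraction(1, 2), 2: Fraction(1, 4), 3: Fraction(1, 8), 4: Fraction(1, 8)}

mean_length = sum(n * p for n, p in pi.items())          # sum_j j * pi_j
pi_tilde = {n: n * p / mean_length for n, p in pi.items()}

# pi_tilde is again a probability distribution, tilted towards long CRIs:
assert sum(pi_tilde.values()) == 1
assert pi_tilde[4] / pi[4] > pi_tilde[1] / pi[1]          # long CRIs are up-weighted
print(pi_tilde)
```

The tagged packet is more likely to land in a long CRI, which is exactly the length-biasing $\widetilde{\pi}_n \propto n\pi_n$ .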

Suppose we are given that a tagged packet arrives during a CRI of length n. Let $t=t_0+t_2$ be the total delay of that packet, made up of $t_0$ slots of waiting for the previous CRI to finish and then the time spent in the algorithm itself, denoted by $t_2$ . Note that $t_0$ is independent of $t_2$ , and that $t_0$ is distributed uniformly on (0, n), since the arrival times of a Poisson point process are uniform on a fixed interval. The distribution of $t_2$ is given by that of $l_{1+R_n}$ , where $R_n\sim\textrm{Poi}(\lambda n)$ : $R_n$ additional packets enter the queue during an interval of length n, and the processing time is the CRI length of the tagged packet together with these new arrivals.
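The uniformity of $t_0$ is the standard conditional-uniformity property of Poisson arrivals; a small simulation sketch (arrival rate and CRI length chosen arbitrarily) confirms that a randomly tagged arrival in an interval of length $n=10$ has mean position close to $n/2=5$ :

```python
import random

random.seed(0)
lam, n, reps = 0.7, 10.0, 4000
tagged_times = []
for _ in range(reps):
    # Generate Poisson(lam) arrivals on (0, n) via exponential interarrival times.
    t, arrivals = 0.0, []
    while True:
        t += random.expovariate(lam)
        if t >= n:
            break
        arrivals.append(t)
    if arrivals:
        tagged_times.append(random.choice(arrivals))  # tag one packet at random

mean = sum(tagged_times) / len(tagged_times)
# Uniform(0, n) has mean n/2 = 5; the empirical mean should be close to that.
print(round(mean, 2))
```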

4.5.1. Steady-state distribution of the CRI length

In this section we state a functional recursive relation which allows for the computation of the moment-generating function Q(x, z) up to arbitrary order. This recursive relation also allows for an asymptotic computation of the transition matrix $P_{i,j}=\mathbb{P}(c_2=j\mid c_1=i)$ in steady state.

Proposition 3. Recall the moment-generating function $\textrm{e}^{-x}\sum_{n\ge 0}x^n\mathbb{E}[z^{l_n}]/n!$ (denoted by Q(x,z)) used for the computation of moments of $l_n$ . Write

(46) \begin{equation} Q(x,z) = \sum_{j\ge 0}z^j q_j(x) , \end{equation}

where

\begin{equation*} q_j(x) = \sum_{n=0}^\infty\mathbb{P}(l_n=j)\textrm{e}^{-x}\frac{x^n}{n!} . \end{equation*}

For z in the region of convergence given in Proposition 1, there exists a recursive equation which, for every $j\ge 1$ , gives $q_j(x)$ in terms of $\{q_i(x)\}_{i=0}^{j-1}$ ; see (50). Furthermore, $q_0(x)=0$ .

Before embarking on a proof of Proposition 3, we show how it enables us to calculate the transition matrix of the CRI lengths.

Corollary 2. The probability at steady state of observing a CRI length of j after having observed a CRI length of i is given by $P_{i,j}=q_j(\lambda i)$ for $0<\lambda<\textrm{MST}$ .

Proof. Recall that new packets arrive according to a Poisson process with parameter $\lambda>0$ . Furthermore, recall that we have shown that for $\lambda<\textrm{MST}$ a stationary distribution must exist. Given that the current CRI has length i, we can write the probability that the next CRI has length j by doing a case distinction with respect to how many new users n arrive in i slots:

\begin{equation*} P_{i,j} = \sum_{n=0}^\infty\mathbb{P}(s_{k+1}=n\mid c_k=i)\mathbb{P}(l_n=j) = \sum_{n=0}^\infty\mathbb{P}(l_n=j)\textrm{e}^{-\lambda i}\frac{(\lambda i)^n}{n!} = q_j(\lambda i) , \end{equation*}

where $q_j(x)$ was given in Proposition 3 and we recall that, given $c_k=i$ , the number of new packets $s_{k+1}$ is $\textrm{Poi}(\lambda i)$ distributed.
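For $j=1$ , Corollary 2 combines with the base case (47) to give the explicit transition probability $P_{i,1}=(1+\lambda i)\textrm{e}^{-\lambda i}$ , i.e. the next CRI has length 1 exactly when 0 or 1 packets arrive. A short numerical cross-check (with arbitrary $\lambda$ and i):

```python
import math

# By Corollary 2 and (47): P_{i,1} = q_1(lambda * i) = (1 + lambda*i) * exp(-lambda*i),
# since P(l_n = 1) = 1 only for n in {0, 1}.
lam, i = 0.3, 5
x = lam * i
p_i1 = (1 + x) * math.exp(-x)

# Cross-check against the Poisson(lambda * i) probabilities of 0 or 1 arrivals.
poisson = lambda k: math.exp(-x) * x**k / math.factorial(k)
assert abs(p_i1 - (poisson(0) + poisson(1))) < 1e-12
print(round(p_i1, 4))
```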

We now prove Proposition 3.

Proof of Proposition 3. From (2),

(47) \begin{equation} q_j(x) = \begin{cases} 0 & \text{if }j=0 , \\[5pt] (1+x)\textrm{e}^{-x} & \text{if } j=1 \end{cases} \end{equation}

as $\mathbb{P}(l_n=0)=0$ and $\mathbb{P}(l_n=1) = \mathbf{1}\{n=0,1\}$ .

Recall (46). Now, we use (7) to write

(48) \begin{equation} Q(x,z) = \prod_{j=1}^d Q(xp_j,z) + \sum_{k=0}^{d-2}(z-z^2)(1+\overline{\textrm{F}}(k)x)\textrm{e}^{-\overline{\textrm{F}}(k)x}\prod_{i=1}^k Q(xp_i,z) . \end{equation}

Set

(49) \begin{equation} Q(k,x,j) = \sum_{\mu\in\mathfrak{P}_j^{(k)}}\prod_{i=1}^k q_{\mu_i}(xp_i). \end{equation}

Immediately,

\begin{equation*} Q(k,x,j) = \begin{cases} 0 & \text{if }j=0 , \\[5pt] \sum_{i=1}^k(1+p_i x)\textrm{e}^{-p_i x} & \text{if }j=1 . \end{cases} \end{equation*}

As $q_0(x)=0$ , the largest value $\mu_i$ (for any $i\in\{1,\ldots,k\}$ ) can take in (49) is $j-(k-1)$ , as otherwise at least one of the other $\mu_t$ has to be zero (for $t\in\{1,\ldots,k\}\setminus\{i\}$ ). This means that Q(k, x, j) is a function of $\{q_i(x)\}_{i=0}^{j-(k-1)}$ .

Write $f_k(x) = (1+\overline{\textrm{F}}(k)x)\textrm{e}^{-\overline{\textrm{F}}(k)x}$ . Substituting (46) into (48) yields (see (19) for the mechanism)

(50) \begin{equation} \sum_{j\ge 0}z^jq_j(x) = \sum_{j\ge 0}z^j\Bigg(Q(d,x,j) + \sum_{k=0}^{d-2}f_k(x)(Q(k,x,j-1)\mathbf{1}\{j\ge1\} - Q(k,x,j-2)\mathbf{1}\{j\ge2\})\Bigg) . \end{equation}

This system of equations is recursively solvable for $q_j(x)$ , as the coefficients on the right-hand side depend only on $\{q_i(x)\}_{i=0}^{j-1}$ for each j. Furthermore, the initial conditions for $q_j(x)$ are given in (47).

4.5.2. Collision resolution delay analysis

In this section we give a formula for the mean delay $\mathbb{E}[t_2]$ caused by the resolution of the CRI in steady state.

To calculate the expectation of $t_2$ given that the previous CRI had length n, we employ a case distinction. Set $t_{2,m}$ as the resolution time of a tagged packet, given that there are m other packets. Then, as the number of new arrivals is $\textrm{Poi}(\lambda n)$ distributed,

\begin{equation*} \mathbb{E}[t_2\mid c_k=n] = \sum_{m\ge 0}\mathbb{E}[t_{2,m}]\textrm{e}^{-\lambda n}\frac{(\lambda n)^m}{m!} = \sum_{m\ge 0}\sum_{k\ge 1}k\mathbb{P}(t_{2,m}=k)\textrm{e}^{-x}\frac{x^m}{m!}\bigg|_{x=\lambda n} ,\end{equation*}

which we abbreviate as $T_2(\lambda n)$ .

Let $g\in\{1,\ldots,d\}$ be the gate which the tagged packet joins. The evolution of $t_{2,m}$ is given by

\begin{equation*} t_{2,m} = \begin{cases} 1 & \text{if }m=0 , \\[5pt] \mathbf{1}\{g<d\} + \sum_{j=1}^{g-1}l_{I_j} + t_{2, I_g} & \text{if }m\ge 1 . \end{cases}\end{equation*}

Set $G_{m+1}(z)=\mathbb{E}[z^{t_{2,m}}]$ and $G(x,z)=\sum_{m\ge 0}G_{m+1}(z)\textrm{e}^{-x}({x^m}/{m!})$ . We first state a proposition giving a recursive equation for G(x, z).

Proposition 4.

(51) \begin{equation} G(x,z) = \sum_{k=1}^d p_k\Bigg(\textrm{e}^{-x}(z-z^{k+\mathbf{1}\{k<d\}}) + z^{\mathbf{1}\{k<d\}}G(p_kx,z)\prod_{i=1}^{k-1}Q(p_ix,z)\Bigg) , \end{equation}

where Q is the moment-generating function of $l_n$ , as previously. As $t_{2,m}\le l_{m+1}$ , the power series converges in the same region as in Proposition 1.

Before proving Proposition 4, we explain how we can use it to obtain a formula for $T_2(x)$ . Taking the derivative with respect to z at $z=1$ in (51), we obtain

\begin{equation*} T(x) = \sum_{k=1}^d p_k\Bigg(\textrm{e}^{-x}(1-k-\mathbf{1}\{k<d\}) + \mathbf{1}\{k<d\} + T(p_kx) + \sum_{i=1}^{k-1}L(p_ix)\Bigg) ,\end{equation*}

where L(x) is the Poisson generating function for $L_n$ , as in the proof of Corollary 1. Using $\alpha_n$ defined in (11), this implies that, for $T(x)=\sum_{n\ge 0}t_nx^n$ ,

(52) \begin{equation} t_n = \frac{1}{n!}\frac{\sum_{k=1}^dp_k\big((-1)^{n+1}(k-\mathbf{1}\{k=d\}) + \alpha_n\sum_{i=1}^{k-1}p_i^n\big)}{1-\sum_{k=1}^d p_k^{n+1}} .\end{equation}

As in [Reference Yu and Giannakis27], from this equation we can calculate $t_n$ and then numerically approximate the average delay $T_2(\lambda n)$ as $n\to\infty$ .

Proof of Proposition 4. Recall that $g\in\{1,\ldots,d\}$ is the gate the tagged packet joins. Define $G_{m+1}^{(k)}(z) = \mathbb{E}[z^{t_{2,m}}\mid g=k]$ . Note that, by a case distinction, $G_{m+1}(z)=\sum_{k=1}^d p_kG_{m+1}^{(k)}(z)$ . Furthermore, by conditioning on the event that $\mu_i$ users join slot i,

\begin{equation*} G_{m+1}^{(k)}(z) = z^{\mathbf{1}\{k<d\}}\sum_{\mu\in\mathfrak{P}_m^{(d)}}\binom{m}{\mu}p(\mu)G_{\mu_k+1}(z) \prod_{i=1}^{k-1}Q_{\mu_i}(z). \end{equation*}

Recall that $G(x,z) = \sum_{m\ge 0}G_{m+1}(z)\textrm{e}^{-x}({x^m}/{m!})$ . We can substitute to obtain

\begin{align*} & G(x,z) = \textrm{e}^{-x}z \\[5pt] & \quad + \sum_{k=1}^d\sum_{m\ge 1}z^{\mathbf{1}\{k<d\}}\sum_{\mu\in\mathfrak{P}_m^{(d)}} \Bigg(\frac{G_{\mu_k+1}(z)p_k^{\mu_k+1}}{\mu_k!} \prod_{i=1}^{k-1}\frac{Q_{\mu_i}(z)(p_i x)^{\mu_i}\textrm{e}^{-p_i x}}{\mu_i!}\Bigg) \Bigg(\prod_{i=k+1}^d\frac{(p_ix)^{\mu_i}\textrm{e}^{-p_i x}}{\mu_i!}\Bigg) . \end{align*}

Note that for $m=0$ the sum equals $p_k\textrm{e}^{-x}z^{k}$ , and hence

\begin{align*} & \sum_{m\ge 1}\sum_{\mu\in\mathfrak{P}_m^{(d)}}\Bigg(\frac{G_{\mu_k+1}(z)p_k^{\mu_k+1}}{\mu_k!} \prod_{i=1}^{k-1}\frac{Q_{\mu_i}(z)(p_i x)^{\mu_i}\textrm{e}^{-p_i x}}{\mu_i!}\Bigg) \Bigg(\prod_{i=k+1}^d\frac{(p_ix)^{\mu_i}\textrm{e}^{-p_i x}}{\mu_i!}\Bigg) \\[5pt] & = -p_k\textrm{e}^{-x}z^{k} + \sum_{m\ge 0}\sum_{\mu\in\mathfrak{P}_m^{(d)}} \Bigg(\frac{G_{\mu_k+1}(z)p_k^{\mu_k+1}}{\mu_k!} \prod_{i=1}^{k-1}\frac{Q_{\mu_i}(z)(p_i x)^{\mu_i}}{\mu_i!\textrm{e}^{p_i x}}\Bigg) \Bigg(\prod_{i=k+1}^d\frac{(p_ix)^{\mu_i}\textrm{e}^{-p_i x}}{\mu_i!}\Bigg). \end{align*}

Now, we split the sum by first considering the subpartition $\{\mu_1,\ldots,\mu_k\}$ , whose cardinality we denote by i, and then the remaining partition $\{\mu_{k+1},\ldots,\mu_{d}\}$ , which consists of $d-k$ parts:

\begin{align*} & \sum_{m\ge 0}\sum_{\mu\in\mathfrak{P}_m^{(d)}}\Bigg(\frac{G_{\mu_k+1}(z)p_k^{\mu_k+1}}{\mu_k!} \prod_{i=1}^{k-1}\frac{Q_{\mu_i}(z)(p_i x)^{\mu_i}\textrm{e}^{-p_i x}}{\mu_i!}\Bigg) \Bigg(\prod_{i=k+1}^d\frac{(p_ix)^{\mu_i}\textrm{e}^{-p_i x}}{\mu_i!}\Bigg) \\[5pt] & = \Bigg(\sum_{i=0}^\infty\sum_{\mu\in\mathfrak{P}_i^{(k)}}\Bigg(\frac{G_{\mu_k+1}(z)p_k^{\mu_k+1}}{\mu_k!} \prod_{i=1}^{k-1}\frac{Q_{\mu_i}(z)(p_i x)^{\mu_i}\textrm{e}^{-p_i x}}{\mu_i!}\Bigg)\Bigg) \sum_{m=0}^\infty\sum_{\mu\in\mathfrak{P}_m^{(d-k)}} \Bigg(\prod_{i=k+1}^d\frac{(p_ix)^{\mu_i}\textrm{e}^{-p_i x}}{\mu_i!}\Bigg) , \end{align*}

where we can switch the order of summation as all the terms are positive.

Note that

\begin{equation*} \textrm{e}^{-x}z - \sum_{k=1}^dz^{\mathbf{1}\{k<d\}}p_k\textrm{e}^{-x}z^{k} = \sum_{k=1}^d p_k(\textrm{e}^{-x}(z-z^{k+\mathbf{1}\{k<d\}})) . \end{equation*}

Furthermore, for $k=1,\ldots,d$ ,

\begin{equation*} \sum_{\mu\in\mathfrak{P}_m^{(d-k)}}\Bigg(\prod_{i=k+1}^d\frac{(p_ix)^{\mu_i}\textrm{e}^{-p_i x}}{\mu_i!}\Bigg) = \textrm{e}^{-\sum_{i=k+1}^d p_i x}\sum_{\mu\in\mathfrak{P}_m^{(d-k)}} \Bigg(\prod_{i=k+1}^d\frac{(p_ix)^{\mu_i}}{\mu_i!}\Bigg) = 1 . \end{equation*}

Hence, we get

\begin{multline*} G(x,z) = \sum_{k=1}^d\Biggl\{p_k\big(\textrm{e}^{-x}\big(z-z^{k+\mathbf{1}\{k<d\}}\big)\big)z^{\mathbf{1}\{k<d\}} \\[5pt] + z^{\mathbf{1}\{k<d\}}p_k\sum_{i=0}^\infty\sum_{\mu\in\mathfrak{P}_i^{(k)}}\Bigg(\frac{G_{\mu_k+1}(z)p_k^{\mu_k+1}}{\mu_k!} \prod_{i=1}^{k-1}\frac{Q_{\mu_i}(z)(p_i x)^{\mu_i}\textrm{e}^{-p_i x}}{\mu_i!}\Bigg)\Biggr\} . \end{multline*}

We can now simplify the sum $\sum_{i=0}^\infty\sum_{\mu\in\mathfrak{P}_i^{(k)}}(\ldots)$ as in (19). We hence find that

\begin{equation*} G(x,z) = \sum_{k=1}^d p_k\Bigg(\textrm{e}^{-x}\big(z-z^{k+\mathbf{1}\{k<d\}}\big) + z^{\mathbf{1}\{k<d\}}G(p_kx,z)\prod_{i=1}^{k-1}Q(p_ix,z)\Bigg). \end{equation*}

5. Discussions and conclusion

We have calculated the mean throughput, number of collisions, successes, and idle slots for tree algorithms with successive interference cancellation. We have furthermore given a recursive relation which allows for approximations of arbitrary order for the moment-generating function of the CRI length, as well as for the mean delay in steady state. We have shown numerically that a small reduction in throughput can lead to a proportionally larger reduction in the number of collisions. Furthermore, our methods apply to other observables of the random tree algorithm. We hence believe that, by emulating our approach, more properties of random tree algorithms can be calculated.

Appendix A. Radius of convergence

In this appendix we prove some bounds on the radius of convergence. As we could not find these results in the literature, we consider them of independent interest.

Lemma 4. Let $p_{\textrm{max}}=\max_{j=1,\ldots,d}p_j$ be the largest splitting probability. Fix $\zeta=\sqrt{p_{\textrm{max}}}$ and $0<\varepsilon<1$ . Then, for all $n\in \mathbb{N}$ and all $z\in\mathbb{C}$ with $|z|\zeta<1-\varepsilon$ ,

\begin{equation*} |Q_n(z)| = |\mathbb{E}[z^{l_n}]| \le \bigg(\frac{|z|}{1-(1-\varepsilon)\zeta}\bigg)^n. \end{equation*}

Proof. Note that for $n=1$ the result is true, as $l_1=1$ almost surely. We prove the lemma via induction. We first show the result holds for $Q_2(z)$ . Note that, using the bound $p_j\le p_{\textrm{max}}$ repeatedly,

\begin{equation*} \sum_{j=1}^d p_j^n \le p_{\textrm{max}}\sum_{j=1}^d p_j^{n-1} \le \cdots \le p_{\textrm{max}}^{n-1}\sum_{j=1}^d p_j = p_{\textrm{max}}^{n-1}\le\zeta^n . \end{equation*}
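This elementary chain of bounds can be confirmed numerically for any splitting distribution and $n\ge 2$ (the probabilities below are arbitrary illustrative values):

```python
# Check sum_j p_j^n <= p_max^{n-1} <= zeta^n, with zeta = sqrt(p_max) and n >= 2.
p = [0.5, 0.3, 0.2]          # arbitrary splitting probabilities summing to 1
p_max = max(p)
zeta = p_max ** 0.5

for n in range(2, 20):
    s = sum(q**n for q in p)
    assert s <= p_max ** (n - 1) <= zeta ** n
print("bound holds for n = 2, ..., 19")
```

Note that $p_{\textrm{max}}^{n-1}\le\zeta^n$ requires $n\ge 2$ , since $\zeta^n=p_{\textrm{max}}^{n/2}$ and $p_{\textrm{max}}<1$ .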

The packets split according to the feedback from the access point. Let $\tau$ be the first time we observe a partition with strictly more than one non-zero element, i.e. not all packets choose the same slot.

We first consider an initial collision of $n=2$ elements. Note that the probability of all packets choosing the same slot is given by $\sum_{i=1}^d p_i^2$ . As we split independently each time, the probability that $\tau=k$ is bounded by

(53) \begin{equation} \mathbb{P}(\tau=k) = \Bigg(\sum_{i=1}^d p_i^2\Bigg)^{k-1}\Bigg(1-\sum_{i=1}^d p_i^2\Bigg)\le \zeta^{2k-2} . \end{equation}

If $\tau=k$ and $n=2$ , the largest element in the partition after k splits is 1. Recall that $Q_0(z)=Q_1(z)=z$ . Hence,

\begin{equation*} |Q_2(z)| \le \sum_{k=1}^\infty\mathbb{E}[|z|^{l_2},\tau=k] \le \sum_{k=1}^\infty\zeta^{2k-2}|z|^{k-1}|z|^2 = \frac{|z|^2}{1-|z|\zeta^2} \le \frac{|z|^2}{1-(1-\varepsilon)\zeta} , \end{equation*}

where $\zeta^{2k-2}$ is the bound on $\mathbb{P}(\tau=k)$ , $|z|^{k-1}$ is the factor from the first $k-1$ splits, and $|z|^2$ comes from the split into two groups.

Now fix $n>2$ initially collided packets. Assume that the statement is true for all k with $k<n$ . Abbreviate $\eta={1}/({1-(1-\varepsilon)\zeta})$ . We then get, as in (53), $\mathbb{P}(\tau=k) \le \zeta^{nk-n}$ . Note that by the induction hypothesis, if we split into a partition with more than one non-zero element (i.e. not all packets choose the same slot), we have a bound on the moment-generating function of

\begin{equation*} \sup_k\sup_{\mu\in\mathfrak{P}_n^{(k)}} \Bigg(\mathbf{1}\{\text{there exists}\ i\neq j\colon\mu_i>0\text{ and }\mu_j>0\} \prod_{i=1}^{k}|Q_{\mu_i}(z)|\Bigg) \le (|z|\eta)^{n-1} , \end{equation*}

as for such a split $l_n$ becomes the sum of $l_i$ where each i is strictly smaller than n. Hence,

\begin{equation*} |Q_n(z)| \le \sum_{k=1}^\infty\mathbb{E}[|z|^{l_n},\tau=k] \le \eta^{n-1}\sum_{k=1}^\infty\zeta^{nk-n}|z|^{k-1}|z|^n = \eta^{n-1}\frac{|z|^n}{1-|z|\zeta^n}. \end{equation*}

However, as

\begin{equation*} \eta^{n-1}\frac{|z|^n}{1-|z|\zeta^n} \le \eta^{n-1}\frac{|z|^n}{1-(1-\varepsilon)\zeta^{n-1}} \le \eta^n|z|^n , \end{equation*}

the result follows.

Acknowledgements

We would like to express our gratitude to both anonymous referees who offered many valuable comments on an earlier version of this article and helped to improve it. We also wish to thank Silke Rolles for her many valuable comments on earlier drafts.

Funding information

Y. Deshpande’s work was supported by the Bavarian State Ministry for Economic Affairs, Regional Development and Energy (StMWi) project KI.FABRIK under grant no. DIK0249.

Competing interests

The authors declare that no competing interests arose during the preparation or publication of this article.

References

Andreev, S., Pustovalov, E. and Turlikov, A. (2011). A practical tree algorithm with successive interference cancellation for delay reduction in IEEE 802.16 networks. In Analytical and Stochastic Modeling Techniques and Applications, eds M. Gribaudo, E. Sopin and I. Kochetkova, Springer, New York, pp. 301–315.
Bertsekas, D. and Gallager, R. (1992). Data Networks. Athena Scientific, Nashua, NH.
Capetanakis, J. (1979). Tree algorithms for packet broadcast channels. IEEE Trans. Inf. Theory 25, 505–515.
Conway, J. (1978). Functions of One Complex Variable. Springer, New York.
Drmota, M. (2009). Random Trees: An Interplay between Combinatorics and Probability. Springer, New York.
Deshpande, Y., Stefanović, C., Gürsu, H. and Kellerer, W. (2022). Corrections to 'High-throughput random access using successive interference cancellation in a tree algorithm'. IEEE Trans. Inf. Theory 69, 1658–1659.
Erdélyi, A. and Bateman, H. (1981). Higher Transcendental Functions, Vol. I. Robert E. Krieger Publishing Co., Inc., Malabar, FL.
Evseev, G. and Turlikov, A. (2007). Interrelation of characteristics of blocked RMA stack algorithms. Probl. Inf. Transm. 43, 344–352.
Fayolle, G., Flajolet, P. and Hofri, M. (1986). On a functional equation arising in the analysis of a protocol for a multi-access broadcast channel. Adv. Appl. Prob. 18, 441–472.
Flajolet, P., Gourdon, X. and Dumas, P. (1995). Mellin transforms and asymptotics: Harmonic sums. Theoret. Comput. Sci. 144, 3–58.
Holmgren, C. (2012). Novel characteristics of split trees by use of renewal theory. Electron. J. Prob. 17, 1–27.
Janson, S. and Szpankowski, W. (1997). Analysis of an asymmetric leader election algorithm. Electron. J. Combinatorics 4, R17.
Knuth, D. (1998). The Art of Computer Programming: Sorting and Searching, Vol. 3. Addison-Wesley Professional, Boston, MA.
König, W. and Kwofie, C. (2023). The throughput in multi-channel (slotted) ALOHA: Large deviations and analysis of bad events. Preprint, arXiv:2301.08180.
König, W. and Shafigh, H. (2022). Multi-channel ALOHA and CSMA medium-access protocols: Markovian description and large deviations. Preprint, arXiv:2212.08588.
Lawler, G. F. and Limic, V. (2010). Random Walk: A Modern Introduction (Cambridge Studies in Advanced Mathematics). Cambridge University Press.
Massey, J. L. (1981). Collision-resolution algorithms and random-access communications. In Multi-User Communication Systems, ed. G. Longo, Springer, New York, pp. 73–137.
Mathys, P. (1984). Analysis of random-access algorithms. PhD thesis, ETH Zurich.
Mathys, P. and Flajolet, P. (1985). q-ary collision resolution algorithms in random-access systems with free or blocked channel access. IEEE Trans. Inf. Theory 31, 217–243.
Molle, M. and Shih, A. (1992). Computation of the packet delay in Massey's standard and modified tree conflict resolution algorithms with gated access. Technical report CSRI-264, Computer Systems Research Institute, University of Toronto.
Navarro-Ortiz, J. et al. (2020). A survey on 5G usage scenarios and traffic models. IEEE Commun. Surv. Tutorials 22, 905–929.
Peeters, G. and Van Houdt, B. (2009). On the maximum stable throughput of tree algorithms with free access. IEEE Trans. Inf. Theory 55, 5087–5099.
Peeters, G. and Van Houdt, B. (2015). On the capacity of a random access channel with successive interference cancellation. In Proc. 2015 IEEE Int. Conf. Communication Workshop (ICCW), pp. 2051–2056.
Stefanović, C., Deshpande, Y., Gürsu, H. and Kellerer, W. (2021). Tree-algorithms with multi-packet reception and successive interference cancellation. Preprint, arXiv:2108.00906.
Stefanović, C., Gürsu, H., Deshpande, Y. and Kellerer, W. (2020). Analysis of tree-algorithms with multi-packet reception. In Proc. GLOBECOM 2020 IEEE Global Communications Conf., pp. 1–6.
Wu, Y. et al. (2020). Massive access for future wireless communication systems. IEEE Wireless Commun. 27, 148–156.
Yu, Y. and Giannakis, G. (2007). High-throughput random access using successive interference cancellation in a tree algorithm. IEEE Trans. Inf. Theory 53, 4628–4639.
Figure 1. Illustration of the ternary ($d=3$) tree algorithm. The number outside each node represents the slot number. The number inside each node in the tree represents the number of users transmitting in that slot. Slots 5, 8, 9, and 10 will be skipped in the SICTA.

Table 1. Summarizing the results for different observables of SICTA. See Section 4 for more details.