
Comparison on the criticality parameters for two supercritical branching processes with immigration in random environments

Published online by Cambridge University Press:  08 October 2025

Yingqiu Li*
Affiliation:
School of Mathematics and Statistics, Changsha University of Science and Technology, Changsha, Hunan, PR China; Hunan Provincial Key Laboratory of Mathematical Modeling and Analysis in Engineering, Changsha University of Science and Technology, Changsha, Hunan, PR China
Hailong Yang
Affiliation:
School of Mathematics and Statistics, Changsha University of Science and Technology, Changsha, Hunan, PR China
Rui Li
Affiliation:
School of Mathematics and Statistics, Changsha University of Science and Technology, Changsha, Hunan, PR China
*
Corresponding author: Yingqiu Li; Email: liyq-2001@163.com

Abstract

This paper considers two supercritical branching processes with immigration in different random environments, denoted by $\{Z_{1,n}\}$ and $\{Z_{2,m}\}$, with criticality parameters $\mu_1$ and $\mu_2$, respectively. Under certain conditions, it is known that $\frac{1}{n} \log Z_{1,n} \to \mu_1$ and $\frac{1}{m} \log Z_{2,m} \to \mu_2$ in probability as $m, n \to \infty$. We establish a central limit theorem, a non-uniform Berry–Esseen bound, and Cramér's moderate deviations for $\frac{1}{n} \log Z_{1,n} - \frac{1}{m} \log Z_{2,m}$ as $m, n \to \infty$. As applications, constructions of confidence intervals and numerical simulations are also given.

Information

Type
Research Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press.

1. Introduction

As a significant extension of the branching process in a random environment (see [Reference Grama, Liu and Miqueu7, Reference Grama, Liu and Miqueu8, Reference Li, Hu and Liu15, Reference Li, Liu, Gao and Wang18, Reference Wang, Li, Liu and Liu24, Reference Wang, Liu, Li and Liu25] and their references), the branching process with immigration in a random environment (BPIRE) has received extensive attention. Bansaye [Reference Bansaye1] investigated the BPIRE by studying a model of cell contamination. Kesten et al. [Reference Kesten, Kozlov and Spitzer13] obtained the limiting distribution of random walks in random environments by using branching processes with one immigrant at each generation in an i.i.d. environment. Wang and Liu [Reference Wang and Liu26–Reference Wang and Liu28] obtained, for $(W_n)$, the almost sure convergence, the $L^p$ convergence, the conditional moments, the quenched moments, the harmonic moments, the exponential decay rate, and the $L^p$ convergence rate under the annealed law; for its limit $W$, the non-degeneracy and the existence of the $p$-th moments and the harmonic moments; and, for $\log Z_{n}$, the central limit theorem (CLT), the large and moderate deviation principles, and the Berry–Esseen bound. Wang et al. [Reference Wang, Liu and Fan29] provided Cramér's large deviation expansion for $\log Z_{n}$. Li and Huang [Reference Li and Huang16] investigated a polynomial convergence rate of the submartingale to its limit for the BPIRE, and the almost sure convergence rate for a submartingale associated with a branching process in a varying environment. In [Reference Li, Huang and Peng17], Li et al. considered the convergence rate in probability or in distribution, and two forms of the CLT for $(W_n)$. Huang et al. [Reference Huang, Li and Xiang12] considered the rate of convergence of the CLT under a moment condition of order $2+\delta$, with fixed $\delta \in(0,1]$. Huang et al. [Reference Huang, Wang and Wang10, Reference Huang, Wang and Wang11] showed the moments and the harmonic moments of $Z_n$, the large deviation principle and large deviations for $\log Z_{n},$ and described the decay rates of n-step transition probabilities. For the subcritical and critical cases (with multiple types), Key [Reference Key14] demonstrated the convergence to a limit distribution. Roitershtein [Reference Roitershtein21] investigated CLTs and strong laws of large numbers for the partial sums of this process. Additionally, Vatutin [Reference Vatutin22] applied a multi-type BPIRE to study polling systems with random service regimes.

Despite these contributions, there is no comparison result for the criticality parameters of two supercritical BPIREs, which hinders their practical application. The objective of this paper is to fill this gap.

Let $(\xi_1, \xi_2)^T =( (\xi_{1,n}, \xi_{2,n})^T)_{n\geq 0}$ be a sequence of i.i.d. two-dimensional random vectors, where T denotes transposition and $ ( \xi_{1,n}, \xi_{2,n})^T \in \mathbb{R}^2$ stands for the random environment at generation n. Thus, the vectors $(( \xi_{1,n}, \xi_{2,n})^T)_{n\geq 0}$ are independent, but note that, for a given n, $\xi_{1,n}$ and $\xi_{2,n}$ need not be independent. For any $n\in \mathbb N$ and $i=1,2$, each realization of $\xi_{i,n}$ corresponds to two probability distributions on $\mathbb{N}=\{0,1,2,\cdots\}$: one is the offspring distribution, denoted by

\begin{equation*} p(\xi_{i,n})=\{p_{k}(\xi_{i,n}):k \in \mathbb{N} \},\text{where }p_{k}(\xi_{i,n})\geq 0,\quad \sum_{k}p_{k}(\xi_{i,n})=1,\end{equation*}

the other is the distribution of the number of immigrants denoted by

\begin{equation*}\hat{p}(\xi_{i,n})=\{\hat{p}_{k}(\xi_{i,n}):k \in\mathbb{N}\},\text{where }\hat{p}_{k}(\xi_{i,n})\geq0,\quad \sum_{k}\hat{p}_{k}(\xi_{i,n})=1.\end{equation*}

Let $\{Z_{1,n}, n\geq 0\}$ and $\{Z_{2,n}, n\geq 0\}$ be two branching processes with immigration in the random environments $\xi_{1,n}$ and $\xi_{2,n}$, respectively. Then, $\{Z_{1,n}, n\geq 0\}$ and $\{Z_{2,n}, n\geq 0\}$ can be described as follows: for $n \geq 0,$

\begin{equation*}Z_{1,0}=1,\ \ \ \ Z_{1,n+1} =Y_{1,n} + \sum_{i=1}^{Z_{1,n}} X_{1, n,i},\ \ \ \ Z_{2,0}=1,\ \ \ \ Z_{2,n+1} =Y_{2,n} + \sum_{i=1}^{Z_{2,n}} X_{2,n,i} , \end{equation*}

where $X_{1,n,i}$ and $X_{2,n,i}$ are the number of offspring of the i-th individual in generation n with environments $\xi_{1,n}$ and $\xi_{2,n}$, respectively. $ Y_{1,n} $ and $ Y_{2,n} $ are the number of new immigrants in the n-th generation with environments $\xi_{1,n}$ and $\xi_{2,n}$. Given $(\xi_{1,n},\, \xi_{2,n})^T$, the random variables $\{X_{1, n,i},X_{2, n,i},i\geq 1\}$ and $\{Y_{1,n},Y_{2,n}\}$ are mutually independent.
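The recursion above is easy to simulate directly. The following minimal sketch is illustrative only: the offspring law, the immigration law, the uniform environment, and all function names are our own choices, not taken from the paper.

```python
import random

def simulate_bpire(n_gens, offspring_law, immigration_law, z0=1, rng=None):
    """Simulate one trajectory of a branching process with immigration in a
    random environment (BPIRE), following Z_{n+1} = Y_n + sum_{i=1}^{Z_n} X_{n,i}.

    offspring_law(env, rng) and immigration_law(env, rng) draw one offspring /
    immigrant count given the environment value env drawn for that generation."""
    rng = rng or random.Random(0)
    z = z0
    traj = [z]
    for _ in range(n_gens):
        env = rng.random()                     # i.i.d. environment xi_n ~ U(0,1)
        y = immigration_law(env, rng)          # immigrants Y_n
        children = sum(offspring_law(env, rng) for _ in range(z))
        z = y + children
        traj.append(z)
    return traj

# Hypothetical example laws: each individual has 1 child with probability env
# and 2 children otherwise (so p_0 = 0), and a Bernoulli number of immigrants.
offspring = lambda env, rng: 1 if rng.random() < env else 2
immigration = lambda env, rng: 1 if rng.random() < 0.5 else 0

traj = simulate_bpire(20, offspring, immigration)
```

Because the example offspring law puts no mass on 0 (compare condition (1.6) below), each trajectory is non-decreasing.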

Let $(\Gamma, \mathbb P_{\xi})$ be the probability space under which the process is defined when the environment ξ is given. The total probability space can be formulated as the product space $(\Gamma\times \Theta^{\mathbb N}, \mathbb P)$, with $\mathbb P(dx, d\xi) =\mathbb P_{\xi}(dx)\tau(d\xi).$ Usually, the conditional probabilities $\mathbb{P}_{\xi_1}$ and $\mathbb{P}_{\xi_2}$ are called the quenched laws, while the total probability $\mathbb{P}$ is called the annealed law. We further define two laws $\mathbb P_{\xi_i,Y_i}$, $i=1,2$, denoting the conditional probabilities of $\mathbb P$ given $(\xi_i, Y_i)$, where $Y_i=(Y_{i,0},Y_{i,1},...)$, $i=1,2$. Additionally, $\mathbb{P}_{\xi_1 ,\xi_2}$ denotes the conditional probability when the environment $(\xi_1,\, \xi_2)^T$ is given, and τ is the joint law of the environment $(\xi_1,\, \xi_2)^T$. Then,

\begin{equation*}\mathbb{P}(dx_1, dx_2, dy_1, dy_2)=\mathbb{P}_{\xi_1, \xi_2} (dx_1, dx_2)\tau (dy_1, dy_2)\end{equation*}

is the joint law of the two branching processes in random environment. In the sequel, the expectation with respect to $\mathbb P_{\xi_1,\xi_2}$ $(\text{resp. }\mathbb P_{\xi_i,Y_i}, \mathbb P_{\xi},\mathbb P)$ will be denoted by $\mathbb E_{\xi_1,\xi_2}$ $(\text{resp. }\mathbb E_{\xi_i,Y_i}, \mathbb E_{\xi},\mathbb E)$.

We define, for any $ n\ge 0 $ and $ a\ge 0 $,

\begin{equation*} m_{1,n}^{(a)} = \sum_{k=0}^{\infty} k^a \, p_k(\xi_{1,n} ),\ \ \quad m_{2,n}^{(a)} = \sum_{k=0}^{\infty} k^a \, p_k(\xi_{2,n} ),\end{equation*}
\begin{equation*} \quad \Pi_{1,n} = \prod_{i=0}^{n-1} m_{1,i}, \ \ \quad \Pi_{2,n} = \prod_{i=0}^{n-1} m_{2,i},\end{equation*}

with the convention that $ \Pi_{1,0} = \Pi_{2,0} = 1$, where $m_{l,i}:=m_{l,i}^{(1)}$ denotes the mean number of offspring. Moreover,

\begin{equation*}\hat{m}_{1,n}^{(a)}=\sum_{k=0}^{\infty}k^{a}\hat{p}_{k}(\xi_{1,n}), \quad \hat{m}_{2,n}^{(a)}=\sum_{k=0}^{\infty}k^{a}\hat{p}_{k}(\xi_{2,n}).\end{equation*}

Clearly, $(m_{1,n}^{(a)})_{n\geq 0}$ and $(m_{2,n}^{(a)})_{n\geq 0}$ are two sequences of i.i.d. random variables, and we denote

\begin{align*} X_{1,n} &= \log m_{1,n} ,\ \ \ \ \ \ X_{2,n} = \log m_{2,n}, \ \ \ \ \ \ \mu_1 = \mathbb{E} \log m_{1,0}, \ \ \ \ \ \ \ \ \ \mu_2 = \mathbb{E} \log m_{2,0}, \\ \sigma_1 ^2 &= \textrm{Var}(\log m_{1,0} ) , \ \ \ \ \ \ \sigma_2 ^2 = \textrm{Var}( \log m_{2,0}),\ \ \ \ \displaystyle \rho = \frac{\textrm{Cov}(X_{1,n}, X_{2,n})}{\sigma_1\sigma_2 } , \end{align*}

where $\mu_1$ and $\mu_2$ are known as the criticality parameters for the BPIREs $\{Z_{1,n}, n\geq 0\}$ and $\{Z_{2,n}, n\geq 0\}$, respectively. In particular, if $\xi_1$ and $\xi_2$ are independent, we have ρ = 0. To rule out degenerate environments $\xi_1$ and $\xi_2$, we assume that $ 0 \lt \sigma_1, \sigma_2 \lt \infty.$ For $l=1,2,$ to establish some limit theorems on $ Z_{l,n} $ and the fundamental submartingale, we shall use the decomposition of $ Z_{l,n}$, similar to the approach used in [Reference Wang and Liu26].

For simplicity, we will primarily concentrate on the case of $ Z_{1,n}. $ To include the immigrants in the family tree, we introduce one particle at each time n, called the eternal particle, denoted by $0_{0},0_{1},0_{2},\cdots$ with $0_{n}:=0_{n-1}0$ (the juxtaposition of $0_0$ with n times 0). Let $ E=\left \{0_{k} : k \geq 0 \right \} $ represent the set of all virtual particles introduced. We consider the $ Y_{1,n} $ immigrants moved into the ($n+1$)-th generation to be direct children of the virtual particle $0_n$ introduced in the n-th generation. To form a complete family tree, we also consider each virtual particle $ 0_{n+1} $ to be a direct child of $0_n$.

We use “∼” to denote the pedigree issued from the initial particle ϕ, excluding the immigrating particles, and “ $ \wedge $” to denote the pedigree issued from the initial particle $0_0$, including the immigrating particles. Thus, $ \tilde{Z} _{1,n} $ is the branching process in the random environment without the immigrating particles, while $ \hat {Z } _{1,n} $ is the branching process in the random environment including the immigrating particles; then

\begin{equation*} {Z } _{1,n}=\tilde {Z } _{1,n}^{\left ( \phi \right ) } +\hat{Z} _{1,n}^{\left ( 0_{0} \right ) } -1 ,n\ge 0. \end{equation*}

Set

\begin{equation*} W_{1,n}=\frac{Z_{1,n}}{\Pi_{1,n}}, \ \ \tilde {W}_{1,n}^{\left (\phi\right )}=\frac{\tilde{Z}_{1,n} }{\Pi _{1,n} } \ \ \text{and} \ \ \hat{W}_{1,n}^{\left (0_{0}\right )}=\frac{\hat {Z}_{1,n}}{\Pi_{1,n}};\end{equation*}

it is obvious that

(1.1)\begin{eqnarray} {W } _{1,n}=\tilde {W } _{1,n}^{\left (\phi\right )}+\hat{W} _{1,n}^{\left (0_{0}\right )}-\Pi _{1,n}^{-1} . \end{eqnarray}

The sequence $ \tilde {W } _{1,n}^{\left ( \phi \right ) } $ is the well-known martingale associated with the branching process $ \tilde{Z}_{1,n} $ (without immigration) in a random environment, and its asymptotic properties have been extensively studied. Decomposing the branching processes with immigration, which begin with an eternal particle $0_{n} \in E$, in terms of branching processes (without immigration) in a random environment, we have

\begin{equation*} \hat{W} _{1,n}^{\left ( 0_{0} \right ) }=\frac{1}{m_{1,0} } \hat{W} _{1,n-1}^{\left ( 0_{1} \right ) }+\frac{1}{m_{1,0} } \sum_{i=1}^{Y_{1,0} } \hat{W} _{1,n-1}^{\left ( 0_{0}i \right ) }. \end{equation*}

For the case of a single supercritical BPIRE, denoted by $\{Z_{1,n}, n\geq 0\}$, the normal approximation has been extensively studied. Under the additional conditions $ \mathbb{E}\left ( \frac{Z_{1,1} }{m_{1,0}} \right )^{p} \lt \infty $ and $ \mathbb{E}\left ( \frac{Y_{1,0} }{m_{1,0}} \right ) ^{p} \lt \infty $ for a constant $ p \gt 1,$ and $ \mathbb{E} X_{1,0}^{2+\delta} \lt \infty $ for a constant $\delta \in (0, 1]$, Wang and Liu [Reference Wang and Liu27] derived the following Berry–Esseen bound for $\log Z_{1,n}$:

(1.2)\begin{equation} \sup_{x \in \mathbb{R}} \Big|\mathbb{P}\big( \frac{\log Z_{1,n}-n\mu_{1} }{\sigma_1 \sqrt{n} }\leq x \big) - \Phi(x) \Big| \leq \frac{C} { n^ {\delta/2} }, \end{equation}

where $\Phi(x)$ is the standard normal distribution function.

Assuming Cramér’s condition $ \mathbb{E}e^{\lambda _{0} X_{1,0} } \lt \infty $ for a constant $ \lambda_{0} \gt 0,$ and $\mathbb{E}\left (\frac{Z_{1,1}^{p}}{m_{1,0}}\right ) \lt \infty, \mathbb{E}\left (\frac{Y_{1,0}^{p}}{m_{1,0}}\right ) \lt \infty$ for a constant $ p \gt 1,$ Wang et al. [Reference Wang, Liu and Fan29] also established the following Cramér large deviation expansion: for $0 \le x=o\left ( \sqrt{n} \right ) ,n\to \infty $,

(1.3)\begin{eqnarray} \Big| \log \frac{\mathbb{P}\big( \frac{\log Z_{1,n}-n\mu_{1}}{\sigma_1 \sqrt{n} } \geq x \big)} {1- \Phi(x) } \Big| \leq C \frac{1+ x^ {3}}{\sqrt{n}}, \end{eqnarray}

where C is a positive constant. For instance, when the parameter $\sigma_1$ is known, these results can be applied to construct confidence intervals for the criticality parameter $\mu_1$, based on the observation $Z_{1,n}$ and the generation number n.

Although the limit theorems for a single supercritical BPIRE have been extensively studied, there is currently no comparative result concerning the criticality parameters of two supercritical BPIREs. The objective of this paper is to fill this gap. We begin by considering the following common hypothesis test:

\begin{equation*}H_0: \mu_1-\mu_2=0 \quad \text{versus} \quad H_1: \mu_1 -\mu_2 \ne 0.\end{equation*}

When $\mu_1$ and $\mu_2$ represent the means of two independent populations, this form of hypothesis test has been considered by Chang et al. [Reference Chang, Shao and Zhou3], who established Cramér-type moderate deviations. In this paper, we are interested in the case where $\mu_1$ and $\mu_2$ are two criticality parameters of BPIREs. By the law of large numbers, $\frac{1}{n} \log Z_{1,n} \to \mu_1$ and $\frac{1}{m} \log Z_{2,m} \to \mu_2$ in probability as $m,n\to\infty$, respectively. Therefore, to test the hypothesis, it is essential to estimate the asymptotic distribution of the random variable $\frac{1}{n} \log Z_{1,n} - \frac{1}{m} \log Z_{2,m} $; this estimation is the central purpose of this paper. Observe that $\frac{1}{n} \log Z_{1,n} - \frac{1}{m} \log Z_{2,m} $ has an asymptotic distribution equivalent to that of $\frac{1}{n}\sum_{k=1 }^{n} X_{1,k} - \frac{1}{m}\sum_{k=1 }^{m} X_{2,k}$. When $\xi_1$ and $\xi_2$ are independent, both $\sum_{k=1 }^{n} X_{1,k} $ and $ \sum_{k=1 }^{m} X_{2,k} $ are sums of i.i.d. random variables.

In this paper, we always assume that, for $ l=1,2, $

(1.4)\begin{equation} \mathbb{E} \log^{+} \frac{Y_{l,0}}{m_{l,0}} \lt \infty \quad \text{and} \quad \mathbb{E}(\log m_{l,0}) \gt 0, \end{equation}

which means that the process is supercritical. We assume that the following conditions hold:

(1.5)\begin{equation} \mathbb{E}\bigg[ \frac{Z_{1,1}}{m_{1,0}} \log^+ Z_{1,1} + \frac{Z_{2,1}}{m_{2,0}} \log^+ Z_{2,1} \bigg] \lt \infty, \end{equation}

where $ \log ^+ x := \max\{\log x , 0 \}. $ From Grama et al. [Reference Grama, Liu and Miqueu7], it can be inferred that under conditions (1.4) and (1.5), $ {W} _{n} $ converges almost surely to a non-negative random variable $W.$ Additionally, we assume the following condition:

(1.6)\begin{equation} p_0(\xi_{1,0})=p_0(\xi_{2,0} ) =0, \ \ \ \ \ \ \ a.s. \end{equation}

which ensures that the associated random walk has positive increments, i.e., each individual has at least one offspring. Assumptions (1.4) and (1.6) imply that the processes $(Z_{1,n}, n\geq 0)$ and $(Z_{2,m}, m\geq 0)$ are both supercritical and satisfy $\mu_1, \mu_2 \gt 0$ and $Z_{1, n} \to \infty, Z_{2, m} \to \infty $ almost surely.

Define

\begin{align*} R_{m,n}&:= \frac{\frac{1}{n} \log Z_{1,n} - \mu_1 - \frac{1}{m} \log Z_{2,m} + \mu_2 }{V_{m,n,\rho}},\\ V_{m,n,\rho} &= \sqrt{\frac1n \sigma_1^2 + \frac1m \sigma_2^2 -2 \rho \sigma_1 \sigma_2 \frac{m \wedge n}{m \, n} \ } , \ \ n, m \in \mathbb{N}.\end{align*}
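In code, the normalization $V_{m,n,\rho}$ and the statistic $R_{m,n}$ read as follows (a sketch; the numerical inputs at the end are placeholder values, not data or estimates from the paper):

```python
import math

def v_mn(m, n, sigma1, sigma2, rho):
    """V_{m,n,rho} as defined above."""
    s = (sigma1**2 / n + sigma2**2 / m
         - 2 * rho * sigma1 * sigma2 * min(m, n) / (m * n))
    return math.sqrt(max(s, 0.0))  # max() guards against tiny negative rounding error

def r_mn(log_z1n, log_z2m, m, n, mu1, mu2, sigma1, sigma2, rho):
    """The standardized difference R_{m,n}."""
    num = log_z1n / n - mu1 - (log_z2m / m - mu2)
    return num / v_mn(m, n, sigma1, sigma2, rho)

# With independent environments (rho = 0), V^2 is just sigma1^2/n + sigma2^2/m.
v = v_mn(100, 200, 0.2, 0.3, 0.0)
```

Note that `v_mn(n, n, s, s, 1.0)` vanishes, which is exactly why the case $\rho = 1$ with $\sigma_1 = \sigma_2$ is excluded below.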

Throughout the paper, we assume either

\begin{equation*} \rho \in [-1, 1) \ \ \ \ \textrm{or} \ \ \ \ \rho =1 \ \text{but} \ \sigma_1 \ne \sigma_2 .\end{equation*}

The final condition guarantees that

\begin{equation*}\frac1n \sigma_1^2 + \frac1m \sigma_2^2 -2 \rho \sigma_1 \sigma_2 \frac{m \wedge n}{m \, n}\end{equation*}

is of order $\frac{1}{m \wedge n} $ as $m,n \to \infty. $ Indeed, if $ m \le n ,$

\begin{equation*}\frac1n \sigma_1^2 + \frac1m \sigma_2^2 -2 \rho \sigma_1 \sigma_2 \frac{m \wedge n}{m \, n} \ = (\frac1m -\frac1n)\sigma _{2}^{2} +\frac{\sigma _{1}^{2}-2\rho\sigma _{1}\sigma _{2}+\sigma _{2}^{2} }{n}\asymp \frac{1}{m}. \end{equation*}

We now introduce our main results. First, Theorem 2.1 presents the CLT for $R_{m,n}$: for all $x \in \mathbb{R},$ we have

(1.7)\begin{equation} \lim_{m \wedge n \rightarrow \infty }\mathbb{P}\big( R_{m,n} \leq x \big) = \Phi(x). \end{equation}

Second, under some moment conditions, Theorem 2.2 gives a non-uniform Berry–Esseen bound for $R_{m,n}$: for any $\delta' \in (0, \delta)$ and all $x \in \mathbb{R},$

(1.8)\begin{equation} \bigg|\mathbb{P}\big( R_{m,n} \leq x \big) - \Phi(x) \bigg| \leq \frac{ C }{ (m\wedge n)^{\delta/2} }\frac{ 1 }{ 1+|x|^{1+\delta'} }. \end{equation}

According to Lemma 4.3 and (3.16) in the paper, under the given conditions, we conclude that $R_{m,n}$ only has a finite moment of order $1+\delta'$. This explains why the non-uniform Berry–Esseen bound exhibits an order of $\displaystyle |x|^{-1-\delta'}$ as $x \rightarrow \infty$, instead of an order of $\displaystyle |x|^{-2-\delta}$. In particular, we have $\frac{1}{m} \log Z_{2,m} \rightarrow \mu_2$ in probability when $m \rightarrow \infty,$ which leads to $R_{m,n} \rightarrow \frac{\log Z_{1,n} - n \mu_1 \ }{ \sigma_1 \sqrt{n}}$ in probability. Thus, inequality (1.8) implies that

\begin{equation*} \sup_{x\in \mathbb{R}} \bigg|\mathbb{P}\Big( \frac{\log Z_{1,n} - n \mu_1 \ }{ \sigma_1 \sqrt{n}} \leq x \Big) - \Phi(x) \bigg| \leq \frac{ C }{ n^{\delta/2} }\frac{ 1 }{ 1+|x|^{1+\delta'} }, \nonumber \end{equation*}

which improves the Berry–Esseen bound (1.2) by adding a factor $\frac{ 1 }{ 1+|x|^{1+\delta'} } .$

Third, we establish Cramér’s moderate deviations. Assuming conditions A3, A4, and A5 are satisfied, Theorem 2.3 demonstrates that for all $0 \leq x \leq c \sqrt{m \wedge n } $,

(1.9)\begin{equation} \Bigg| \log \frac{\mathbb{P}\big( R_{m,n} \geq x \big)}{1-\Phi(x)} \Bigg| \leq C \frac{1+x^3 }{ \sqrt{m \wedge n} \ }. \end{equation}

When $m\rightarrow \infty,$ it is easy to see that (1.9) holds with $R_{m,n}$ replaced by $\frac{\log Z_{1,n} - n \mu_1 \ }{ \sigma_1 \sqrt{n}} $. Therefore, our results recover Cramér’s moderate deviations (1.3) as initially established by Wang et al. Finally, we explore the construction of confidence intervals for $\mu_1 - \mu_2$ as an application of our findings.

We now briefly explain the organization of this paper. In Section 2, we present our main results. Some applications and simulations are demonstrated in Section 3. The proofs of the results of Section 2 are given in Section 4.

Additionally, the symbols c and C are used to represent a small positive constant and a large positive constant, respectively. Their values may vary from line to line. For two sequences of positive numbers $(a_{n} )_{n\ge 1}$ and $(b_{n} )_{n\ge 1}$, we write $a_n \asymp b_n$ if there exists a positive constant C such that for all n, it holds $C^{-1}b_n \leq a_n \leq C b_n$.

2. Main results

To state our results, we introduce the following conditions:

A1.

There exists a constant $\delta \in (0, 1]$ such that

\begin{equation*} \mathbb{E} [ X_{1,0}^{2+\delta}+ X_{2,0}^{2+\delta} \,] \lt \infty. \end{equation*}
A2.

There exists a constant p > 1 such that

\begin{equation*}\mathbb{E}\bigg[ \frac{Z_{1,1} ^{p}}{m_{1,0}^p} + \frac{Z_{2,1} ^{p}}{m_{2,0}^p} \bigg] \lt \infty .\end{equation*}
A3.

There exists a constant p > 1 such that

\begin{equation*} \mathbb{E}\left ( \frac{Y_{1,0}^{p} }{m_{1,0}^{p} } + \frac{Y_{2,0} ^{p}}{m_{2,0}^{p} }\right ) \lt \infty. \end{equation*}

Theorem 2.1 For all $x \in \mathbb{R},$ we have

(2.1)\begin{equation} \lim_{m \wedge n \rightarrow \infty }\mathbb{P}\big( R_{m,n} \leq x \big) = \Phi(x). \end{equation}

The following theorem gives a non-uniform Berry–Esseen bound for $R_{m,n}$.

Theorem 2.2 Assume that conditions A1, A2, and A3 hold. Let $\delta'$ be a constant such that $\delta' \in (0, \delta).$ Then for all $x \in \mathbb{R},$

(2.2)\begin{equation} \Big|\mathbb{P}\big( R_{m,n} \leq x \big) - \Phi(x) \Big| \leq \frac{ C }{ (m\wedge n)^{\delta/2} }\frac{ 1 }{ 1+|x|^{1+\delta'} }. \end{equation}

Under conditions A1, A2, and A3, it can be shown in the course of the proof that $R_{m,n}$ has a finite moment of order $1+\delta'$. This explains why the non-uniform Berry–Esseen bound (2.2) decays at the rate $|x|^{-1-\delta'}$ rather than $|x|^{-2-\delta}$ as $x \to \infty$. From Theorem 2.2, we can establish the following Berry–Esseen bound for $R_{m,n}$.

Corollary 2.3. Assume that conditions A1, A2, and A3 hold. Then

(2.3)\begin{equation} \sup_{x\in \mathbb{R}} \Big|\mathbb{P}\big( R_{m,n} \leq x \big) - \Phi(x) \Big| \leq \frac{ C }{ (m\wedge n)^{\delta/2} } . \end{equation}

Note that $\frac{1}{m} \log Z_{2,m}$ converges in probability to $\mu_2$; thus,

\begin{equation*}R_{\infty,n} :=\lim_{m \rightarrow \infty }R_{m,n} = \displaystyle \frac{\log Z_{1,n}-n\mu_1}{\sigma_1 \sqrt{n}} \end{equation*}

in probability. Therefore, when $m\rightarrow \infty,$ Corollary 2.3 yields the Berry–Esseen bound established by Wang and Liu [Reference Wang and Liu27], that is,

\begin{equation*} \sup_{x\in \mathbb{R}} \bigg|\mathbb{P}\Big( \frac{\log Z_{1,n} - n \mu_1 \ }{ \sigma_1 \sqrt{n}} \leq x \Big) - \Phi(x) \bigg| \leq \frac{ C }{ n^{\delta/2} } . \nonumber \end{equation*}

It is known that the convergence rate in the last Berry–Esseen bound matches the best achievable rate for i.i.d. random variables with finite moments of order $2+\delta$.

Next, we will establish Cramér’s moderate deviations for $R_{m,n}$. To achieve this, we require the following conditions.

A4.

The random variables $X_{1,0} $ and $X_{2,0} $ have exponential moments, i.e., there exists a constant $\lambda_0 \gt 0 $ such that

\begin{equation*} \mathbb{E} \big[ e^{\lambda_0 X_{1,0} } + e^{\lambda_0 X_{2,0} } \big] \lt \infty. \end{equation*}
A5.

There exists a constant p > 1 such that

\begin{equation*} \mathbb{E} \bigg[ \frac{Z_{1,1} ^{p}}{m_{1,0}} + \frac{Z_{2,1} ^{p}}{m_{2,0}} \bigg] \lt \infty. \end{equation*}

We have the following Cramér’s moderate deviations for $R_{m,n}$.

Theorem 2.3 Assume that conditions A3, A4, and A5 hold. Then for all $0 \leq x \leq c\, \sqrt{m \wedge n} ,$

(2.4)\begin{equation} \Bigg|\log \frac{\mathbb{P}\big( R_{m,n} \geq x \big)}{1-\Phi(x)} \Bigg| \leq C \frac{1+ x^3 }{ \sqrt{m \wedge n}\ } . \end{equation}

Since $-R_{m,n} = R_{n,m}$ (with the roles of the two processes interchanged), by the symmetry between m and n, Theorems 2.1–2.3 remain true when $R_{m,n}$ is replaced by $-R_{m,n}$.

By an argument similar to the proof of Theorem 7.3 in [Reference Wang and Liu26], it becomes evident that Theorem 2.3 implies the following moderate deviation principle (MDP) for $R_{m,n}$.

Corollary 2.4. Assume that conditions A3, A4, and A5 hold. Let $(a_n)$ be a sequence of positive numbers satisfying

\begin{equation*} \frac{a_n }{ m \wedge n } \to 0 \quad \text{and} \quad \frac{a_n }{ \sqrt{m\wedge n} } \to \infty, \quad \text{as} \ \ m\wedge n \rightarrow \infty. \end{equation*}

Then, for any measurable subset B of $\mathbb{R} $,

(2.5)\begin{eqnarray} - \inf_{x \in B^o}\frac{x^2}{2} &\leq & \liminf_{m\wedge n\rightarrow \infty}\frac{1}{a_n^2}\log \mathbb{P}\bigg( \frac{R_{m,n} }{a_n }\in B \bigg) \nonumber \\ &\leq & \limsup_{m\wedge n\rightarrow \infty}\frac{1}{a_n^2}\log \mathbb{P}\bigg(\frac{R_{m,n} }{a_n } \in B \bigg) \leq - \inf_{x \in \overline{B}}\frac{x^2}{2}, \end{eqnarray}

where $B^o$ and $\overline{B}$ denote the interior and the closure of B, respectively.

3. Applications and simulations

3.1. Applications to construction of confidence intervals

In this section, we focus on the construction of confidence intervals for $\mu_1 - \mu_2$. When the parameters $\sigma_1, \sigma_2, $ and ρ are known, we can use Theorems 2.2 and 2.3 to establish confidence intervals for $\mu_1 - \mu_2$.

Proposition 3.1. Let $ \kappa_{m,n} \in (0,1) $ and consider the following two groups of conditions:

H1.

The conditions of Theorem 2.2 hold and

(3.6)\begin{eqnarray} \left|\log \kappa_{m,n}\right|=o\big(\log (m \wedge n) \big), \ \ \ \textrm{as}\ m \wedge n\rightarrow \infty . \end{eqnarray}
H2.

The conditions of Theorem 2.3 hold and

(3.7)\begin{eqnarray} \left|\log \kappa_{m,n}\right|=o\big((m \wedge n) ^{1/3} \big), \ \ \ \textrm{as}\ m \wedge n\rightarrow\infty. \end{eqnarray}

Assume that H1 or H2 holds. Then $\left[A_{m,n},\, B_{m,n}\right] $, with

\begin{eqnarray*} A_{m,n}= \frac{1}{n} \log Z_{1,n}- \frac{1}{m} \log Z_{2,m} - V_{m,n,\rho} \Phi^{-1}\left(1-\frac{\kappa_{m,n}}{2}\right),\\ B_{m,n}=\frac{1}{n} \log Z_{1,n}- \frac{1}{m} \log Z_{2,m} + V_{m,n,\rho} \Phi^{-1}\left(1-\frac{\kappa_{m,n}}{2}\right), \end{eqnarray*}

is a $1-\kappa_{m,n}$ confidence interval for $\mu_1-\mu_2$, when $m \wedge n$ is sufficiently large.
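Numerically, the interval $[A_{m,n}, B_{m,n}]$ is straightforward to compute; Python's `statistics.NormalDist` supplies $\Phi^{-1}$. The sketch below is our own illustration, and the inputs at the end are placeholder values chosen only to exercise the formula:

```python
import math
from statistics import NormalDist

def confidence_interval(log_z1n, log_z2m, m, n, sigma1, sigma2, rho, kappa):
    """[A_{m,n}, B_{m,n}] of Proposition 3.1: an asymptotic 1 - kappa
    confidence interval for mu_1 - mu_2 (sigma_1, sigma_2, rho known)."""
    v = math.sqrt(sigma1**2 / n + sigma2**2 / m
                  - 2 * rho * sigma1 * sigma2 * min(m, n) / (m * n))
    q = NormalDist().inv_cdf(1 - kappa / 2)     # Phi^{-1}(1 - kappa/2)
    center = log_z1n / n - log_z2m / m
    return center - v * q, center + v * q

# Placeholder inputs: log Z_{1,n} = 40, log Z_{2,m} = 30, m = n = 100.
a, b = confidence_interval(40.0, 30.0, 100, 100, 0.2, 0.3, 0.0, 0.05)
```

Here the interval is centred at $\frac1n\log Z_{1,n}-\frac1m\log Z_{2,m} = 0.1$ with half-width $V_{m,n,\rho}\,\Phi^{-1}(1-\kappa_{m,n}/2)$.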

Proof. Assume H1 holds. Theorem 2.2 implies that, as $m\wedge n\rightarrow \infty$,

(3.8)\begin{align} \frac{\mathbb{P}\left(R_{m,n} \gt x\right)}{1-\Phi(x)}=1+o(1)\quad\text{and}\quad \frac{\mathbb{P}\left(R_{m,n} \lt -x\right)}{\Phi(-x)}=1+o(1) \end{align}

uniformly for $ 0\leq x=o\left(\sqrt{\log (m\wedge n) }\right).$ As $ p\searrow 0 $, the quantile function of the standard normal distribution admits the asymptotic expansion

\begin{equation*} \Phi^{-1}(p)=-\sqrt{\log \frac{1}{p^{2}}-\log \log \frac{1}{p^{2}}-\log (2 \pi)}+o(1) .\end{equation*}
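This expansion is easy to check numerically against the exact quantile provided by `statistics.NormalDist` (a quick sketch of our own, not part of the proof):

```python
import math
from statistics import NormalDist

def quantile_expansion(p):
    """Leading-order expansion of Phi^{-1}(p) as p -> 0 (the o(1) term dropped)."""
    L = math.log(1 / p**2)
    return -math.sqrt(L - math.log(L) - math.log(2 * math.pi))

exact = NormalDist().inv_cdf(1e-4)    # about -3.719
approx = quantile_expansion(1e-4)     # about -3.697
```

The gap shrinks as $p \searrow 0$, consistent with the $o(1)$ remainder.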

Specifically, when $\kappa_{m,n}$ satisfies (3.6), the upper $\left(1-\frac{\kappa_{m,n}}{2} \right)$-th quantile of standard normal distribution satisfies

\begin{equation*} \Phi(\Phi^{-1}(1-\frac{\kappa_{m,n}}{2}))=1-\frac{\kappa_{m,n}}{2}=1-\Phi(\Phi^{-1}(\frac{\kappa_{m,n}}{2}))=\Phi(-\Phi^{-1}(\frac{\kappa_{m,n}}{2})), \end{equation*}

hence

\begin{align*} \Phi^{-1}\left(1-\frac{\kappa_{m,n}}{2}\right)=-\Phi^{-1}\left(\frac{\kappa_{m,n}}{2}\right)=O\left(\sqrt{\left|\log \kappa_{m,n}\right|}\, \right), \end{align*}

which, by (3.6), is of order $ o\left(\sqrt{\log(m\wedge n) }\right).$ Applying the last equality to (3.8), we obtain

(3.9)\begin{align} \mathbb{P}\left(R_{m, n} \gt \Phi^{-1}\left(1-\frac{\kappa_{m,n}}{2}\right)\right) \sim \frac{\kappa_{m,n}}{2} \end{align}

and

(3.10)\begin{align} \mathbb{P}\left(R_{m, n} \lt -\Phi^{-1}\left(1-\frac{\kappa_{m,n}}{2}\right)\right) \sim \frac{\kappa_{m,n}}{2}, \end{align}

as $ m\wedge n\rightarrow \infty $. Note that $R_{m, n}\leq\Phi^{-1}(1-(\kappa_{m,n}/2))$ means $\mu_1-\mu_2\geq A_{m,n}$, while $R_{m, n}\geq-\Phi^{-1}(1-(\kappa_{m, n}/2))$ means $\mu_1-\mu_2\leq B_{m,n}$. Thus, as $ m \wedge n\rightarrow \infty $,

(3.11)\begin{align} \mathbb{P}\bigg(-\Phi^{-1}\left(1-\frac{\kappa_{m,n}}{2}\right)\leq R_{m, n} \leq\Phi^{-1}\left(1-\frac{\kappa_{m,n}}{2}\right)\bigg)\sim 1-\kappa_{m,n}. \end{align}

Next, assume H2 holds. By Theorem 2.3, as $m\wedge n\rightarrow \infty$, we have

(3.12)\begin{align} \frac{\mathbb{P}\left(R_{m, n} \gt x\right)}{1-\Phi(x)}=1+o(1)\quad\text{and}\quad \frac{\mathbb{P}\left(R_{m, n} \lt -x\right)}{\Phi(-x)}=1+o(1) \end{align}

uniformly for $ 0\leq x=o( (m\wedge n)^{1/6}).$ When $\kappa_{m,n}$ satisfies (3.7), the upper $\left(1-\frac{\kappa_{m,n}}{2} \right)$-th quantile of the standard normal distribution satisfies

\begin{equation*}\Phi^{-1}\left(1-\frac{\kappa_{m,n}}{2} \right)=-\Phi^{-1}\left(\frac{\kappa_{m,n}}{2} \right)=O\left(\sqrt{\left|\log \kappa_{m,n}\right|}\right), \end{equation*}

which is of order $o\left((m\wedge n)^{1/6}\right)$. By (3.12), we have

\begin{equation*} \mathbb{P}\bigg(-\Phi^{-1}\left(1-\frac{\kappa_{m,n}}{2}\right)\leq R_{m, n}\leq\Phi^{-1}\left(1-\frac{\kappa_{m,n}}{2}\right)\bigg)\sim 1-\kappa_{m,n}, \end{equation*}

as $m\wedge n\rightarrow \infty$. This completes the proof of Proposition 3.1.

When $\{Z_{2,n}, n\geq 0\}$ is an independent copy of $\{Z_{1,n}, n\geq 0\}$, we can apply Theorems 2.2 and 2.3 to construct confidence intervals for $\sigma_1. $

Proposition 3.2. Assume that H1 or H2 holds and let $ \kappa_{n,n} \in (0,1) $. Then $ [A_{n},\, B_n] ,$ with

\begin{equation*}A_{n}=\frac{(\log Z_{1,n} -\log Z_{2,n} )^2 }{2 n \chi_{1-\frac12\kappa_{n,n} }^2(1) } \ \ \ \text{and} \ \ \ B_{n}=\frac{(\log Z_{1,n} -\log Z_{2,n} )^2 }{2 n \chi_{\frac12\kappa_{n,n} }^2(1) } \end{equation*}

is a $ 1-\kappa_{n,n} $ confidence interval for $\sigma_1^2$ for sufficiently large n, where $\chi_{q }^2(1)$ denotes the q-quantile of the chi-squared distribution with one degree of freedom.

Proof. Assume H1 holds. By Theorem 2.2, as $ n\rightarrow \infty$, we have

(3.13)\begin{align} \frac{\mathbb{P}\left(\frac{(\log Z_{1,n} -\log Z_{2,n} )^2 }{ 2 n \sigma_1^2 } \gt x\right)}{\mathbb{P}( \chi^2(1) \geq x)}=1+o(1) \end{align}

uniformly for $ 0\leq x=o (\sqrt{\log n } ).$ Applying (3.13), we have, as $ n\rightarrow \infty $,

(3.14)\begin{align} \mathbb{P}\bigg( \chi_{\frac12 \kappa_{n,n} }^2(1) \leq \frac{(\log Z_{1,n} -\log Z_{2,n} )^2 }{2 n \sigma_1^2 } \leq \chi_{1-\frac12\kappa_{n,n} }^2(1)\bigg)\sim 1-\kappa_{n,n}, \end{align}

which implies that $\sigma_1^2 \in[A_{n},B_n]$ with probability asymptotically $1-\kappa_{n,n}$ for n large enough.

If H2 holds, analogous arguments apply. This completes the proof of Proposition 3.2.
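Since the square of a standard normal variable is $\chi^2(1)$, the quantiles in Proposition 3.2 satisfy $\chi_q^2(1) = \big(\Phi^{-1}\big(\frac{1+q}{2}\big)\big)^2$, so the interval needs only the normal quantile function. A sketch of our own, with placeholder observations:

```python
import math
from statistics import NormalDist

def chi2_quantile_1df(q):
    """q-quantile of the chi-squared distribution with one degree of
    freedom, via chi^2_q(1) = (Phi^{-1}((1 + q) / 2))^2."""
    return NormalDist().inv_cdf((1 + q) / 2) ** 2

def sigma1_sq_interval(log_z1n, log_z2n, n, kappa):
    """[A_n, B_n] of Proposition 3.2 (two independent copies of the process)."""
    t = (log_z1n - log_z2n) ** 2
    return (t / (2 * n * chi2_quantile_1df(1 - kappa / 2)),
            t / (2 * n * chi2_quantile_1df(kappa / 2)))

# Placeholder inputs: log Z_{1,n} = 41, log Z_{2,n} = 39, n = 100.
a_n, b_n = sigma1_sq_interval(41.0, 39.0, 100, 0.05)
```

The interval is wide because a single squared difference carries only one degree of freedom of information about $\sigma_1^2$.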

3.2. Numerical simulation

We now present numerical simulations validating Theorems 2.1–2.3. Let $(X_{1,n,i})_{n\geq0,i\geq1}$ and $(X_{2,n,i})_{n\geq0,i\geq1}$ follow the distributions:

\begin{equation*} \mathbb P_{\xi}(X_{1,n,i}=k)= \begin{cases} \xi_{1,n} & \text{if } k = 1, \\ 1 - \xi_{1,n} & \text{if } k = 2, \end{cases} \end{equation*}
\begin{equation*} \mathbb P_{\xi}(X_{2,n,i}=k)= \begin{cases} \xi_{2,n} & \text{if } k = 1, \\ 1-\xi_{2,n} & \text{if } k = 2. \end{cases} \end{equation*}

Similarly, $(Y_{1,n})_{n\geq0}$ and $(Y_{2,n})_{n\geq0}$ follow Poisson distributions:

\begin{align*} \mathbb P_{\xi}(Y_{1,n}&=k)=\frac{\lambda(\xi_{1,n})^k e^{-\lambda(\xi_{1,n})}}{k!},\\ \mathbb P_{\xi}(Y_{2,n}&=k)=\frac{\lambda(\xi_{2,n})^k e^{-\lambda(\xi_{2,n})}}{k!}, \end{align*}

where $\lambda(\xi_{1,n})=2{{\xi}_{1,n}}+1$ and $\lambda(\xi_{2,n})=3{{\xi}_{2,n}}+0.5$; $\xi_{1,n}$ and $\xi_{2,n}$ follow the uniform distributions $U(0,1)$ and $U(0,0.5)$, respectively. The computed parameters are $\mu_1 = 0.3863, \sigma_1^2 = 0.0391$, and $\mu_2 = 0.2781, \sigma_2^2 = 0.081$. In the theoretical proofs, we assume initial population sizes $Z_{1,0}=Z_{2,0}=1$ for simplicity; however, any finite values of $Z_{1,0}, Z_{2,0}$ would not affect the theoretical conclusions. To obtain better simulation performance, we set $Z_{1,0}=Z_{2,0}=5$ and conducted numerical experiments with environmental correlation coefficients $\rho = 0, 0.5, \text{and }-0.5$. For the numerical verification of Theorems 2.2 and 2.3, we performed 3000 simulation trials with $m\wedge n= 50$ generations of offspring reproduction.
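The $\rho = 0$ case of this design can be reproduced with a short script. The sketch below is our own code: it re-estimates $\mu_l$ and $\sigma_l$ by Monte Carlo over the environment rather than hard-coding values, and it uses a smaller horizon and trial count than the paper so that it runs quickly.

```python
import math
import random
from statistics import mean, stdev

rng = random.Random(42)

def poisson_sample(lam):
    # Knuth's product method; adequate for the small lambdas used here.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def grow(gens, draw_env, lam, z0=5):
    """One BPIRE trajectory for this section's design: each individual has
    1 child w.p. xi and 2 children w.p. 1 - xi, plus Poisson(lam(xi)) immigrants."""
    z = z0
    for _ in range(gens):
        xi = draw_env()
        z = poisson_sample(lam(xi)) + sum(1 if rng.random() < xi else 2
                                          for _ in range(z))
    return z

env1 = rng.random                     # xi_{1,n} ~ U(0, 1)
env2 = lambda: 0.5 * rng.random()     # xi_{2,n} ~ U(0, 0.5)
lam1 = lambda xi: 2 * xi + 1
lam2 = lambda xi: 3 * xi + 0.5

# Monte Carlo estimates of mu_l = E log m_{l,0} and sigma_l (here m_{l,0} = 2 - xi).
s1 = [math.log(2 - env1()) for _ in range(200_000)]
s2 = [math.log(2 - env2()) for _ in range(200_000)]
mu1, sd1, mu2, sd2 = mean(s1), stdev(s1), mean(s2), stdev(s2)

n = m = 12                            # small horizon so the sketch runs fast
V = math.sqrt(sd1**2 / n + sd2**2 / m)   # rho = 0: independent environments
rs = [(math.log(grow(n, env1, lam1)) / n - mu1
       - (math.log(grow(m, env2, lam2)) / m - mu2)) / V
      for _ in range(200)]
```

By Theorem 2.1, the empirical distribution of `rs` should be approximately standard normal for large $m \wedge n$; a histogram of `rs` against the $N(0,1)$ density reproduces the qualitative behaviour of Figure 1.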

Figure 1 demonstrates the convergence of the empirical distribution of $R_{m,n}$ to the standard normal distribution. As $m\wedge n\rightarrow \infty$, the empirical cumulative distribution function approaches the theoretical normal curve across all tested ρ values, validating the CLT (Theorem 2.1).

Figure 2 illustrates the non-uniform Berry–Esseen bound (Theorem 2.2). The upper and lower bounds are demarcated by dashed lines: the lines above and below correspond to $\Phi(x)+\frac{ C }{(m\wedge n)^{\delta/2} }\frac{ 1 }{ 1+|x|^{1+\delta'} }$ and $\Phi(x)-\frac{ C }{ (m\wedge n)^{\delta/2} }\frac{ 1 }{ 1+|x|^{1+\delta'}}$, respectively. The central solid curve represents the standard normal distribution function, while the discrete points within the solid region denote the simulation results.

Figure 3 verifies Cramér’s moderate deviations (Theorem 2.3). The upper and lower dashed lines represent the boundaries $C \frac{1+ x^3 }{ \sqrt{m \wedge n}\ }$ and $-C \frac{1+ x^3 }{ \sqrt{m \wedge n}\ }$, the middle blue solid line is the simulation result, and the red dashed line serves as the theoretical baseline.

Figures 1–3 confirm the validity of Theorems 2.1–2.3, as the simulations align closely with theoretical predictions.

Figure 1. Central limit theorem.

Figure 2. Non-uniform Berry–Esseen bounds.

Figure 3. Cramér’s moderate deviations.

3.3. Proof of Theorem 2.1

The following random walk related to the branching process will be used in our research. For $l= 1, 2,$

\begin{equation*} S_{l,0}=0,\ S_{l,n} =\sum_{i=1}^{n}\log m_{l,i-1},\ n\geq1, \end{equation*}

where the random variables $\{\log m_{l,i-1}\}_{i\geq1}$ are independent and identically distributed, depending only on the environment ξ. Clearly

(3.15)\begin{align} \log Z_{l,n}=S_{l,n}+\log W_{l,n}, \end{align}

where $(W_{l, n})_{n\geq0}$ is a non-negative submartingale under the annealed law $\mathbb{P}$, with respect to the natural filtration

\begin{equation*} \mathcal{F}_0=\sigma\{\xi_{1}, \xi_{2}\} ,\quad \mathcal{F}_n=\sigma\{\xi_{1}, \xi_{2}, Y_{1,k} ,X_{1, k,i}, Y_{2,k} , X_{2, k,i}, 0\leq k \leq n-1, i\geq 1\}, n\geq 1. \end{equation*}

Without loss of generality, we assume that $m \leq n.$ For the sake of simplicity in notation, in the sequel, denote

\begin{equation*}\eta_{m,n,i}= \frac{X_{1,i-1}-\mu_1}{n \, V_{m,n,\rho} \ } , \ \ \ i=1,\cdots, n, \ \ \ \ \textrm{and} \ \ \ \ \eta_{m,n,n+j}= -\frac{X_{2,j-1}-\mu_2}{m \, V_{m,n,\rho} \ } , \ \ \ j=1,\cdots, m. \end{equation*}

We can write $R_{m,n}$ in the following form:

(3.16)\begin{equation} R_{m,n}=\sum_{i=1}^{n+m} \eta_{m,n, i} +\frac{\log W_{1, n}}{n\, V_{m,n,\rho} \ } - \frac{\log W_{2,m}}{m\, V_{m,n,\rho} \ } . \end{equation}

Let

\begin{equation*} N_i = \eta_{m,n, i} + \eta_{m,n, n+i} , \ \ \ i=1,\cdots, m, \ \ \textrm{and} \ \ \ \ N_i=\eta_{m,n, i}, \ \ i=m+1,\cdots, n. \end{equation*}

Then $(N_i)_{1 \leq i \leq n}$ is a finite sequence of centered and independent random variables and satisfies

(3.17)\begin{equation} \sum_{i=1}^n N_i = \sum_{i=1}^{n+m} \eta_{m,n, i} \ \ \ \textrm{and } \ \ \ \sum_{i=1}^n \mathbb{E}N_i^2 =1 . \end{equation}

Furthermore, it is easy to see

\begin{equation*}\textrm{Var}(N_i) \asymp \frac{1}{m} , \ \ \ i=1,\ldots, m, \ \ \textrm{and} \ \ \ \ \textrm{Var}(N_i)\asymp \frac{m}{n^2} , \ \ i=m+1,\ldots, n, \end{equation*}

as $m \rightarrow \infty.$
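A sketch of where these orders come from, assuming $m \le n$, $\operatorname{Var}(X_{l,i-1}) = \sigma_l^2$ under $\mathbb{P}$, and $V_{m,n,\rho}^2 \asymp 1/m$ (the covariance term hidden in $N_i$ for $i \le m$ has at most the same order):

```latex
\operatorname{Var}(\eta_{m,n,n+j}) = \frac{\sigma_2^2}{m^2 V_{m,n,\rho}^2}
  \asymp \frac{1}{m},
\qquad
\operatorname{Var}(\eta_{m,n,i}) = \frac{\sigma_1^2}{n^2 V_{m,n,\rho}^2}
  \asymp \frac{m}{n^2} \le \frac{1}{m},
```

so for $i \le m$ the second-process term of order $1/m$ dominates $\operatorname{Var}(N_i)$, while for $i \gt m$ only the $m/n^2$ term remains.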

Below we will use the relationship between $ W_{l,n} $ and $ \tilde{W}_{l,n}$. The following lemma plays a crucial role, as it demonstrates that $ W_{l,n} $ almost surely converges.

Lemma 3.3. (cf. [Reference Wang and Liu26, Theorem 3.2 and Lemma 4.1])

Assume that condition (1.4) is satisfied for $ l=1,2$. Since the submartingale $ W_{l,n} $ is $L^1$-bounded under $ \mathbb{P} _{\xi,Y} $, the martingale convergence theorem yields

(3.18)\begin{eqnarray} W_{l, \infty}=\lim_{n\rightarrow \infty} W_{l,n} \ \ \mathbb{P} - a.s. \end{eqnarray}

Moreover, $W_{l,\infty}$ takes values in $ \left [ 0,\infty \right ) $ and satisfies the following decomposition formula:

(3.19)\begin{eqnarray} W_{l,\infty} = \tilde {W} ^{\left ( \phi \right ) }_{l,\infty} + \sum_{k=1}^{\infty } \Pi _{k}^{-1} \sum_{i=1}^{Y_{k-1} } \hat{W} ^{\left ( 0_{k-1}i \right )}_{l,\infty} \ \ \text{a.s.} \end{eqnarray}

Proof of Theorem 2.1

Without loss of generality, we assume that $m \leq n.$ Recall that $ 0 \lt \sigma_1, \sigma_2 \lt \infty, $ and equation (3.17). It is worth noting that

\begin{eqnarray*} V_{m,n,\rho} \asymp \frac{1 }{\sqrt{m }} \ \ \ \ \textrm{and} \ \ \ \ \max_{1\leq i \leq n}\textrm{Var}(N_i) \rightarrow 0,\ \ \ m \rightarrow \infty. \end{eqnarray*}

We begin with the decomposition formula (3.16). On the one hand, by the CLT for independent random variables, $ \sum_{i=1}^{n+m} \eta_{m,n, i}$ converges in distribution to the standard normal distribution as $m \rightarrow \infty$. On the other hand, by (3.18), $W_{l,n}$ converges to $W_{l, \infty}$ as $n \rightarrow \infty$, and $ \mathbb{E}W_{l,n} \lt \infty$. Moreover, since $ p_{0}=0$ almost surely, condition (1.5) and the decomposition formula (3.19) give $ W_{l,n} \ge \tilde{W}_{l,n} \gt 0$ a.s. Therefore, we obtain

\begin{equation*}\frac{\log W_{1, n}}{n\, V_{m,n,\rho} \ } \to 0 \quad \text{and} \quad \frac{\log W_{2,m}}{m\, V_{m,n,\rho} \ } \to 0 \end{equation*}

as $m \rightarrow \infty$.

Combining the above results, we see that $R_{m,n}$ converges in distribution to the standard normal distribution. This completes the proof of Theorem 2.1.

4. Proof of Theorem 2.2

In the proof of Theorem 2.2, we require the following non-uniform Berry–Esseen bound derived by Bikelis [Reference Bikelis2]. For more general results, see Chen and Shao [Reference Chen and Shao4].

Lemma 4.1. Let $(X_{i})_{1\leq i \leq n}$ be independent random variables satisfying $\mathbb{E}X_{i}=0$ and $\mathbb{E}\left|X_{i}\right|^{2+\delta} \lt \infty$ for some positive constant $\delta \in(0,1]$ and all $1\leq i \leq n$. Assume that $\sum_{i=1}^{n}\mathbb{E}X_{i}^{2}=1$. Then, for all $x \in \mathbb{R},$

\begin{equation*}\left|\mathbb{P}\bigg(\sum_{i=1}^{n}X_{i}\leq x\bigg)-\Phi(x)\right|\leq \frac{C}{1+|x| ^{2+\delta}}\sum\limits_{i=1}^{n} \mathbb{E}\left|X_{i}\right|^{2+\delta}.\end{equation*}
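As a quick numerical illustration of such a non-uniform bound (our own sanity check with $\delta = 1$ and a deliberately generous constant, not part of the proof), one can take i.i.d. standardized Bernoulli summands, compute the exact binomial CDF, and weight its deviation from $\Phi$ by $1+|x|^{3}$:

```python
import math
from itertools import accumulate

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Standardized Bernoulli(p) sum: X_i = (B_i - p) / s with s^2 = n p (1-p),
# so that sum E X_i^2 = 1 and delta = 1 in Lemma 4.1.
n, p = 200, 0.3
s = math.sqrt(n * p * (1 - p))
lyap = n * (p * (1 - p) ** 3 + (1 - p) * p ** 3) / s ** 3  # sum of E|X_i|^3

pmf = [math.comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1)]
cdf = list(accumulate(pmf))
worst = max(
    (1 + abs((k - n * p) / s) ** 3) * abs(cdf[k] - Phi((k - n * p) / s))
    for k in range(n + 1)
)
print(worst <= 25 * lyap)
```

The weighted error stays well below the Lyapunov-fraction bound, and shrinking it requires growing $n$, in line with the lemma.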

Next we will explore the (conditional) Laplace transforms of $W_{1, \infty}$ and $W_{2, \infty}$, for all $t\geq 0$,

\begin{equation*}\phi_{i,\xi}(t)=\mathbb{E}_{\xi}e^{-tW_{i, \infty}}, \ \ \ \phi_i (t)=\mathbb{E}\phi_{i, \xi}(t)=\mathbb{E} e^{-tW_{i, \infty}},\end{equation*}
\begin{equation*} \tilde{\phi}_{i,\xi}(t)=\mathbb{E}_{\xi}e^{-t\tilde{W} _{i,\infty }^{\left ( \emptyset \right ) } } \quad\text{and}\quad \tilde{\phi}_{i}(t)=\mathbb{E} \tilde{\phi}_{i,\xi}(t)=\mathbb{E}e^{-t\tilde{W} _{i,\infty }^{\left ( \emptyset \right ) } }, \ \ \ i=1, 2. \end{equation*}

Since $ W _{i,\infty } \gt \tilde{W} _{i,\infty } \gt 0 $, it follows that $\phi_i (t) \le \tilde{\phi}_{i}(t)$. We have the following bounds for $\tilde{\phi}_{i}(t), i=1, 2$, as $t \rightarrow \infty.$

Lemma 4.2. Assume that conditions A1 and A2 are satisfied. Then for $i=1,2,$ it holds

\begin{equation*} \tilde{\phi}_{i}(t) \leq \frac{C}{1+ (\log^+ t )^{1+\delta}},\ \ t \rightarrow \infty.\end{equation*}

Here, we use results from Fan et al. [Reference Fan, Hu, Wu and Ye6]. In earlier work, Grama et al. [Reference Grama, Liu and Miqueu7] (see Theorem 3.1) established an upper bound for $\tilde\phi(t)$, namely $\tilde\phi(t)\le Ct^{-\alpha}$ for t > 0, where α is a positive constant. This upper bound is sharper than the one in our Lemma 4.2. However, Theorem 3.1 of Grama et al. requires condition A3, whereas our condition A1 is weaker. Therefore, we cannot directly apply the conclusions of Grama et al.

Next, we obtain the following results regarding the $L^q$ moments of $\log W_{i, n}$ and $\log W_{i, \infty}$. Wang and Liu [Reference Wang and Liu27, Lemma 3.2] previously established this for $q\in \left(1, 1+ \delta/2\right)$. Our results extend their findings to the range $q\in \left(1, 1+ \delta\right)$.

Lemma 4.3. Assume conditions A1 and A2 are satisfied, and that there is a constant ϵ > 0 such that $ \mathbb{E}\left ( \frac{Y_{i,0} }{m_{i,0}} \right ) ^{\epsilon } \lt \infty$ for $ i=1,2 $. Then, for $i=1, 2$ and $\ q \in (1, 1+\delta)$, the following two inequalities hold:

(4.20)\begin{equation} \mathbb{E}|\log W_{i, \infty}|^q \lt \infty, \ \ \sup_{n\in\mathbb{N}}\mathbb{E}|\log W_{i,n}|^q \lt \infty. \end{equation}

Proof. Set $i=1, 2.$ We decompose $\mathbb{E}|\log W_{i, \infty}|^q$ as follows

(4.21)\begin{equation} \mathbb{E}|\log W_{i, \infty}|^q=\mathbb{E}|\log W_{i, \infty}|^q \mathbf{1}_{\{W_{i, \infty} \gt 1\}}+\mathbb{E}|\log W_{i, \infty}|^q \mathbf{1}_{\{W_{i, \infty}\leq1\}}. \end{equation}

For the first term in (4.21), it is crucial to note that there exists a constant C > 0 such that $\left|\log x\right|^q\mathbf{1}_{\left \{x \gt 1 \right \} } \le Cx^{\epsilon}$ for all x > 0. Therefore, we have

(4.22)\begin{equation} \mathbb{E}|\log W_{i, \infty}|^q \mathbf{1}_{\{W_{i, \infty} \gt 1\}}\leq C\, \mathbb{E}W_{i, \infty}^{\epsilon } . \end{equation}

Observe that $p_{0} =0$ a.s. and σ > 0 imply $m_{0} \gt 1$, and thus $ \mathbb{E}m_{0}^{-\epsilon } \lt 1$. From Fatou's lemma and the work of Wang and Liu [Reference Wang and Liu27], we deduce that, under the conditions of Lemma 4.3, $ \mathbb{E}W_{i, \infty}^{\epsilon } \lt \infty$. Thus,

(4.23)\begin{equation} \mathbb{E}|\log W_{i, \infty}|^q \mathbf{1}_{\{W_{i, \infty} \gt 1\}}\leq C\, \mathbb{E}W_{i, \infty}^{\epsilon } \lt \infty . \end{equation}

For the second term, by Markov’s inequality and $\phi_i (t) \le \tilde{\phi}_{i}(t)$, we have

(4.24)\begin{eqnarray} \mathbb{E}|\log W_{i, \infty}|^q\mathbf{1}_{\{W_{i, \infty}\leq1\}} &=& q \int_{\Omega }^{} \int_{1}^{\infty } \frac{1}{t} \left ( \log t \right )^{q-1} \mathbf{1}_{\left \{W_{i, \infty}\le t^{-1} \right \} }dtd\mathbb{P} \nonumber \\ &=& q\int_1^{\infty}\frac{1}{t}(\log t)^{q-1}\mathbb{P}(W_{i, \infty}\leq t^{-1})\, dt \nonumber \\ &\leq&\, q\, e\int_1^{\infty}\frac{\tilde{\phi}_i(t)}{t}( \log t)^{q-1}\, dt\nonumber\\ &=&q\, e\left(\int_1^{e}\frac{\tilde{\phi}_i(t)}{t}(\log t)^{q-1}\, dt+\int_e^{\infty}\frac{\tilde{\phi}_i(t)}{t}(\log t)^{q-1}\, dt\right). \end{eqnarray}

Clearly,

(4.25)\begin{equation} \int_1^{e}\frac{\widetilde{\phi}_i(t)}{t}(\log t)^{q-1}\, dt \lt \infty. \end{equation}

Based on Lemma 4.2 and $q \lt 1+\delta$, we can derive the following:

(4.26)\begin{equation} \int_e^{\infty}\frac{\tilde{\phi}_i(t)}{t}(\log t)^{q-1}\, dt\leq\, C \int_e^{\infty}\frac{1}{t(\log t)^{2+\delta-q}}\, dt \lt \infty. \end{equation}

Substituting (4.25) and (4.26) into (4.24), we obtain

(4.27)\begin{equation} \mathbb{E}|\log W_{i, \infty}|^q \mathbf{1}_{\{W_{i, \infty}\leq1\}} \lt \infty. \end{equation}

Therefore, by (4.21), (4.23), and (4.27), we obtain the first conclusion in (4.20).
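The layer-cake identity behind (4.24), $\mathbb{E}|\log W|^q\mathbf{1}_{\{W\le 1\}} = q\int_1^\infty \frac{1}{t}(\log t)^{q-1}\mathbb{P}(W\le t^{-1})\,dt$, can be checked numerically on a toy case (our own illustration): for $W\sim U(0,1]$ both sides equal $\Gamma(q+1)$, since $\mathbb{P}(W\le t^{-1})=t^{-1}$ and the substitution $u=\log t$ turns the integral into $q\int_0^\infty u^{q-1}e^{-u}\,du = q\,\Gamma(q) = \Gamma(q+1)$:

```python
import math
import numpy as np

q = 1.5
u = np.linspace(0.0, 40.0, 400_001)            # u = log t
f = q * u ** (q - 1) * np.exp(-u)              # integrand after u = log t
quad = float(np.sum((f[1:] + f[:-1]) * np.diff(u)) / 2.0)   # trapezoid rule

rng = np.random.default_rng(0)
w = 1.0 - rng.random(1_000_000)                # W ~ U(0, 1]
mc = float(np.mean(np.abs(np.log(w)) ** q))    # E |log W|^q by Monte Carlo

print(round(quad, 4), round(mc, 3), round(math.gamma(q + 1), 4))
```

Both the quadrature and the Monte Carlo estimate agree with $\Gamma(q+1)$ to the expected accuracy.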

Applying a similar truncation as for $\mathbb{E}\left| \log W_{i, \infty}\right|^q$, we now prove the second conclusion in (4.20). Using the result in [Reference Wang and Liu26], we obtain

\begin{equation*}\sup_{n\in\mathbb{N}}\mathbb{E}\left|\log W_{i,n}\right|^q\mathbf{1}_{\{W_{i,n}\geq1\}}\le C\sup_{n\in\mathbb{N}}\mathbb{E}W_{i,n}^{\epsilon } \lt \infty. \end{equation*}

Since the function $x\mapsto \left| \log x \right|^q \mathbf{1}_{\{x\leq1\}}$, $q \gt 1,$ is decreasing and $ W_{i,n} \ge \tilde{W} _{i,n} $, we have

\begin{equation*}\sup_{n\in\mathbb{N}}\mathbb{E}\left|\log W_{i,n}\right|^q\mathbf{1}_{\{W_{i,n}\leq1\}} \lt \sup _{n\in\mathbb{N}} \mathbb{E}\left | \mathrm{\log}\tilde{W}_{i,n} \right | ^{q}\mathbf{1}_{\left \{\tilde{W }_{i,n} \le 1 \right \} } \lt \infty . \end{equation*}

For the last inequality, see [Reference Fan, Hu, Wu and Ye6]. Combining the above results, we see that $ \sup_{n\in \mathbb{N}}\mathbb{E}\left|\log W_{i,n}\right|^q \lt \infty.$ This completes the proof of Lemma 4.3.

Lemma 4.4. Assume that conditions A1, A2, and A3 are satisfied. Then there exists a constant $\gamma\in(0,1)$ such that

\begin{equation*} \mathbb{E}|\log W_{1,n}-\log W_{1, \infty}| + \mathbb{E}|\log W_{2,m}-\log W_{2, \infty}|\leq C\, \gamma^{m \wedge n} .\end{equation*}

Proof. We first prove the case i = 1. Note that $ \log W_{1,n+1} - \log W_{1,n} =\log\left ( 1+\eta _{1,n} \right ), $ where

\begin{equation*} \eta _{1,n} =\frac{1}{Z_{1,n}} \sum_{i=1}^{Z_{1,n}} \left ( \frac{X_{1,n,i} }{m_{1,n} }-1 \right ) +\frac{Y_{1,n}}{Z _{1,n}m_{1,n}}.\end{equation*}

Under $\mathbb{P}_{\xi}$, the sequence $ \left \{\frac{X_{1,n,i} }{m_{1,n} }-1 \right \}_{i\ge 1} $ consists of i.i.d. random variables with zero mean, independent of $\{Z_{1,n}\}$, and the sequence $ \left \{\frac{Y_{1,n}}{m_{1,n}} \right\}$ is also independent of $\{Z_{1,n}\}$. Choose $ p\in \left ( 1,2 \right ) $ such that A2 and A3 hold. Using the convexity inequality $ \left | x+y \right | ^{p} \le 2^{p-1} \left ( \left | x \right |^{p} + \left | y \right |^{p}\right )$ and the Marcinkiewicz–Zygmund inequality, we get

(4.28)\begin{align} \mathbb{E} \left | \eta _{1,n} \right | ^{p} &\le 2^{p-1}\mathbb{E}\left ( \left | \frac{1}{Z_{1,n} } \sum_{i=1}^{Z_{1,n} } \left ( \frac{X_{1,n,i} }{m_{1,n} } -1 \right ) \right |^{p}\right ) +2^{p-1}\mathbb{E}\left ( Z_{1,n}^{-p}\left | \frac{Y_{1,n} }{m_{1,n} } \right | ^{p} \right)\nonumber\\ &\le 2^{2p-1} \mathbb{E}\left[\mathbb{E}_{\xi}\left({Z_{1,n} ^{1-p}}\right) \mathbb{E}_{\xi}\left(\left |\frac{X_{1,n,1} }{m_{1,n} } -1 \right |^{p}\right)\right]+2^{p-1} \mathbb{E}\left[\mathbb{E}_{\xi } \left(Z_{1,n}^{-p}\right)\mathbb{E}_{\xi }\left(\left | \frac{Y_{1,n} }{m_{1,n} } \right | ^{p}\right)\right]\nonumber\\ &\le 2^{2p-1} \mathbb{E}\left(Z_{1,n}^{1-p}\right)\mathbb{E}\left(\left | \frac{X_{1,0,1} }{m_{1,0} }-1 \right | ^{p}\right)+2^{p-1} \mathbb{E}\left(Z_{1,n}^{-p}\right)\mathbb{E}\left(\left | \frac{Y_{1,0} }{m_{1,0}} \right |^{p}\right). \end{align}

By Grama et al. [Reference Grama, Liu and Miqueu7], for p > 1 we have $ \mathbb{E}\tilde{Z} _{1,n} ^{1-p} \le \left ( \mathbb{E} \tilde{Z} _{1,1} ^{1-p} \right )^{n} $. Since $ Z_{1,n} \ge \tilde{Z} _{1,n} $ and p > 1, we obtain

(4.29)\begin{equation} \mathbb{E}Z_{1,n} ^{1-p}\le \mathbb{E}\tilde{Z} _{1,n} ^{1-p} \le \left (\mathbb{E}\tilde{Z} _{1,1} ^{1-p} \right )^{n}=\left ( \mathbb{E}\, m_{1,0}^{1-p} \right )^{n}. \end{equation}

Substituting (4.29) into (4.28), we obtain

(4.30)\begin{equation} \left (\mathbb{E} \left | \eta _{1,n} \right | ^{p} \right ) ^\frac{1}{p} \le C_{1} \delta _{1}^{n}, \end{equation}

where $\delta _{1}=\left (\mathbb{E}\, m_{1,0}^{1-p} \right ) ^{1/p} \in \left ( 0,1 \right ),$

\begin{equation*} C_{1} =2\max \left \{\left ( \mathbb{E}\left | \frac{X_{1,0,1} }{m_{1,0} } \right |^{p} \right )^{1/p} ,\left (\mathbb{E}\left | \frac{Y_{1,0} }{m_{1,0} } \right |^{p} \right )^{1/p} \right \} \lt \infty . \end{equation*}

Fix $ M \in \left ( 0,1 \right ) $. By decomposition and standard truncation, we have

\begin{align*} \mathbb{E}\left | \log W_{1,n+1} - \log W_{1,n} \right | &=\mathbb{E}\left | \log (1+\eta _{1,n})\right |\mathbf{1}_{\left \{\eta _{1,n}\ge -M \right \} } \\ &\quad +\mathbb{E}\left | \log (1+\eta _{1,n})\right |\mathbf{1}_{\left \{\eta _{1,n} \lt -M \right \} } \\ &= : I_{n} +J_{n}. \end{align*}

It is obvious that there exists a constant C > 0 such that for all $ x \gt -M, $ $ \left | \log(1 + x) \right | \le C\left | x \right | $. By (4.30), we get

(4.31)\begin{equation} I_{n} \le C \mathbb{E}\left | \eta _{1,n} \right | \le C\left ( \mathbb{E} \left | \eta _{1,n} \right | ^{p} \right ) ^\frac{1}{p} \le C_{1} \delta _{1}^{n} . \end{equation}

By Lemma 4.3, for any $ r \in(0, p)$ and under the conditions of Lemma 4.4, we have

\begin{equation*} \underset{n\in \mathbb{N} }{\sup} \mathbb{E}\left | \log \left ( 1+\eta _{1,n} \right ) \right |^{r} \lt \infty . \end{equation*}

Let $ r,s \gt 1 $ satisfy $ \frac{1}{r}+\frac{1}{s} =1 $. By Hölder’s inequality and Markov’s inequality, we have

(4.32)\begin{align} J_{n} &\le \left ( \mathbb{E}\left | \log (1+\eta _{1,n} ) \right | ^{r} \right )^{1/r} (\mathbb{P}\left (\eta _{1,n} \lt -M\right )) ^{1/s} \nonumber\\ &\le C \left ( \mathbb{E} \left | \eta _{1,n} \right | ^{p} \right ) ^{1/s} \le C_{1}\delta _{1}^{n}. \end{align}

Combining with (4.31) and (4.32), we obtain

\begin{equation*}\mathbb{E}\left | \log W_{1,n+1}-\log W_{1,n} \right | \le C_{1}\delta _{1}^{n}. \end{equation*}

By the triangle inequality, for all $ k \in \mathbb{N} $, we have

\begin{equation*} \mathbb{E}\left | \log W_{1,n+k}- \log W_{1,n} \right | \le C_{1}\left ( \delta _{1}^{n} +\cdots + \delta _{1}^{n+k-1} \right ) \le \frac{C_{1}}{1-\delta _{1}}\delta _{1}^{n} . \end{equation*}

Letting $ k\to \infty $ and applying Fatou’s lemma, we obtain $ \mathbb{E}|\log W_{1,\infty}-\log W_{1,n}| \le \frac{C_{1}}{1-\delta _{1}}\delta _{1}^{n}.$ Arguing as above, we can obtain

\begin{equation*} \mathbb{E}|\log W_{2,\infty}-\log W_{2, m}| \le C_{2}\delta _{2}^{m}. \end{equation*}

Adding the two bounds and taking $\gamma = \max\{\delta_1, \delta_2\}$, we conclude that

\begin{eqnarray*} \mathbb{E}|\log W_{1,\infty}-\log W_{1,n}| +\mathbb{E}|\log W_{2,\infty}-\log W_{2, m}| \le C\gamma ^{m \wedge n} . \end{eqnarray*}

This completes the proof of Lemma 4.4.
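For the toy environments of Section 3.2, this geometric decay is easy to observe empirically. The sketch below (our own illustrative code, not part of the proof) averages $|\log W_{1,k+1}-\log W_{1,k}| = |\log Z_{1,k+1}-\log Z_{1,k}-\log m_{1,k}|$ over many paths, with $m_{1,k} = 2 - \xi_{1,k}$:

```python
import numpy as np

rng = np.random.default_rng(1)

def increments(n_gen, z0):
    """|log W_{1,k+1} - log W_{1,k}| along one path of the first process,
    using log W = log Z - sum log m with m_{1,k} = 2 - xi_k."""
    z, out = z0, []
    for _ in range(n_gen):
        xi = rng.uniform()
        z_new = z + rng.binomial(z, 1.0 - xi) + rng.poisson(2.0 * xi + 1.0)
        out.append(abs(np.log(z_new) - np.log(z) - np.log(2.0 - xi)))
        z = z_new
    return out

avg = np.mean([increments(15, 5) for _ in range(2000)], axis=0)
print(np.round(avg[[0, 4, 12]], 3))  # decreasing, roughly geometrically
```

The averaged increments shrink at a roughly constant ratio per generation, consistent with a bound of the form $C_1\delta_1^{n}$.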

The following lemma plays a crucial role in the proof of Theorem 2.2.

Lemma 4.5. Assume that conditions A1, A2, and A3 are satisfied. Let $\delta'$ be a constant such that $\delta' \in (0, \delta).$ Then for all $ x \in \mathbb{R} $,

(4.33)\begin{equation} \mathbb{P}\bigg(R_{m,n} \leq x, \sum_{i=1}^{n+m} \eta_{m,n, i}\geq x\bigg) \leq \frac{C}{(m \wedge n)^{\delta / 2}}\frac{1}{1+|x|^{1+\delta'\ }} \end{equation}

and

(4.34)\begin{equation} \mathbb{P}\bigg(R_{m,n} \geq x, \sum_{i=1}^{n+m} \eta_{m,n, i}\leq x\bigg) \leq \frac{C}{(m \wedge n)^{\delta / 2}}\frac{1}{1+|x|^{1+\delta' \ }}. \end{equation}

Proof. We prove only (4.33); the same method applies to (4.34). Without loss of generality, assume that $ m\leq n$. For all $x \in \mathbb{R},$ the following inequality holds:

(4.35)\begin{eqnarray} \mathbb{P}\bigg(R_{m,n} \leq x, \sum_{i=1}^{n+m} \eta_{m,n, i}\geq x\bigg) \ \leq \ \mathbb{P}\bigg(R_{m,n} \leq x\bigg) \ \leq \ P_1 + P_2 , \end{eqnarray}

where

\begin{equation*} P_1 = \mathbb{P}\bigg( \frac{ \log Z_{1,n} -n \mu_1 }{n V_{m,n, \rho}} \leq \frac x 2 \bigg) \ \ \ \ \ \ \ \textrm{and} \ \ \ \ \ P_2 = \mathbb{P}\bigg( - \frac{ \log Z_{2,m} - m\mu_2 }{m V_{m,n, \rho}} \leq \frac x 2 \bigg).\end{equation*}

Since $Z_{1,n} \geq 1$ $\mathbb{P}$-almost surely and $ V_{m,n, \rho} \asymp m^{-1/2}$ as $m\rightarrow \infty$, there exists a positive constant C such that

\begin{equation*} \frac{ \log Z_{1,n} -n \mu_1 }{n V_{m,n, \rho}} \gt - \frac{ \mu_1 }{\ \ V_{m,n, \rho}} \gt - \frac12 C m^{1/2}\ \ \ \mathbb{P}\textrm{-a.s.} \end{equation*}

First, we prove (4.33) when $x \leq - C m^{1/2}.$ From the inequality above, we deduce

\begin{equation*}\frac{ \log Z_{1,n} -n \mu_1 }{n V_{m,n, \rho}} \gt \frac x 2 , \end{equation*}

hence $P_1=0 $. For $P_2,$ note that

\begin{equation*} \log Z_{2,m} = \sum_{j=1}^{m} X_{2,j-1} + \log W_{2,m} , \end{equation*}

thus, by Lemma 4.1, Markov’s inequality, and $\mathbb{E} W_{2,m} \lt \infty $, we can obtain that for all $ x \leq -C m^{1/2},$

(4.36)\begin{align} P_2 &= \mathbb{P}\bigg(\sum_{j=1}^{m} \eta_{m,n,n+j} - \frac{\log W_{2,m}}{m\, V_{m,n,\rho} \ } \leq -\frac{|x|} 2 \bigg) \nonumber \\ &\leq \mathbb{P}\bigg(\sum_{j=1}^{m} \eta_{m,n,n+j} \leq -\frac{|x|} 4 \bigg) + \mathbb{P}\bigg(\frac{\log W_{2,m}}{m\, V_{m,n,\rho} \ } \geq \frac{|x|} 4 \bigg) \nonumber \\ &\leq \mathbb{P}\bigg(\sum_{j=1}^{m} \eta_{m,n,n+j} \leq - \frac{|x|} 4 \bigg) + \exp\bigg\{- \frac{|x|} 4 m\, V_{m,n,\rho} \bigg\} \mathbb{E} W_{2,m} \nonumber \\ &=: I_{1} +I_{2} . \end{align}

For $I_1$, since $ V_{m,n,\rho} \asymp m^{-1/2}$ as $m \rightarrow \infty$, we will use the inequality

(4.37)\begin{equation} \frac 1{\sqrt{2 \pi} ( 1+x) }e^{- x^2/2}\leq 1-\Phi \left( x\right) \leq \frac 1{\sqrt{\pi} ( 1+x) }e^{- x^2/2}, \ \ x \geq 0, \end{equation}

together with the fact that, for $|x| \geq C m^{1/2}$,

\begin{equation*}\left ( 1+ | x | \right ) e^{\frac{x^{2} }{2}} \ge C m^{\frac{\delta }{2}} \left ( 1+\left | x \right |^{2+\delta }\right ).\end{equation*}

Applying Lemma 4.1, we can obtain

(4.38)\begin{eqnarray} I_{1} &=& 1-\mathbb{P}\bigg( - \sum_{j=1}^{m} \frac{\eta_{m,n,n+j} }{\sigma _{2}} \leq \frac{\left |x\right | }{4\sigma _{2}}\bigg) \nonumber \\ &\leq & 1-\Phi \left (\frac{\left | x \right |}{4\sigma _{2} } \right ) + \frac{C_1}{1+\frac{\left | x\right |^{2+\delta}}{\left (4\sigma _{2} \right )^{2+\delta}} } \sum_{j=1}^{m} \mathbb{E}\bigg( \frac{X_{2,j-1} - \mu_2 }{m V_{m,n, \rho}}\bigg) ^{2+\delta}\nonumber \\ &\leq & \frac{C_3}{m^{\delta / 2}}\frac{1}{1+|x|^{2+\delta}} + \frac{C_4}{1+|x| ^{2+\delta}} \Bigg( \frac{m}{m^{2+\delta} m^{-1 -\delta/2} }\Bigg) \nonumber \\ &\leq & \frac{C_5}{m^{\delta / 2}}\frac{1}{1+|x|^{2+\delta}}. \end{eqnarray}

For $I_2$, we can easily obtain $ \exp\left \{-\frac{1}{4}m \right \} m^{\delta / 2}\left ( 1+Cm^{2+\delta} \right )= o \left ( 1 \right ) $ as $ m \rightarrow \infty.$ Thus,

\begin{equation*} I_{2} \leq \frac{C_6}{m^{\delta / 2}}\frac{1}{1+|x|^{2+\delta}}. \end{equation*}

Combining the bounds above, we conclude that

(4.39)\begin{equation} P_2 \leq I_{1}+ I_{2}\leq \frac{C}{m^{\delta / 2}}\frac{1}{1+|x|^{2+\delta}}. \end{equation}

Hence, inequality (4.33) holds for all $x \leq - C m^{1/2}. $
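Incidentally, the two-sided Gaussian tail estimate (4.37) used above is easy to confirm numerically on a grid (a sanity check of ours, not a proof; note that the two constants $1/\sqrt{2\pi}$ and $1/\sqrt{\pi}$ both matter):

```python
import math

def norm_sf(x):
    """Upper tail 1 - Phi(x) of the standard normal distribution."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

xs = [k / 100.0 for k in range(801)]  # grid on [0, 8]
ok = all(
    math.exp(-x * x / 2.0) / (math.sqrt(2.0 * math.pi) * (1.0 + x))
    <= norm_sf(x)
    <= math.exp(-x * x / 2.0) / (math.sqrt(math.pi) * (1.0 + x))
    for x in xs
)
print(ok)
```

The sandwich holds across the grid, with the upper bound tightest around $x \approx 2.4$.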

Next, we show that inequality (4.33) holds for all $x\geq C m^{1/2}$. By Lemma 4.1 and the inequality

\begin{equation*} (a + b)^{2+\delta} \leq 2^{1+\delta} (|a| ^{2+\delta} + | b| ^{2+\delta} ), \ a, b \in \mathbb{R}, \end{equation*}

we establish that for all $x\geq 0,$

\begin{align*} &\mathbb{P}\bigg(R_{m,n} \leq x, \sum_{i=1}^{n+m} \eta_{m,n, i}\geq x\bigg) \ \ \leq \ \ 1-\Phi (x) \\ &\quad+ \ \frac{C_1}{1+|x| ^{2+\delta}} \Bigg( \sum_{i=1}^{m} \mathbb{E}|\eta_{m,n, i} + \eta_{m,n, n+i} |^{2+\delta} +\sum_{i=m+1}^{n} \mathbb{E}|\eta_{m,n, i} |^{2+\delta} \Bigg) \\ & \leq \ 1-\Phi (x) + \ \frac{C_1}{1+|x| ^{2+\delta}} \Bigg( \sum_{i=1}^{n} \mathbb{E}|\eta_{m,n, i} |^{2+\delta} +\sum_{i= 1}^{m } \mathbb{E}|\eta_{m,n, n+i} |^{2+\delta} \Bigg) \\ &\leq \frac{C_2}{m^{\delta / 2}}\frac{1}{1+|x|^{2+\delta}}+ \frac{C_3}{1+|x| ^{2+\delta}} \Bigg( \frac{n}{n^{2+\delta} m^{-1 -\delta/2} } + \frac{m}{m^{2+\delta} m^{-1 -\delta/2} }\Bigg) \\ &\leq \frac{C_4}{m^{\delta / 2}}\frac{1}{1+|x|^{2+\delta}}. \end{align*}

To complete the proof, we now show that (4.33) holds for $|x| \lt C m^{1/2}$. Consider the following notations, for all $0\leq k \leq m-1,$

\begin{align*} T_{m,n,k}&=\sum_{i=k+1}^{n} \eta_{m,n, i}+\sum_{j=k+1}^{m} \eta_{m,n, n+j},\quad \tilde{T}_{m,n, k}=T_{m,n,0}- T_{m,n,k} ,\\ H_{m,n,k}&=\frac{\log W_{1, k}}{n\, V_{m,n,\rho} \ } - \frac{\log W_{2,k}}{m\, V_{m,n,\rho} \ } \quad\ \text{and}\quad\ D_{m, n , k}=\frac{\log W_{1, n}}{n\, V_{m,n,\rho} \ } - \frac{\log W_{2,m}}{m\, V_{m,n,\rho} \ }-H_{m,n,k}. \end{align*}

Let $ \alpha_{m}= m^{-\delta/2}$ and $k=[ m^{1-\delta/2} \,]$, where $ [t]$ denotes the largest integer not exceeding t. From equation (3.16), we deduce that for all $ x \in \mathbb{R}$,

(4.40)\begin{align} \mathbb{P}\bigg(R_{m,n} \leq x, \sum_{i=1}^{n+m} \eta_{m,n, i}\geq x\bigg) &\leq \mathbb{P}\bigg(T_{m,n,0}+H_{m,n,k}\leq x+ \alpha_{m}, T_{m,n,0}\geq x\bigg) \nonumber \\ &\quad + \ \mathbb{P}\bigg(|D_{m, n , k}|\geq \alpha_{m}\bigg). \end{align}

We first provide an estimation for the first term on the RHS of (4.40). Let

\begin{equation*}G_{m, n,k}(x)=\mathbb{P}\left(T_{m, n,k} \leq x\right)\ \text{and } \ v_{k}(d s, d t)=\mathbb{P}\left(\tilde{T}_{m,n, k} \in d s, H_{m,n,k} \in d t\right).\end{equation*}

Due to the independence between $ T_{m, n,k} $ and $(\tilde{T}_{m,n, k},H_{m,n,k}) , $ we have

(4.41)\begin{align} &\mathbb{P}\Big(T_{m,n,0}+H_{m,n,k} \leq x+\alpha_{m}, T_{m,n,0} \geq x\Big)\nonumber \\ &= \int\!\!\int \mathbf{1}_{\{t \leq \alpha_{m}\}}\Big(G_{m, n,k}\left(x-s-t+\alpha_{m}\right)-G_{m, n,k}(x-s)\Big) v_{k}(ds, dt) \nonumber\\ & =\int\!\!\int \mathbf{1}_{\{t \leq \alpha_{m}\}}\Big(G_{m,n,k}\left(x-s-t+\alpha_{m}\right)-\Phi\left(x-s-t+\alpha_{m}\right)\Big) v_{k}(ds, dt) \nonumber\\ &\quad - \int\!\!\int \mathbf{1}_{\{t \leq \alpha_{m}\}}\Big(G_{m, n,k}(x-s)-\Phi\left(x-s\right)\Big) v_{k}(ds, dt) \nonumber\\ &\quad + \int\!\!\int \mathbf{1}_{\{t \leq \alpha_{m}\}} \Big(\Phi\left(x-s-t+\alpha_{m}\right)- \Phi\left(x-s\right)\Big) v_{k}(ds, dt). \end{align}

Denote $C_{m,n,k}^2= \textrm{Var} (T_{m,n,k}), $ then it holds $ C_{m,n,k}= 1 + O(k/n)\nearrow 1 $ as $ m \rightarrow \infty.$ By Lemma 4.1, for all $x \in \mathbb{R},$ we have

(4.42)\begin{align} & \left|\mathbb{P}\left( \frac{T_{m,n,k} }{C_{m,n,k} } \leq \frac{x}{C_{m,n,k}}\right)-\Phi\left(\frac{x}{C_{m,n,k}}\right)\right| \nonumber \\ & \leq \frac{C_2}{1+|x |^{2+\delta}}\left( \sum_{i=k+1}^{n}\mathbb{E}\left|\frac{X_{1,i-1}-\mu_1}{nV_{m,n,\rho} }\right|^{2+\delta} + \sum_{j=k+1}^{m}\mathbb{E}\left|\frac{X_{2,j-1}-\mu_2}{mV_{m,n,\rho} }\right|^{2+\delta}\right) \nonumber \\ & \leq \frac{C_{4} }{m^{\delta / 2}}\frac{1}{1+|x|^{2+\delta}}. \end{align}

By the mean value theorem, for all $x \in \mathbb{R},$

(4.43)\begin{eqnarray} \left |\Phi \left ( \frac{x}{C_{m,n,k}} \right ) -\Phi \left ( x \right ) \right | \leq |x| \exp\left \{{-\frac{x^{2} }{2} } \right \} \left | \frac{1}{C_{m,n,k}} -1\right | \leq \frac{C }{m^{\delta / 2}}\frac{1}{1+|x|^{2+\delta}}. \end{eqnarray}

Combining (4.42) and (4.43), we deduce that for all $x \in \mathbb{R},$

(4.44)\begin{eqnarray} \left|G_{m, n,k}(x)-\Phi(x)\right| \leq \frac{C}{m^{\delta / 2}} \frac{1}{1+|x|^{2+\delta}}. \end{eqnarray}

Therefore, we have for all $x \in \mathbb{R},$

(4.45)\begin{eqnarray} \mathbb{P}\Big(T_{m,n,0}+H_{m,n,k} \leqslant x+\alpha_{m}, T_{m,n,0} \geqslant x\Big) \leq J_{1}+J_{2} +J_3, \end{eqnarray}

where

\begin{align*}J_{1}&=\int\!\!\int \mathbf{1}_{\{t \leqslant \alpha_{m}\}}\left|\Phi\left(x-s-t+\alpha_{m}\right)-\Phi(x-s)\right| v_{k}(d s, d t),\\ J_{2}&=\frac{C}{m^{\delta / 2}} \int\!\!\int \mathbf{1}_{\{t \leqslant \alpha_{m}\}} \frac{1}{1+|x-s|^{2+\delta}} v_{k}(d s, d t) \end{align*}

and

\begin{equation*} J_{3}=\frac{C}{m^{\delta / 2}}\int\!\!\int \mathbf{1}_{\{t \leqslant \alpha_{m}\}} \frac{1}{1+|x-s-t|^{2+\delta}} v_{k}(d s, d t). \end{equation*}

For J 1, let ξ denote a point between $x-s-t+\alpha_{m}$ and $x-s$ (as furnished by the mean value theorem below). We have for all $x \in \mathbb{R},$

\begin{eqnarray*} \Phi' \left ( \xi \right ) \mathbf{1}_{\{|s| \lt 1+ \frac{1}{4 }|x| \}}\mathbf{1}_{\{|t| \leq 1+ \frac{1}{4 }|x| \}} \leq C\exp\left\{-\frac{x^{2}}{8}\right\}. \end{eqnarray*}

Then by the mean value theorem, we have for all $x \in \mathbb{R},$

\begin{eqnarray*} & & \mathbf{1}_{\{t \leqslant \alpha_{m}\}}\left|\Phi\left(x-s-t+\alpha_{m}\right)-\Phi(x-s)\right| \leq |\alpha_{m}-t |\Phi' {\left ( \xi \right ) } \\ && \ \ \ \ \ \leq |\alpha_{m}-t |[\mathbf{1}_{\{|s| \geq 1+ \frac{1}{4 }|x| \}} + \Phi' {\left ( \xi \right ) } \mathbf{1}_{\{|t| \geq 1+\frac{1}{4 }|x| \}}+\Phi' {\left ( \xi \right ) } \mathbf{1}_{\{|s| \lt 1+ \frac{1}{4 }|x|, |t| \leq 1+ \frac{1}{4 }|x|\}}] \\ &&\ \ \ \ \ \leq |\alpha_{m}-t |[C\exp\left\{-\frac{x^{2}}{8}\right\} + \mathbf{1}_{\{|s| \geq 1+ \frac{1}{4 }|x| \}} +\mathbf{1}_{\{|t| \geq 1+ \frac{1}{4 }|x| \}}], \end{eqnarray*}

thus

(4.46)\begin{eqnarray} J_{1}\leqslant J_{11}+ J_{12}+ J_{13}, \end{eqnarray}

where

\begin{equation*}J_{11}= C\int\!\!\int |\alpha_{m}-t |\exp\left\{-\frac{x^{2}}{8}\right\} v_{k}(d s, d t), \ \ J_{12}= \int\!\!\int |\alpha_{m}-t |\mathbf{1}_{\{|s| \geq 1+ \frac{1}{4 }|x| \}} v_{k}(d s, d t) \end{equation*}

and

\begin{eqnarray*} J_{13}= \int\!\!\int |\alpha_{m}-t |\mathbf{1}_{\{|t| \geq 1+ \frac{1}{4 }|x| \}} v_{k}(d s, d t). \nonumber \end{eqnarray*}

Based on Lemma 4.3, it is evident that for all $x \in \mathbb{R},$

(4.47)\begin{eqnarray} J_{11} \leq C \exp\left\{-\frac{x^{2}}{8}\right\} \bigg(\alpha_{m} + \mathbb{E}| H_{m,n,k}| \bigg) \leq\frac{C_2}{m^{\delta/ 2}} \frac{1}{1+|x|^{2+\delta}}. \end{eqnarray}

For $J_{12},$ we can make the following estimation, for all $x \in \mathbb{R},$

\begin{eqnarray*} J_{12} & \leq & \alpha_{m} \mathbb{P}\bigg( |\tilde{T}_{m,n, k} | \geq 1+ \frac{1}{4 }|x| \bigg) + \mathbb{E}| H_{m,n,k}|\mathbf{1}_{\{|\tilde{T}_{m,n, k} | \geq 1+ \frac{1}{4 }|x| \}} . \nonumber \end{eqnarray*}

Denote $\tilde{C}_{m,n,k}^2= \textrm{Var}(\tilde{T}_{m,n, k} )$, then we can establish that $ \tilde{C}_{m,n,k}^2\asymp \frac{1}{m^{\delta/2}}.$ Now, let $\delta' \in (0, \delta)$. Applying Lemma 4.1, we can conclude that for all $x \in \mathbb{R},$

(4.48)\begin{align} \mathbb{P}\bigg( |\tilde{T}_{m,n, k} | \geq 1+ \frac{1}{4 }|x| \bigg) &\leq 2\left(1-\Phi\left(\frac{1+|x|/4}{\tilde{C}_{m,n,k} }\right)\right) + \frac{C}{\Big| \frac{1+|x|/4}{\tilde{C}_{m,n,k} }\Big|^{2+\delta}} \sum_{i=1}^{k}\mathbb{E} \Big| \frac{\eta_{m,n, i}+ \eta_{m,n, n+i}}{\tilde{C}_{m,n,k}} \Big|^{2+\delta} \nonumber\\ &\leq \frac{C_2}{1+|x|^{2+\delta}} \frac{1 }{m^{\delta }}. \end{align}

Let $\tau= 1+ \frac{\delta + \delta'}{2+2\delta-\delta'}.$ We have the following relationship:

\begin{align*} \mathbb{E}\left | H_{m,n,k} \right | ^{\tau } =\mathbb{E}\left | \frac{\log W_{1,k} }{nV_{m,n,\rho}} -\frac{\log W_{2,k}}{mV_{m,n,\rho}} \right |^{\tau } \le 2^{\tau}\left ( \mathbb{E}\left | \frac{\log W_{1,k} }{nV_{m,n,\rho}} \right |^{\tau } +\mathbb{E}\left | \frac{\log W_{2,k} }{mV_{m,n,\rho}} \right |^{\tau }\right ) . \end{align*}

Applying Lemma 4.3, we have

\begin{equation*} 2^{\tau}\mathbb{E}\left | \frac{\log W_{1,k} }{nV_{m,n,\rho}} \right |^{\tau } \le\frac{C_{1}}{m^{\tau/2} } . \end{equation*}

Thus

\begin{equation*} \left ( \mathbb{E}\left | H_{m,n,k} \right | ^{\tau } \right )^{1/{\tau }} \le \frac{C}{m^{1/2} }. \end{equation*}

Using Hölder’s inequality with ι satisfying $\frac{1}{\tau} + \frac{1}{\iota}=1$, we have

(4.49)\begin{align} \mathbb{E}| H_{m,n,k}|\mathbf{1}_{\{|\tilde{T}_{m,n, k} | \geq 1+ \frac{1}{4 }|x| \}} & \leq \Big(\mathbb{E}| H_{m,n,k}|^ \tau \Big)^{1/\tau} \Big(\mathbb{P}\big( |\tilde{T}_{m,n, k} | \geq 1+ \frac{1}{4 }|x| \big)\Big) ^{1/\iota} \nonumber \\ &\leq \frac{C }{m^{\delta / 2}} \frac{1}{1+|x|^{1+\delta' \ }}. \end{align}

Combining inequalities (4.48) and (4.49), we have for all $|x| \leq C m^{1/2},$

(4.50)\begin{align} J_{12} \leq \frac{C_3}{m^{\delta / 2}} \frac{1}{1+|x|^{1+\delta'}}. \end{align}

For $J_{13},$ we have for all $x \in \mathbb{R},$

\begin{align*} J_{13} \leq \alpha_{m} \mathbb{P}\bigg( |H_{m,n,k}| \geq 1+ \frac{1}{4 }|x| \bigg) + \mathbb{E}| H_{m,n,k}|\mathbf{1}_{\{|H_{m,n,k} | \geq 1+ \frac{1}{4 }|x| \}} . \nonumber \end{align*}

Let $p' = 1 + \delta/2$, by Markov’s inequality and Lemma 4.3, for all $|x| \leq C m^{1/2},$ we have

(4.51)\begin{align} \mathbb{P}\bigg( |H_{m,n,k} | \geq 1+ \frac{1}{4 }|x| \bigg) & \leq \frac{4^{p'}}{1+ |x|^{p'}} \mathbb{E} |H_{m,n,k} |^{p'} \ \nonumber \\ & \leq \frac{C}{1+ |x|^{p'}} \frac{1}{m^{p'/2}} \leq\frac{C}{1+ |x|^{2+\delta}} , \end{align}

and, similarly to (4.51) with $p'' =\frac{1}{2}(\delta+\delta') ,$

\begin{eqnarray*} \mathbb{E}| H_{m,n,k}|\mathbf{1}_{\{|H_{m,n,k} | \geq 1+ \frac{1}{4 }|x| \}} \leq \frac{C_4}{m^{\delta / 2}} \frac{1}{1+|x|^{1+\delta'}} . \nonumber \end{eqnarray*}

Hence, we have for all $|x| \leq C m^{1/2}, $

(4.52)\begin{equation} J_{13} \leq\frac{C}{m^{\delta / 2}} \frac{1}{1+|x|^{1+\delta'}} . \end{equation}

Substituting (4.47), (4.50), and (4.52) into (4.46), for all $|x| \leq C m^{1/2},$ we conclude

(4.53)\begin{eqnarray} J_{1} \leq \frac{C}{m^{\delta / 2}} \frac{1}{1+|x|^{1+\delta'}\ } . \end{eqnarray}

Next, we consider J 2. By an argument similar to the proof of (4.48), we can conclude that for all $|x| \leq C m^{1/2}, $

(4.54)\begin{align} J_{2} &\leq\frac{C_1}{m^{\delta / 2}}\left(\int_{|s| \lt 1+ |x|/2}\frac{1}{1+|x-s|^{2+\delta}} v_{k}(d s)+\int_{|s| \geq 1+ |x|/2}\frac{1}{1+|x-s|^{2+\delta}} v_{k}(d s)\right) \nonumber \\ &\leq\frac{C_3}{m^{\delta / 2}}\left[ \frac{1}{1+|x/2|^{2+\delta}}+\mathbb{P}\bigg( \Big| \frac{\tilde{T}_{m,n,k} }{\tilde{C}_{m,n,k} } \Big| \gt \frac{1+|x|/2}{\tilde{C}_{m,n,k} } \bigg)\right]\nonumber \\ &\leq \frac{C_4}{m^{\delta / 2}}\frac{1}{1+|x|^{2+\delta}}. \end{align}

For $J_3$, using arguments similar to those in (4.48) and (4.51), we obtain for all $ |x| \leq C m^{1/2}, $

(4.55)\begin{align} &J_{3} \leq\frac{C_1}{m^{\delta / 2}}\left(\int\!\!\int_{|s+t|\leqslant 2+ |x|/2}\frac{1}{1+|x/2|^{2+\delta}} v_{k}(d s, d t) \right. \nonumber\\ &\quad \left.+\int\!\!\int_{|s| \gt 1+ |x|/4} v_{k}(d s, d t)+ \int\!\!\int_{|t| \gt 1+ |x|/4} v_{k}(d s, d t) \right) \nonumber \\ &\leq \frac{C_2}{m^{\delta / 2}}\left[ \frac{1}{1+|x/2|^{2+\delta}}+\mathbb{P}\bigg( \Big| \frac{\tilde{T}_{m,n,k} }{\tilde{C}_{m,n,k} } \Big| \gt \frac{1+|x|/4}{\tilde{C}_{m,n,k} } \bigg)+ \mathbb{P}\bigg( |H_{m,n,k}| \gt 1+ \frac{|x|}{4}\bigg) \right]\nonumber \\ &\leq \frac{C_3}{m^{\delta / 2}}\frac{1}{1+|x|^{2+\delta}}. \end{align}

Substituting (4.53)–(4.55) into (4.45), for all $|x| \leq C m^{1/2},$ we have

(4.56)\begin{eqnarray} \mathbb{P}\Big(T_{m,n,0}+H_{m,n,k}\leq x+ \alpha_{m}, T_{m,n,0}\geq x\Big) \leq\frac{C}{m^{\delta / 2}}\frac{1}{1+|x|^{1+\delta'}}. \end{eqnarray}

We now bound the tail probability $\mathbb{P}\left(|D_{m, n , k}|\geq \alpha_{m}\right)$. By Markov’s inequality and Lemma 4.4, there exists a constant $\gamma \in (0,1)$ such that for all $ - m \lt x \lt m$,

(4.57)\begin{align} \mathbb{P}\left(\left|D_{m, n, k}\right| \gt \alpha_{m}\right) & \leq \frac{\mathbb{E}\left|D_{m, n,k}\right|}{\alpha_{m}}\nonumber\\ &\leq\frac{m^{\delta/2} }{V_{m,n,\rho}}\Bigg( \mathbb{E} \left|\frac{\log W_{1, n}}{n}-\frac{\log W_{1, \infty}}{n}\right| + \mathbb{E} \left|\frac{\log W_{2,m}}{m}-\frac{\log W_{2,\infty}}{m}\right| \nonumber\\ &\quad + \mathbb{E} \left|\frac{\log W_{1, k}}{n}-\frac{\log W_{1, \infty}}{n}\right| + \mathbb{E} \left|\frac{\log W_{2,k}}{m}-\frac{\log W_{2, \infty}}{m}\right| \Bigg) \nonumber\\ & \leq C_1 \, m^{(1+ \delta)/2 } \bigg( \frac1n \gamma ^{n} + \frac1m \gamma ^{n}+ \frac1n \gamma ^{k} + \frac1m \gamma ^{k} \bigg) \nonumber \\ & \leq C_2 \, m^{\delta-1/2 } \left ( \gamma ^{n} + \gamma ^{k } \right ) \nonumber \\ & \leq \frac{C }{m^{\delta / 2}} \frac{1}{1+|x|^{2+\delta}}. \end{align}

The last inequality follows because $ m^{\delta-1 }\left ( 1+m ^{2+\delta }\right ) \gamma ^{m} =o\left ( 1 \right )$ as $m\rightarrow \infty$. Combining (4.40), (4.56) and (4.57), we conclude that (4.33) holds for all $|x| \leq C m^{1/2}$. This completes the proof of Lemma 4.5. $\hfill\square$

Proof of Theorem 2.2

Notice that

(4.58)\begin{align} \mathbb{P}\left(R_{m,n} \leq x\right) & = \mathbb{P}\Big( \sum_{i=1}^{n+m} \eta_{m,n, i} \leq x\Big) -\mathbb{P}\Big(R_{m,n} \gt x, \sum_{i=1}^{n+m} \eta_{m,n, i} \leq x\Big)\nonumber\\ &\quad +\ \mathbb{P}\Big(R_{m,n} \leq x, \sum_{i=1}^{n+m} \eta_{m,n, i} \gt x\Big). \end{align}

By Lemma 4.1 and the fact that $V_{m,n, \rho} \asymp \sqrt{m^{-1 } + n^{-1 } } $, we can establish the following result for all $x \in \mathbb{R},$

(4.59)\begin{align} \left|\mathbb{P}\bigg(\sum_{i=1}^{n+m} \eta_{m,n, i} \leq x\bigg)-\Phi(x)\right| &\leq \frac{C_1}{1+|x| ^{2+\delta}} \Bigg( \sum_{i=1}^{n} \mathbb{E}|\eta_{m,n, i} |^{2+\delta} +\sum_{j=1}^{m} \mathbb{E}|\eta_{m,n, n+j} |^{2+\delta} \Bigg) \nonumber \\ &\leq \frac{C_2}{1+|x| ^{2+\delta}} \Bigg( \frac{n}{n^{2+\delta} \, ( \frac1n+ \frac1m )^{(2+\delta)/2} } + \frac{m}{m^{2+\delta} \, ( \frac1n+ \frac1m )^{(2+\delta)/2} } \Bigg) \nonumber \\ &\leq \frac{C}{(m \wedge n)^{\delta/2}} \frac{1}{1+|x| ^{2+\delta}} . \end{align}

Combining (4.59) with Lemma 4.5 and substituting into (4.58), for all $x \in \mathbb{R},$ we obtain

(4.60)\begin{align} \Big|\mathbb{P}\left(R_{m,n} \leq x\right) -\Phi(x)\Big| &\leq |\mathbb{P}\Big( \sum_{i=1}^{n+m} \eta_{m,n, i} \leq x \Big) -\Phi(x) | + \mathbb{P}\Big(R_{m,n} \gt x, \sum_{i=1}^{n+m} \eta_{m,n, i} \leq x\Big) \nonumber\\ &\quad + \mathbb{P}\Big(R_{m,n} \leq x, \sum_{i=1}^{n+m} \eta_{m,n, i} \gt x\Big) \nonumber\\ &\leq \frac{C}{(m \wedge n)^{\delta/2}} \frac{1}{1+|x| ^{1+\delta'}}. \end{align}

This completes the proof of Theorem 2.2.

5. Proof of Theorem 2.3

To prove Theorem 2.3, we first establish the existence of harmonic moments of some positive order α > 0 for the BPIRE, for both $W_{1, n}$ and $W_{2, m}$. Additionally, we will make use of a lemma from [6].

Lemma 5.1. Assume A3, A4, and A5 hold. There exists a constant $ a_0 \gt 0$ such that for all $\alpha \in (0, a_0),$ the following inequalities hold

(5.61)\begin{eqnarray} \mathbb{E}W_{1, \infty}^{-\alpha} + \mathbb{E}W_{2, \infty}^{-\alpha} \lt \infty \end{eqnarray}

and

(5.62)\begin{equation} \sup_{n\in\mathbb{N}} \big( \mathbb{E}W_{1, n}^{-\alpha } + \mathbb{E}W_{2, n}^{-\alpha } \big) \lt \infty. \end{equation}

Proof. Let $i=1, 2.$ By the fact that

\begin{equation*}W_{i, \infty}^{-\alpha}=\frac{1}{\Gamma\left(\alpha\right)} \int_{0}^{\infty} e^{-tW_{i, \infty} } t^{\alpha-1} d t,\end{equation*}

we obtain

(5.63)\begin{align} \mathbb{E}W_{i, \infty}^{-\alpha} &=\frac{1}{\Gamma\left(\alpha\right)} \int_{0}^{\infty} \phi_i(t) t^{\alpha-1} dt\nonumber\\ &=\frac{1}{\Gamma\left(\alpha\right)}\left( \int_{0}^{1} \phi_i(t) t^{\alpha-1} dt+ \int_{1}^{\infty} \phi_i(t) t^{\alpha-1} dt\right), \end{align}

where Γ is the gamma function.

Since $0\leq \phi_i(t)\leq 1$ for $t \geq 0$, the first term in (5.63) satisfies, for any α > 0,

(5.64)\begin{equation} \int_{0}^{1} \phi_i(t) t^{\alpha-1} dt\leq \int_{0}^{1} t^{\alpha-1} dt \lt \infty. \end{equation}

For the second term in (5.63), by (4.56) in [6] and Lemma 4.1 in [20], for $0 \lt \alpha \lt a_{0}$ we have

(5.65)\begin{align} \int_{1}^{\infty} \phi_i(t) t^{\alpha-1} dt & \leq \int_{1}^{\infty} \tilde\phi_i(t) t^{\alpha-1} dt \nonumber\\ & \leq C \int_{1}^{\infty} t^{\alpha-a_{0}-1} dt \lt \infty. \end{align}

Combining (5.63), (5.64), and (5.65), we conclude that (5.61) holds.

Now, we prove inequality (5.62). Note that the function $x\mapsto x^{-\alpha}\ (\alpha \gt 0,\ x \gt 0)$ is non-negative and convex. Then by Lemma 2.1 in [9], we have

\begin{equation*}\sup_{n\in\mathbb{N}}\mathbb{E} W_{i, n}^{-\alpha}=\mathbb{E} W_{i, \infty}^{-\alpha} \lt \infty.\end{equation*}

This completes the proof of Lemma 5.1.

Lemma 5.2. Assume A3, A4, and A5 hold. Then for all $ |x| \leq \sqrt{\log (m \wedge n)} ,$

(5.66)\begin{equation} \mathbb{P}\bigg(R_{m,n} \leq x, \sum_{i=1}^{n+m} \eta_{m,n, i}\geq x\bigg) \leq C \frac{1+ x^2 }{\sqrt{m \wedge n} \ } \exp\Big\{- \frac12 x^2 \Big\} \end{equation}

and

(5.67)\begin{equation} \mathbb{P}\bigg(R_{m,n} \geq x, \sum_{i=1}^{n+m} \eta_{m,n, i}\leq x\bigg) \leq C \frac{1+ x^2 }{\sqrt{m \wedge n} \ } \exp\Big\{- \frac12 x^2 \Big\}. \end{equation}

Proof. Since A4 and A5 imply A1 and A2, inequalities (5.66) and (5.67) follow directly from Lemma 4.5 for $ |x| \leq 1.$ Therefore, it remains to establish the inequalities for $ 1 \leq |x| \leq \sqrt{\log (m \wedge n)}.$ Moreover, we shall only present a proof for (5.66), as the proof for (5.67) follows a similar approach.

Without loss of generality, we assume that $ m\leq n$. For $|x| \leq m^{1/6},$ using Cramér’s moderate deviations for independent random variables, we derive

(5.68)\begin{eqnarray} &&\left|\mathbb{P}\left( \frac{T_{m,n,k} }{C_{m,n,k} } \leq \frac{x}{C_{m,n,k}}\right)-\Phi\left(\frac{x}{C_{m,n,k}}\right)\right|\nonumber \\ && \leq \left | -\left ( 1-\Phi\left(\frac{x}{C_{m,n,k}}\right) \right )\left ( \exp \left \{-C\frac{1+x^{3} }{\sqrt{m} } \right \}-1 \right ) \right | \nonumber \\ && \leq \frac{C\left ( 1+x^{2} -x \right ) }{\sqrt{\pi m } } \exp\left \{-\frac{x^{2} }{2} \right \} \nonumber \\ && \leq C_1 \frac{1+ |x|^2 }{\sqrt{m} } \exp\bigg\{ - \frac{x^2 }{2 \big( 1+ \frac{C}{\sqrt{m}} |x| \big) } \bigg\}. \end{eqnarray}

For $|x| \gt m^{1/6}$, by Bernstein’s inequality for independent random variables,

(5.69)\begin{align} \left|\mathbb{P}\left( \frac{T_{m,n,k} }{C_{m,n,k} } \leq \frac{x}{C_{m,n,k}}\right)-\Phi\left(\frac{x}{C_{m,n,k}}\right)\right | &\leq \mathbb{P}\left( \frac{T_{m,n,k} }{C_{m,n,k} } \gt \frac{x}{C_{m,n,k}}\right)+1 -\Phi\left(\frac{x}{C_{m,n,k}}\right) \nonumber \\ &\leq C_2 \frac{1+ |x|^2 }{\sqrt{m} } \exp\bigg\{ - \frac{x^2 }{2 \big( 1+ \frac{C}{\sqrt{m}} |x| \big) } \bigg\}. \end{align}

Combining (5.68) and (5.69), we obtain

\begin{eqnarray*} \left|\mathbb{P}\left( \frac{T_{m,n,k} }{C_{m,n,k} } \leq \frac{x}{C_{m,n,k}}\right)-\Phi\left(\frac{x}{C_{m,n,k}}\right)\right| \leq C\frac{1+ |x|^2 }{\sqrt{m} } \exp\bigg\{ - \frac{x^2 }{2 \big( 1+ \frac{C}{\sqrt{m}} |x| \big) } \bigg\}. \end{eqnarray*}

From the last inequality, for all $ x \in \mathbb{R} ,$ we deduce

\begin{align*} \left|G_{m, n,k}(x)-\Phi(x)\right| &\leq \left|\mathbb{P}\left( \frac{T_{m,n,k} }{C_{m,n,k} }\leq \frac{x}{C_{m,n,k}}\right)-\Phi\left(\frac{x}{C_{m,n,k}}\right)\right| + \left|\Phi\left(\frac{x}{C_{m,n,k}}\right)-\Phi\left(x\right)\right|\nonumber\\ &\leq C_1 \frac{1+ x^2 }{\sqrt{m} } \exp\bigg\{ - \frac{x^2 }{ 2 \big( 1+ \frac{C}{\sqrt{m}} |x| \big) } \bigg\} +\exp\bigg\{-\frac{x^{2}}{2 C_{m,n,k}^2 }\bigg\}\left|\frac{x}{C_{m,n,k}}-x\right|\nonumber\\ &\leq C_3 \frac{1+ x^2 }{\sqrt{m} } \exp\bigg\{ - \frac{x^2 }{ 2 \big( 1+ \frac{C}{\sqrt{m}} |x| \big) } \bigg\} . \nonumber \end{align*}

Therefore, we have for all $x \in \mathbb{R},$

(5.70)\begin{eqnarray} \mathbb{P}\left(T_{m,n,0}+H_{m,n,k} \leqslant x+\alpha_{m}, T_{m,n,0} \geqslant x\right) \leq J_{1}+J_{2} +J_3, \end{eqnarray}

where

\begin{equation*}J_{1}=\int\!\!\int \mathbf{1}_{\{t \leqslant \alpha_{m}\}}\Big|\Phi\left(x-s-t+\alpha_{m}\right)-\Phi(x-s)\Big| v_{k}(d s, d t),\end{equation*}
\begin{equation*} J_{2}= C \int\!\!\int \mathbf{1}_{\{t \leqslant \alpha_{m}\}} \frac{1+ |x-s|^2 }{\sqrt{m} } \exp\bigg\{ - \frac{(x-s)^2 }{2 \big( 1+ \frac{C}{\sqrt{m}} |x-s| \big) } \bigg\} v_{k}(d s, d t)\end{equation*}

and

\begin{equation*} J_{3}= C \int\!\!\int \mathbf{1}_{\{t \leqslant \alpha_{m}\}} \frac{1+ |x-s-t|^2 }{\sqrt{m} } \exp\bigg\{ - \frac{(x-s-t)^2 }{ 2 \big(1+ \frac{C}{\sqrt{m}} |x-s-t| \big)} \bigg\} v_{k}(d s, d t) . \end{equation*}

Denote $\tilde{C}_{m,n,k}^2= \textrm{Var}(\tilde{T}_{m,n, k} )$; then $\tilde{C}_{m,n,k}^2=O(1/\sqrt{m})$ as $m\rightarrow \infty$. By the mean value theorem, the upper bound of $J_1$ satisfies, for $1\leq |x| \leq \sqrt{\log m},$

\begin{eqnarray*} & & \mathbf{1}_{\{t \leqslant \alpha_{m}\}}\left|\Phi\left(x-s-t+\alpha_{m}\right)-\Phi(x-s)\right| \leq |\alpha_{m}-t |\, \Phi' \left ( \xi \right ) \\ && \ \ \ \ \ \leq |\alpha_{m}-t |\left \{\mathbf{1}_{\{|s| \geq \, 2|x| \tilde{C}_{m,n,k} \}}+\Phi' \left (\xi \right ) \left[\mathbf{1}_{\{|t| \geq \, C_0 |x| \tilde{C}_{m,n,k} \}}+\mathbf{1}_{\{|s| \lt 1+ \frac{1}{4 }|x| \}}\mathbf{1}_{\{|t| \leq \, C_0 |x| \tilde{C}_{m,n,k} \}} \right]\right \}\\ && \ \ \ \ \ \leq |\alpha_{m}-t|\left[\mathbf{1}_{\{|s| \geq \, 2|x| \tilde{C}_{m,n,k}\}}+\mathbf{1}_{\{|t| \geq \, C_0 |x| \tilde{C}_{m,n,k}\}}+C\exp\bigg\{-\frac{x^2 }{2\big( 1+ \frac{C}{\sqrt{m}} |x| \big)}\bigg\}\right]. \end{eqnarray*}

Therefore

(5.71)\begin{eqnarray} J_{1}&\leqslant& J_{11}+ J_{12}+ J_{13}, \end{eqnarray}

where

\begin{align*} J_{11}&= C\int\!\!\int |\alpha_{m}-t |\exp\bigg\{ - \frac{x^2 }{ 2 \big( 1+ \frac{C}{\sqrt{m}} |x| \big) } \bigg\} v_{k}(d s, d t), \nonumber \\ J_{12}&= \int\!\!\int |\alpha_{m}-t | \mathbf{1}_{\{|s| \geq \, 2 |x| \tilde{C}_{m,n,k} \}} v_{k}(d s, d t)\nonumber \end{align*}

and

\begin{eqnarray*} J_{13}= \int\!\!\int |\alpha_{m}-t | \mathbf{1}_{\{|t| \geq \, C_0 |x| \tilde{C}_{m,n,k} \}} v_{k}(d s, d t). \nonumber \end{eqnarray*}

By Lemma 4.3, we obtain

\begin{equation*} \mathbb{E}\left | H_{m,n,k} \right | \le \mathbb{E}\left | \frac{\log W_{1,k} }{nV_{m,n,\rho}} \right |+ \mathbb{E}\left | \frac{\log W_{2,k} }{mV_{m,n,\rho}} \right | \leq \frac{C}{\sqrt{m}}.\end{equation*}

For all $1\leq |x| \leq \sqrt{\log m},$

\begin{align*} J_{11}& \leq C_1 \bigg(\alpha_{m} + \mathbb{E}| H_{m,n,k}| \bigg)\exp\bigg\{ - \frac{x^2 }{ 2 \big( 1+ \frac{C}{\sqrt{m}} |x| \big) } \bigg\}\nonumber \\ &\leq\frac{C_2}{\sqrt{m} } \exp\bigg\{ - \frac{x^2 }{ 2 \big( 1+ \frac{C}{\sqrt{m}} |x| \big) } \bigg\}. \end{align*}

For $J_{12},$ the following bound holds for $1\leq |x| \leq \sqrt{\log m},$

\begin{align*} J_{12}& \leq \alpha_{m} \mathbb{P}\bigg( |\tilde{T}_{m,n, k} | \geq 2 |x| \tilde{C}_{m,n,k} \bigg) + \mathbb{E}| H_{m,n,k}|\textbf{1}_{\{|\tilde{T}_{m,n, k} | \geq 2 |x| \tilde{C}_{m,n,k} \}} . \nonumber \end{align*}

By Bernstein’s inequality, for all $x \in \mathbb{R},$

\begin{align*} \mathbb{P}\bigg( |\tilde{T}_{m,n, k} | \geq 2 |x| \tilde{C}_{m,n,k}\bigg) &=\mathbb{P}\left( \frac{\tilde{T}_{m,n, k} }{\tilde{C}_{m,n,k} } \geq 2 |x| \right)+\mathbb{P}\left( \frac{\tilde{T}_{m,n, k} }{\tilde{C}_{m,n,k} } \leq - 2 |x|\right)\nonumber\\ &\leq 2 \exp\bigg\{ - \frac{(2x)^2 }{ 2 \big( 1+ \frac{C}{\sqrt{m}} |x| \big) } \bigg\} \nonumber \end{align*}

and by the Cauchy–Schwarz inequality,

(5.72)\begin{align} \mathbb{E}| H_{m,n,k}|\textbf{1}_{\{|\tilde{T}_{m,n, k} | \geq 2 |x| \tilde{C}_{m,n,k} \}} & \leq \Big(\mathbb{E}| H_{m,n,k}|^2 \Big)^{1/2} \mathbb{P}\bigg( |\tilde{T}_{m,n, k} | \geq 2 |x| \tilde{C}_{m,n,k}\bigg) ^{1/2}\nonumber \\ & \leq \frac{C}{\sqrt{m} } \Bigg(2\exp\bigg\{ - \frac{(2x)^2 }{ 2 \big( 1+ \frac{C}{\sqrt{m}} |x| \big) } \bigg\} \Bigg)^{1/2} \nonumber \\ & \leq \frac{C_1}{\sqrt{m} } \exp\Bigg\{ - \frac{x^2 }{ 2 \big( 1+ \frac{C}{\sqrt{m}} |x| \big) } \Bigg\}. \end{align}

Hence, for all $1\leq|x| \leq \sqrt{\log m } ,$ we have

\begin{align*} J_{12} \leq \frac{C}{\sqrt{m}\ \ } \exp\bigg\{ - \frac{x^2 }{ 2 \big( 1+ \frac{C}{\sqrt{m}} |x| \big) } \bigg\} . \end{align*}

For $J_{13}$, the following inequality holds for $1 \leq |x| \leq \sqrt{\log m},$

\begin{align*} J_{13} \leq \alpha_{m} \mathbb{P}\bigg( |H_{m,n,k}| \geq C_0 |x| \tilde{C}_{m,n,k} \bigg) +\mathbb{E} \Big[| H_{m,n,k}| \textbf{1}_{\{|H_{m,n,k}| \geq C_0 |x| \widetilde{C}_{m,n,k} \} } \Big]. \nonumber \end{align*}

Note that $V_{m,n,\rho}\asymp \frac{1}{\sqrt{m} }$ and $\tilde{C}_{m,n,k} \asymp \frac{1}{m^{1/4} }$. It is evident that for all $1\leq |x| \leq \sqrt{\log m},$

\begin{align*} \mathbb{P}\bigg( |H_{m,n,k} | \geq C_0 |x| \tilde{C}_{m,n,k} \bigg) & \leq \mathbb{P}\bigg( \Big|\frac{\log W_{1, k}}{n\, V_{m,n,\rho} \ } \Big|\geq \frac12 C_0 |x| \tilde{C}_{m,n,k} \bigg) \ + \ \mathbb{P}\bigg( \Big| \frac{\log W_{2,k}}{m\, V_{m,n,\rho} \ } \Big| \geq \frac12 C_0 |x| \tilde{C}_{m,n,k} \bigg) \nonumber \\ &= : T_1 + T_2 \nonumber . \end{align*}

By Lemma 5.1 and Markov’s inequality, for $1\leq |x| \leq \sqrt{\log m},$

\begin{align*} T_1 &\leq \mathbb{P}\bigg( W_{1, k}\geq \exp\{\frac12 C_0 |x| n\, V_{m,n,\rho} \tilde{C}_{m,n,k}\} \bigg) + \mathbb{P}\bigg( W_{1, k}^{-1}\geq \exp\{\frac12 C_0 |x| n\, V_{m,n,\rho} \tilde{C}_{m,n,k}\} \bigg) \nonumber \\ &\leq \mathbb{E}[W_{1,k} ] \exp\bigg\{-\frac12 C_0 |x| n\, V_{m,n,\rho} \tilde{C}_{m,n,k} \bigg\} + \mathbb{E}[W_{1,k}^{-\alpha} ] \exp\bigg\{-\frac12 \alpha C_0 |x| n\, V_{m,n,\rho} \tilde{C}_{m,n,k} \bigg\} \nonumber \\ & \leq C \exp\bigg\{- \frac12 x^2 \bigg\} , \nonumber \end{align*}

for $C_0$ sufficiently large.

Similarly, we have $T_2\leq C \exp \{-\frac12 x^2 \}.$ Thus, for $1\leq |x| \leq \sqrt{\log m},$

\begin{eqnarray*} \mathbb{P}\bigg( |H_{m,n,k} | \geq C_0 |x| \tilde{C}_{m,n,k} \bigg) \leq C \exp\bigg\{- \frac12 x^2 \bigg\} . \nonumber \end{eqnarray*}

By Lemma 5.1 and the inequality $|\log x|^2 \leq C_\alpha (x+ x^{-\alpha})$ for all $\alpha, x \gt 0$, we observe that

\begin{equation*}\mathbb{E}| H_{m,n,k}|^2 \leq \frac{C_1}{m}\Big(\mathbb{E} W_{1, k} +\mathbb{E} W_{1, k}^{-\alpha} + \mathbb{E} W_{2, k}+\mathbb{E} W_{2, k}^{-\alpha} \Big) \leq \frac{C_2}{m}.\end{equation*}
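For completeness, we sketch a proof of the elementary inequality $|\log x|^2 \leq C_\alpha (x+ x^{-\alpha})$, splitting at $x=1$. Since the map $x\mapsto x^{-1/2}\log x$ attains its maximum $2/e$ at $x=e^{2}$,

\begin{align*} \log x \leq \frac{2}{e}\sqrt{x} \quad \Longrightarrow \quad |\log x|^2 \leq \frac{4}{e^2}\, x, \qquad x \geq 1, \end{align*}

and, writing $y=x^{-1}$ for $0 \lt x \lt 1$ and noting that $y\mapsto y^{-\alpha/2}\log y$ is maximized at $y=e^{2/\alpha}$,

\begin{align*} \log y \leq \frac{2}{e\alpha}\, y^{\alpha/2} \quad \Longrightarrow \quad |\log x|^2 =(\log y)^2 \leq \frac{4}{e^2 \alpha^2}\, x^{-\alpha}, \end{align*}

so that one may take $C_\alpha = \frac{4}{e^2}\max\{1, \alpha^{-2}\}$.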

By Markov’s inequality and the Cauchy–Schwarz inequality, for all $1\leq |x| \leq \sqrt{\log m},$ we have

(5.73)\begin{align} \mathbb{E}\Big[| H_{m,n,k}| \textbf{1}_{\{|H_{m,n,k}| \geq C_0 | x| \tilde{C}_{m,n,k} \} }\Big] & \leq \exp\bigg\{-\frac 1 4 C_0 |x| n\, V_{m,n,\rho} \tilde{C}_{m,n,k} \bigg\}\mathbb{E}[W_{1, k}^{1/2} | H_{m,n,k}|] \nonumber \\ &\quad +\exp\bigg\{- \frac 1 4 C_0 |x| m\, V_{m,n,\rho} \tilde{C}_{m,n,k} \bigg\}\mathbb{E}[W_{2, k}^{1/2} | H_{m,n,k}|] \nonumber \\ &\quad+ \ \exp\bigg\{ - \frac 1 4 \alpha C_0 |x| n\, V_{m,n,\rho}\tilde{C}_{m,n,k} \bigg\}\mathbb{E}[W_{1,k}^{-\alpha/2} | H_{m,n,k}| ] \nonumber \\ &\quad+ \ \exp\bigg\{ - \frac 1 4 \alpha C_0 |x| m\, V_{m,n,\rho}\tilde{C}_{m,n,k} \bigg\}\mathbb{E}[W_{2,k}^{-\alpha/2} | H_{m,n,k}| ] \nonumber \\ & \leq \exp\bigg\{- \frac 1 4 C_0 |x| n\, V_{m,n,\rho} \tilde{C}_{m,n,k} \bigg\} ( \mathbb{E}W_{1, k})^{1/2} ( \mathbb{E}| H_{m,n,k}|^2 )^{1/2} \nonumber \\ &\quad + \exp\bigg\{- \frac 1 4 C_0 |x| m \, V_{m,n,\rho} \tilde{C}_{m,n,k} \bigg\} ( \mathbb{E}W_{2, k})^{1/2} ( \mathbb{E}| H_{m,n,k}|^2 )^{1/2} \nonumber \\ & \quad + \exp\bigg\{- \frac 1 4 \alpha C_0 |x| n\, V_{m,n,\rho} \tilde{C}_{m,n,k} \bigg\} ( \mathbb{E}W_{1, k}^{-\alpha })^{1/2} ( \mathbb{E}| H_{m,n,k}|^2 )^{1/2} \nonumber \\ &\quad + \exp\bigg\{ - \frac 1 4 \alpha C_0 |x| m\, V_{m,n,\rho}\tilde{C}_{m,n,k} \bigg\} \ (\mathbb{E} W_{2,k}^{-\alpha } )^{1/2}( \mathbb{E}| H_{m,n,k}|^2 )^{1/2}\nonumber \\ & \leq \frac{C}{\sqrt{m} } \exp\bigg\{- \frac12 x^2 \bigg\} , \end{align}

for $C_0$ sufficiently large. Thus, we have for all $1\leq |x| \leq \sqrt{\log m},$

(5.74)\begin{align} J_{13} \leq \frac{C}{\sqrt{m} } \exp\Big\{- \frac12 x^2 \Big\} . \end{align}

From (5.71), for all $1 \leq |x| \leq \sqrt{\log m},$ we have

(5.75)\begin{eqnarray} J_{1} \leq \frac{C}{\sqrt{m} }\exp\bigg\{ - \frac{x^2 }{ 2 \big( 1+ \frac{C}{\sqrt{m}} |x| \big) } \bigg\} . \end{eqnarray}

For $J_2$, the following bound holds for all $1 \leq |x| \leq \sqrt{\log m}$:

(5.76)\begin{align} J_{2} &\leq\frac{C_3}{\sqrt{m} }\left(\int_{|s|\leq |x|\tilde{C}_{m,n,k} } (1+x^2 ) \exp\bigg\{ - \frac{x^2 }{2 \big( 1+ \frac{C_4}{\sqrt{m}} |x | \big) } \bigg\} v_{k}(d s)+\int_{|s| \gt |x| \tilde{C}_{m,n,k} } (1 + x^2 )v_{k}(d s)\right) \nonumber \\ &\leqslant\frac{C_3}{\sqrt{m} } (1+x^2) \left[ \exp\bigg\{ - \frac{x^2 }{2 \big( 1+ \frac{C_4}{\sqrt{m}} |x | \big) } \bigg\}+\mathbb{P}\bigg( \Big| \frac{\tilde{T}_{m,n,k} }{\tilde{C}_{m,n,k} } \Big| \gt |x| \bigg)\right]\nonumber \\ &\leqslant\frac{C }{\sqrt{m} } (1+x^2) \exp\bigg\{ - \frac{x^2 }{ 2 \big( 1+ \frac{C}{\sqrt{m}} |x| \big) } \bigg\}. \end{align}

By arguments similar to those in the previous cases, we have for $J_3$, for all $1 \leq |x| \leq \sqrt{\log m},$

(5.77)\begin{align} J_{3} &\leq\frac{C_2}{\sqrt{m} }\left(\int\!\!\int (1+x^2) \exp\bigg\{ - \frac{x^2 }{ 2 \big(1+ \frac{C}{\sqrt{m}} |x| \big)} \bigg\} v_{k}(d s, d t)+\int\!\!\int_{|s| \gt |x|\tilde{C}_{m,n,k}} v_{k}(d s, d t) \right. \nonumber \\ & \quad + \left. \int\!\!\int_{|t| \gt C_0 |x| \tilde{C}_{m,n,k}} v_{k}(d s, d t) \right) \nonumber \\ &\leq \frac{C_2}{\sqrt{m} }\left( (1+x^2) \exp\bigg\{ - \frac{x^2 }{ 2 \big(1+ \frac{C}{\sqrt{m}} |x| \big)} \bigg\}+\mathbb{P}\bigg( \Big| \frac{\tilde{T}_{m,n,k} }{\tilde{C}_{m,n,k} } \Big| \gt |x|\bigg)+ \mathbb{P}\bigg( |H_{m,n,k}| \gt C_0 |x| \tilde{C}_{m,n,k} \bigg) \right)\nonumber \\ &\leq \frac{C_4}{\sqrt{m} } (1+x^2) \exp\bigg\{ - \frac{x^2 }{ 2 \big( 1+ \frac{C}{\sqrt{m}} |x| \big) } \bigg\}. \end{align}

Substituting (5.75)–(5.77) into (5.70), we get for all $1\leq |x| \leq \sqrt{\log m},$

(5.78)\begin{align} \mathbb{P}\Big(T_{m,n,0}+H_{m,n,k}\leq x+ \alpha_{m}, T_{m,n,0}\geq x\Big) &\leq\frac{C_1}{\sqrt{m} } (1+x^2) \exp\bigg\{ - \frac{x^2 }{ 2 \big( 1+ \frac{C}{\sqrt{m}} |x| \big) } \bigg\} \nonumber \\ &\leq\frac{C}{\sqrt{m} } (1+x^2) \exp\bigg\{ - \frac{x^2 }{ 2 }\bigg\}. \end{align}

Following the method of (4.57), the second term on the RHS of (4.40) satisfies for $1 \leq |x| \leq \sqrt{\log m},$

(5.79)\begin{align} \mathbb{P}\left(\left|D_{m, n, k}\right| \gt \alpha_{m}\right) &\leq m \Bigg( \mathbb{E} \left|\frac{\log W_{1, n}}{n}-\frac{\log W_{1, \infty}}{n}\right| + \mathbb{E} \left|\frac{\log W_{2,m}}{m}-\frac{\log W_{2,\infty}}{m}\right| \nonumber\\ &\quad + \mathbb{E} \left|\frac{\log W_{1, k}}{n}-\frac{\log W_{1, \infty}}{n}\right| + \mathbb{E} \left|\frac{\log W_{2,k}}{m}-\frac{\log W_{2, \infty}}{m}\right| \Bigg) \nonumber\\ & \leq C \frac{x^2 }{\sqrt{m } \ } \exp\Big\{- \frac12 x^2 \Big\}. \end{align}

Combining (4.40), (5.78) and (5.79), we conclude that (5.66) holds for $1\leq |x| \leq \sqrt{\log m}.$ This completes the proof of Lemma 5.2. $\hfill\square$

Proof of Theorem 2.3

We present a proof of Theorem 2.3 for the case of $\frac{\mathbb{P} ( R_{m,n} \geq x )}{1-\Phi(x)},$ $ x \geq 0.$ The case $\frac{\mathbb{P} ( -R_{m,n} \geq x )}{\Phi(-x)}$ can be dealt with similarly due to the symmetry between m and n. We prove Lemmas 5.3 and 5.4, then combine them to establish Theorem 2.3. To avoid trivial cases, we assume that $m \wedge n \geq 2$.

The next lemma gives the upper bound in Theorem 2.3.

Lemma 5.3. Assume A3, A4, and A5 hold. Then, for all $0 \leq x \leq c \, \sqrt{m \wedge n} ,$ we have

(5.80)\begin{equation} \log \frac{\mathbb{P}\big( R_{m,n} \geq x \big)}{1-\Phi(x)} \leq C \frac{1+ x^3 }{ \sqrt{m \wedge n}\ } . \end{equation}

Proof. We will begin by examining the situation when $0 \leq x \leq \sqrt{\log(m \wedge n)}$. Notice that

(5.81)\begin{align} \mathbb{P}\Big(R_{m,n}\geq x\Big) &= \mathbb{P}\Big(R_{m,n} \geq x, \sum_{i=1}^{n+m} \eta_{m,n, i} \geq x\Big) + \mathbb{P}\Big( R_{m,n} \geq x, \sum_{i=1}^{n+m} \eta_{m,n, i} \lt x\Big) \nonumber\\ & \leq \mathbb{P}\Big( \sum_{i=1}^{n+m} \eta_{m,n, i} \geq x\Big) +\ \mathbb{P}\Big(R_{m,n} \geq x, \sum_{i=1}^{n+m} \eta_{m,n, i} \lt x\Big). \end{align}

For the first term of (5.81), we apply Cramér’s moderate deviations for independent random variables (see inequality (1) in [5]). We obtain for all $0 \leq x \leq c \sqrt{m \wedge n },$

\begin{align*} \mathbb{P}\Big( \sum_{i=1}^{n+m} \eta_{m,n, i} \geq x\Big) \leq \Big(1- \Phi (x) \Big)\Big(1+ C \frac{1+ x^3 }{ \sqrt{m \wedge n}\ }\Big). \nonumber \end{align*}

Applying Lemma 5.2 and inequality (4.37), we obtain the following bound for the second term when $0 \leq x \leq \sqrt{\log(m \wedge n)}$:

\begin{align*} \mathbb{P}\Big( R_{m,n} \geq x, \sum_{i=1}^{n+m} \eta_{m,n, i} \lt x\Big) \leq \Big(1- \Phi (x) \Big)\Big(1+ C \frac{1+ x^3 }{ \sqrt{m \wedge n}\ }\Big). \nonumber \end{align*}

Since $1+x \leq e^x,$ the above inequalities imply

\begin{eqnarray*} \mathbb{P}\Big(R_{m,n}\geq x\Big) \leq \Big(1- \Phi (x) \Big)\Big(1+ C \frac{1+ x^3 }{ \sqrt{m \wedge n}\ }\Big) \leq \Big(1- \Phi (x) \Big) \exp\left \{C \frac{1+ x^3 }{ \sqrt{m \wedge n}\ } \right \} . \nonumber \end{eqnarray*}

Thus, (5.80) holds for all $0 \leq x \leq \sqrt{\log(m \wedge n)} $.

Next, we consider the case $\sqrt{\log(m \wedge n)} \leq x \leq c\, \sqrt{m\wedge n}$. Clearly, it holds for all $x \in \mathbb{R},$

(5.82)\begin{eqnarray} \mathbb{P}\bigg( R_{m,n} \geq x \bigg) \leq I_1+I_2+I_3, \end{eqnarray}

where

\begin{align*} I_1 &= \mathbb{P}\Bigg( \sum_{i=1}^{n+m} \eta_{m,n, i} \geq x\bigg(1- \frac{(\frac1 n +\frac 1{m \alpha}) x }{\, V_{m,n,\rho}} \bigg) \Bigg ), \\ I_2&= \mathbb{P}\Bigg( \frac{\log W_{1, n}}{ n \, V_{m,n,\rho} } \geq \frac{ x^{2}}{n\, V_{m,n,\rho} } \Bigg ) \ \ \ \ \ \ \ \ \textrm{and} \ \ \ \ \ \ I_3\ =\ \mathbb{P}\Bigg( -\frac{\log W_{2,m}}{m\, V_{m,n,\rho} } \geq \frac{ x^{2}}{m \alpha \, V_{m,n,\rho} } \Bigg ) \end{align*}

where α is given by Lemma 5.1.

Now, let us provide estimates for $I_1$, $I_2$, and $I_3$. Condition A4 implies that $\sum_{i=1}^{n+m} \eta_{m,n,i}$ is a sum of independent random variables with finite moment generating functions. Using Cramér’s moderate deviations for independent random variables (cf. [5]), we can deduce the following for all $1\leq x \leq c \sqrt{m\wedge n},$

\begin{align*} I_1 &\leq \bigg(1- \Phi\Big(x(1- \frac{(\frac1 n +\frac 1{m \alpha})x }{\, V_{m,n,\rho}} ) \Big) \bigg )\exp\bigg\{\frac{ C}{\sqrt{m+n} }\Big(x(1- \frac{(\frac1 n +\frac 1{m \alpha})x }{\, V_{m,n,\rho}} ) \Big)^{3} \bigg\} \\ & \leq \bigg(1- \Phi\Big( x(1- \frac{(\frac1 n +\frac 1{m \alpha})x }{\, V_{m,n,\rho}} )\Big) \bigg )\exp\bigg\{C\frac{x^{3}}{\sqrt{m \wedge n} } \bigg\}. \end{align*}

By inequality (4.37), for all $x\geq 1$ and $\varepsilon_n \in (0, \frac{1}{2}]$, we have

(5.83)\begin{align} \frac{1-\Phi \left( x (1- \varepsilon_n) \right)}{1-\Phi \left( x\right) } &\leq 1+ \frac{\frac{1}{\sqrt{2\pi}} e^{-x^2(1- \varepsilon_n)^2/2} x \varepsilon_n }{\frac{1}{\sqrt{2 \pi} (1+x)} e^{-x^2/2} } \nonumber \\ &\leq \exp\Big\{2Cx^2 \varepsilon_n \Big\}. \end{align}
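Here, and again in (5.85) and (5.86) below, we rely on the standard Gaussian tail estimates: for all $x \geq 0,$

\begin{align*} \frac{1}{\sqrt{2\pi}\,(1+x)}\, e^{-x^2/2} \ \leq\ 1-\Phi(x) \ \leq\ \frac{1}{2}\, e^{-x^2/2}, \end{align*}

the lower bound corresponding to inequality (4.37).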

Since $V_{m,n,\rho}\asymp \frac{1}{\sqrt{m}}$, we have

\begin{equation*} \frac{(\frac1 n +\frac 1{m \alpha})x }{\, V_{m,n,\rho} } \asymp Cxm^{-\frac{1}{2} } . \end{equation*}

Hence, for all $1 \leq x \leq c\, \sqrt{m \wedge n} ,$

(5.84)\begin{align} I_1 &\leq \Big(1- \Phi(x ) \Big ) \exp\Big\{2Cx^2 \varepsilon_n \Big\} \exp\Big\{C \frac{x^{3} }{\sqrt{m \wedge n} }\Big\} \nonumber \\ &\leq \Big(1- \Phi(x ) \Big )\exp\Big\{C_{1} \frac{x^{3} }{\sqrt{m \wedge n} }\Big\} . \end{align}

By Markov’s inequality and (3.18), it is easy to see that for all $x \geq \sqrt{\log (m \wedge n)} ,$

(5.85)\begin{align} I_2 &\leq C\exp\big\{-x^2 \big \} \leq C \frac{1 }{\sqrt{m \wedge n} } \exp\left \{-\frac{1}{2}x^{2} \right \} \nonumber \\ &\leq C \frac{1+x }{\sqrt{m \wedge n} } \Big(1-\Phi(x)\Big) \end{align}

and

(5.86)\begin{align} I_3 &\leq \exp\big\{- x^2 \big \} \mathbb{E} W_{2,m}^{-\alpha} \leq C \exp\big\{-x^2 \big \} \nonumber \\ &\leq C \frac{1+x }{\sqrt{m \wedge n} } \Big(1-\Phi(x)\Big). \end{align}

Combining (5.84)–(5.86), we obtain for all $\sqrt{\log (m \wedge n) }\leq x \leq c\, \sqrt{m \wedge n} ,$

\begin{align*} \mathbb{P}\bigg( R_{m,n} \geq x \bigg ) \leq \Big(1- \Phi(x ) \Big )\exp\Big\{C_3 \frac{x^{3} }{\sqrt{m \wedge n} }\Big\}, \end{align*}

which implies the desired inequality for all $\sqrt{\log (m \wedge n) }\leq x \leq c\, \sqrt{m \wedge n} .$ $\hfill\square$

The following lemma establishes the lower bound in Theorem 2.3.

Lemma 5.4. Assume that conditions A3, A4, and A5 are satisfied. Then for all $0 \leq x \leq c \, \sqrt{m \wedge n} ,$

(5.87)\begin{equation} \log \frac{\mathbb{P}\big( R_{m,n} \geq x \big)}{1-\Phi(x)} \geq - C \frac{1+x^3 }{\sqrt{m \wedge n }}. \end{equation}

Proof. The lower bound can be established following a similar approach to the upper bound. For example, to establish (5.87) for all $\sqrt{\log(m \wedge n)} \leq x \leq c\sqrt{m \wedge n},$ we can observe that

\begin{eqnarray*} \mathbb{P}\bigg( R_{m,n} \geq x \bigg) & \geq&I_4-I_5-I_6, \nonumber \end{eqnarray*}

where

\begin{align*} I_4 &= \mathbb{P}\Bigg( \sum_{i=1}^{n+m} \eta_{m,n, i} \geq x\bigg(1+ \frac{(\frac1 {n\alpha} +\frac 1{m}) x }{\,V_{m,n,\rho}} \bigg) \Bigg ), \\ I_5&= \mathbb{P}\Bigg( - \frac{\log W_{1, n}}{ n \,V_{m,n,\rho} } \geq \frac{ x^{2}}{n\alpha \,V_{m,n,\rho} } \Bigg ) \ \ \ \ \ \ \ \ \textrm{and} \ \ \ I_6 = \mathbb{P}\Bigg( \frac{\log W_{2,m}}{m \,V_{m,n,\rho} } \geq \frac{ x^{2}}{m \,V_{m,n,\rho} } \Bigg ) \end{align*}

where α is given by Lemma 5.1. The remainder of the proof parallels the argument in Lemma 5.3.

Acknowledgements

The authors are grateful to anonymous referees and Professor Quansheng Liu for their very valuable comments and remarks, which significantly contributed to improving the quality of the paper.

Funding statement

This work was supported by the National Natural Science Foundation of China (Grant no. 12271062).

References

Bansaye, V. (2009). Cell contamination and branching processes in a random environment with immigration. Advances in Applied Probability 41(4): 1059–1081.
Bikelis, A. (1966). On estimates of the remainder term in the central limit theorem. Lithuanian Mathematical Journal 6(3): 323–346.
Chang, J.Y., Shao, Q.M., & Zhou, W.X. (2016). Cramér-type moderate deviations for Studentized two-sample U-statistics with applications. The Annals of Statistics 44(5): 1931–1956.
Chen, L.H.Y., & Shao, Q.M. (2001). A non-uniform Berry-Esseen bound via Stein’s method. Probability Theory and Related Fields 120(2): 236–254.
Fan, X.Q., Grama, I., & Liu, Q.S. (2013). Cramér large deviation expansions for martingales under Bernstein’s condition. Stochastic Processes and Their Applications 123(11): 3919–3942.
Fan, X.Q., Hu, H.J., Wu, H.Y., & Ye, Y.N. (2022). Comparison on the criticality parameters for two supercritical branching processes in random environments. arXiv:2205.09551.
Grama, I., Liu, Q.S., & Miqueu, E. (2017). Berry-Esseen’s bound and Cramér’s large deviation expansion for a supercritical branching process in a random environment. Stochastic Processes and Their Applications 127(4): 1255–1281.
Grama, I., Liu, Q.S., & Miqueu, E. (2023). Asymptotics of the distribution and harmonic moments for a supercritical branching process in a random environment. Annales de l’Institut Henri Poincaré Probabilités et Statistiques 59(4): 1934–1950.
Huang, C.M., & Liu, Q.S. (2012). Moments, moderate and large deviations for a branching process in a random environment. Stochastic Processes and Their Applications 122(2): 522–545.
Huang, C.M., Wang, C., & Wang, X.Q. (2022). Moments and large deviations for supercritical branching processes with immigration in random environments. Acta Mathematica Scientia 42(1): 49–72.
Huang, C.M., Wang, C., & Wang, X.Q. (2022). Moments and asymptotic properties for supercritical branching processes with immigration in random environments. Stochastic Models 39(1): 21–40.
Huang, X.L., Li, Y.Q., & Xiang, K.N. (2022). Berry-Esseen bound for a supercritical branching processes with immigration in a random environment. Statistics & Probability Letters 190: 109619.
Kesten, H., Kozlov, M.V., & Spitzer, F. (1975). A limit law for random walk in a random environment. Compositio Mathematica 30(2): 145–168.
Key, E.S. (1987). Limiting distributions and regeneration times for multitype branching processes with immigration in a random environment. The Annals of Probability 15(1): 344–353.
Li, Y.Q., Hu, Y.L., & Liu, Q.S. (2011). Weighted moments for a supercritical branching process in a varying or random environment. Science China Mathematics 54(7): 1437–1444.
Li, Y.Q., & Huang, X.L. (2022). A.S. convergence rate for a supercritical branching processes with immigration in a random environment. Communications in Statistics - Theory and Methods 51(3): 826–839.
Li, Y.Q., Huang, X.L., & Peng, Z.H. (2022). Central limit theorem and convergence rates for a supercritical branching process with immigration in a random environment. Acta Mathematica Scientia 42(3): 957–974.
Li, Y.Q., Liu, Q.S., Gao, Z.Q., & Wang, H.S. (2014). Asymptotic properties of supercritical branching processes in random environments. Frontiers of Mathematics in China 9(4): 737–751.
Li, Y.Q., Tang, X.P., & Wang, H.S. (2023). Exact convergence rate in central limit theorem for a supercritical branching process with immigration in a random environment. Communications in Statistics - Theory and Methods 53(23): 8412–8427.
Liu, Q. (1999). Asymptotic properties of supercritical age-dependent branching processes and homogeneous branching random walks. Stochastic Processes and Their Applications 82(1): 61–87.
Roitershtein, A. (2007). A note on multitype branching processes with immigration in a random environment. The Annals of Probability 35(4): 1573–1592.
Vatutin, V.A. (2011). Multitype branching processes with immigration in random environment, and polling systems. Siberian Advances in Mathematics 21(1): 42–72.
Wang, H.S., Gao, Z.Q., & Liu, Q.S. (2011). Central limit theorems for a supercritical branching process in a random environment. Statistics & Probability Letters 81(5): 539–547.
Wang, Y.J., Li, Y.Q., Liu, Q.S., & Liu, Z.M. (2019). Quenched weighted moments of a supercritical branching process in a random environment. Asian Journal of Mathematics 23(6): 969–984.
Wang, Y.J., Liu, Z.M., Li, Y.Q., & Liu, Q.S. (2017). On the concept of subcriticality and criticality and a ratio theorem for a branching process in a random environment. Statistics & Probability Letters 127: 97–103.
Wang, Y.Q., & Liu, Q.S. (2017). Limit theorems for a supercritical branching process with immigration in a random environment. Science China Mathematics 60(12): 2481–2502.
Wang, Y.Q., & Liu, Q.S. (2021). Berry-Esseen’s bound for a supercritical branching process with immigration in a random environment (in Chinese). Scientia Sinica Mathematica 51(5): 751–762.
Wang, Y.Q., & Liu, Q.S. (2022). Asymptotic properties of a supercritical branching process with immigration in a random environment. Stochastics and Quality Control 36(2): 145155.10.1515/eqc-2021-0030CrossRefGoogle Scholar
Wang, Y.Q., Liu, Q.S., & Fan, X.Q. (2022). Cramér’s large deviation expansion for a supercritical branching process with immigration in a random environment. Acta Mathematica Sinica, Chinese Series 65(5): 877890.Google Scholar
Figure 1. Central limit theorem.

Figure 2. Non-uniform Berry–Esseen bounds.

Figure 3. Cramér's moderate deviations.