1. Introduction
As a significant extension of the branching process in a random environment (see [Reference Grama, Liu and Miqueu7, Reference Grama, Liu and Miqueu8, Reference Li, Hu and Liu15, Reference Li, Liu, Gao and Wang18, Reference Wang, Li, Liu and Liu24, Reference Wang, Liu, Li and Liu25] and their references), the branching process with immigration in a random environment (BPIRE) has received extensive attention. Bansaye [Reference Bansaye1] investigated BPIRE by studying a model of cell contamination. Kesten et al. [Reference Kesten, Kozlov and Spitzer13] obtained the limiting distribution of random walks in random environments by using branching processes with one immigrant at each generation in an i.i.d. environment. Wang and Liu obtained the almost sure convergence, the $L^p$ convergence, the conditional moments, the quenched moments, the harmonic moments, the exponential decay rate, and the $L^p$ convergence rate under the annealed law for
$(W_n);$ and the nondegeneracy, the existence of the p-th moments and the harmonic moments for its limit
$W;$ central limit theorem (CLT), the large and moderate deviation principles, and the Berry–Esseen bound for
$\log Z_{n}$ [Reference Wang and Liu26–Reference Wang and Liu28]. Wang et al. provided Cramér’s large deviation expansion for
$\log Z_{n}$ [Reference Wang, Liu and Fan29]. Li and Huang [Reference Li and Huang16] investigated a polynomial convergence rate of the submartingale to its limit for BPIRE, and the almost sure convergence rate for a submartingale associated with a branching process in a varying environment. In [Reference Li, Huang and Peng17], Li et al. considered the convergence rate in probability or distribution, and two forms of the CLTs for
$(W_n).$ Huang et al. [Reference Huang, Li and Xiang12] considered the rate of convergence of the CLT under a moment condition of order
$2+\delta$, with fixed
$\delta \in(0,1]$. Huang et al. [Reference Huang, Wang and Wang10, Reference Huang, Wang and Wang11] studied the moments and the harmonic moments of $Z_n$, established the large deviation principle and large deviations for
$\log Z_{n},$ and described the decay rates of n-step transition probabilities. For the subcritical and critical cases (with multi-type), Key [Reference Key14] demonstrated the convergence to a limit distribution. Roitershtein [Reference Roitershtein21] investigated CLTs and strong laws of large numbers for the partial sums of this process. Additionally, Vatutin [Reference Vatutin22] applied a multi-type BPIRE to study polling systems with random service regimes.
Despite these contributions, no comparison result is available for the criticality parameters of two supercritical BPIREs, which hinders their practical application. This paper aims to fill that gap.
Let
$(\xi_1, \xi_2)^T =( (\xi_{1,n}, \xi_{2,n})^T)_{n\geq 0}$ be a sequence of i.i.d. two-dimensional random vectors, where T denotes transposition and
$ ( \xi_{1,n}, \xi_{2,n})^T \in \mathbb{R}^2$ stands for the random environment at generation n. Thus,
$( \xi_{1,n}, \xi_{2,n})^T$, $n\geq 0$, are independent random vectors, but note that for each given n,
$\xi_{1,n}$ and
$\xi_{2,n}$ may not be independent. For any
$n\in \mathbb N$ and
$i=1,2$, each realization of
$\xi_{i,n}$ corresponds to two probability distributions on
$\mathbb{N}=\{0,1,2,\cdots\}$: one is the offspring distribution denoted by

the other is the distribution of the number of immigrants denoted by

Let
$\{Z_{1,n}, n\geq 0\}$ and
$\{Z_{2,n}, n\geq 0\}$ be two branching processes with immigration in the random environments
$\xi_{1,n}$ and
$\xi_{2,n}$, respectively. Then,
$\{Z_{1,n}, n\geq 0\}$ and
$\{Z_{2,n}, n\geq 0\}$ can be described as follows: for
$n \geq 0,$
\begin{equation*} Z_{l,0}=1,\qquad Z_{l,n+1}=\sum_{i=1}^{Z_{l,n}} X_{l,n,i}+Y_{l,n},\qquad l=1,2, \end{equation*}
where
$X_{1,n,i}$ and
$X_{2,n,i}$ are the number of offspring of the i-th individual in generation n with environments
$\xi_{1,n}$ and
$\xi_{2,n}$, respectively.
$ Y_{1,n} $ and
$ Y_{2,n} $ are the number of new immigrants in the n-th generation with environments
$\xi_{1,n}$ and
$\xi_{2,n}$. Given
$(\xi_{1,n},\, \xi_{2,n})^T$, the random variables
$\{X_{1, n,i},X_{2, n,i},i\geq 1\}$ and
$\{Y_{1,n},Y_{2,n}\}$ are mutually independent.
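For concreteness, the generation-by-generation recursion described above can be sketched in a short simulation. The offspring law ($1+\mathrm{Bernoulli}(\xi)$) and immigration law ($\mathrm{Poisson}(2\xi+1)$) below are illustrative assumptions of this sketch, not the distributions used elsewhere in the paper:

```python
import math
import random

def simulate_bpire(n_generations, z0=1, seed=0):
    """Simulate one path Z_0, ..., Z_n of a BPIRE.

    Illustrative assumptions: the environment xi is i.i.d. U(0,1); each
    individual produces 1 + Bernoulli(xi) offspring; Poisson(2*xi + 1)
    immigrants join each generation.
    """
    rng = random.Random(seed)
    z = z0
    path = [z]
    for _ in range(n_generations):
        xi = rng.random()  # environment of this generation
        # total offspring: every individual has one child plus a Bernoulli extra
        offspring = z + sum(rng.random() < xi for _ in range(z))
        # immigration: Poisson(2*xi + 1) sampled by inversion of the CDF
        lam = 2.0 * xi + 1.0
        u, k, p = rng.random(), 0, math.exp(-lam)
        cdf = p
        while u > cdf and k < 60:
            k += 1
            p *= lam / k
            cdf += p
        z = offspring + k
        path.append(z)
    return path
```

Since each individual has at least one child and immigration is non-negative, every simulated path is non-decreasing; with conditional mean offspring $1+\xi \gt 1$, the sketched process is supercritical.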
Let
$(\Gamma, \mathbb P_{\xi})$ be the probability space under which the process is defined when the environment ξ is given. The total probability space can be formulated as the product space
$(\Gamma\times \Theta^{\mathbb N}, \mathbb P)$, with
$\mathbb P(dx, d\xi) =\mathbb P_{\xi}(dx)\tau(d\xi).$ Usually, the conditional probabilities
$\mathbb{P}_{\xi_1}$ and
$\mathbb{P}_{\xi_2}$ are called the quenched laws, while the total probability
$\mathbb{P}$ is called the annealed law. We further define two laws
$\mathbb P_{\xi_i,Y_i}$,
$i=1,2$, which denote the conditional probabilities of
$\mathbb P$ given
$(\xi_i, Y_i)$ where
$Y_i=(Y_{i,0},Y_{i,1},\ldots), i=1,2$. Additionally,
$\mathbb{P}_{\xi_1 ,\xi_2}$ denotes the conditional probability when the environment
$(\xi_1,\, \xi_2)^T$ is given, and τ is the joint law of the environment
$(\xi_1,\, \xi_2)^T$. Then,

is the joint law of the two branching processes in random environment. In the sequel, the expectation with respect to
$\mathbb P_{\xi_1,\xi_2}$
$(\text{resp. }\mathbb P_{\xi_i,Y_i}, \mathbb P_{\xi},\mathbb P)$ will be denoted by
$\mathbb E_{\xi_1,\xi_2}$
$(\text{resp. }\mathbb E_{\xi_i,Y_i}, \mathbb E_{\xi},\mathbb E)$.
We define, for any
$ n\ge 0 $ and
$ a\ge 0 $,


with the convention that
$ \Pi_{1,0} = \Pi_{2,0} = 1$. Moreover

Clearly,
$(m_{1,n}^{(p)})_{n\geq 0}$ and
$(m_{2,n}^{(p)})_{n\geq 0}$ are two sequences of i.i.d. random variables and we denote

where $\mu_1$ and $\mu_2$ are known as the criticality parameters for the BPIREs
$\{Z_{1,n}, n\geq 0\}$ and
$\{Z_{2,n}, n\geq 0\}$, respectively. In particular, if $\xi_1$ and $\xi_2$ are independent, then $\rho = 0$. To rule out degenerate environments $\xi_1$ and $\xi_2$, we assume that
$ 0 \lt \sigma_1, \sigma_2 \lt \infty.$ For
$l=1,2,$ to establish some limit theorems on
$ Z_{l,n} $ and the fundamental submartingale, we shall use the decomposition of
$ Z_{l,n}$, similar to the approach used in [Reference Wang and Liu26].
For simplicity, we will primarily concentrate on the case of
$ Z_{1,n}. $ To include the immigrants in the family tree, we introduce at each time n one particle, called the eternal particle and denoted by
$0_{0},0_{1},0_{2},\cdots$ with
$0_{n}:=0_{n-1}0$ (the juxtaposition of $0_{0}$ with n copies of 0). Let
$ E=\left \{0_{k} : k \gt 0 \right \} $ denote the set of all virtual particles so introduced. We consider the
$ Y_{1,n} $ immigrants entering generation n + 1 to be direct children of the virtual particle $0_{n}$ introduced in the n-th generation. To form a complete family tree, we also consider the virtual particle
$ 0_{n+1} $ introduced in the (n + 1)-th generation to be a direct child of the virtual particle $0_{n}$.
To ease notation, we use “∼” to denote quantities for the pedigree with initial particle ϕ, excluding the immigrating particles, and “
$ \wedge $” to denote quantities for the pedigree with initial particle $0_{0}$, including the immigrating particles. Thus,
$ \tilde{Z} _{1,n} $ denotes the branching process in the random environment without the immigrating particles, while
$ \hat {Z } _{1,n} $ denotes the branching process in the random environment including the immigrating particles; then

Set

it is obvious that

The sequence
$ \tilde {W } _{1,n}^{\left ( \phi \right ) } $ is the well-known martingale associated with the branching process
$ \tilde{Z}_{1,n} $ (without immigration) in a random environment, and its asymptotic properties have been extensively studied. Decomposing the branching process with immigration, started from an eternal particle
$0_{n} \in E$, into branching processes (without immigration) in a random environment, we have

For the case of a single supercritical BPIRE, denoted by
$\{Z_{1,n}, n\geq 0\}$, the normal approximation has been extensively studied. Under the additional conditions
$ \mathbb{E}\left ( \frac{Z_{1,0} }{m_{1,0}} \right )^{p} \lt \infty $ and
$ \mathbb{E}\left ( \frac{Y_{1,0} }{m_{1,0}} \right ) ^{p} \lt \infty $ for a constant
$ p \gt 1,$ and
$ \mathbb{E} X_{1,0}^{2+\delta} \lt \infty $ for a constant
$\delta \in (0, 1]$, Wang and Liu [Reference Wang and Liu27] derived the following Berry–Esseen bound for
$\log Z_{1,n}$:

where
$\Phi(x)$ is the standard normal distribution function.
Assuming Cramér’s condition
$ \mathbb{E}e^{\lambda _{0} X_{1,0} } \lt \infty $ for a constant
$ \lambda_{0} \gt 0,$ and
$\mathbb{E}\left (\frac{Z_{1,0}^{p}}{m_{1,0}}\right ) \lt \infty, \mathbb{E}\left (\frac{Y_{1,0}^{p}}{m_{1,0}}\right ) \lt \infty$ for a constant
$ p \gt 1,$ Wang et al. [Reference Wang, Liu and Fan29] also established the following Cramér’s large deviation expansion: for
$0 \le x=o\left ( n \right ),\ n\to \infty $,
$0 \le x=o\left ( n \right ) ,n\to \infty $,

where C is a positive constant. For instance, when the parameter $\sigma_1$ is known, these results can be applied to construct confidence intervals for estimating the criticality parameter $\mu_1$. This estimation is formulated by considering both the observation
$Z_{1,n}$ and the generation n, providing a more precise understanding of the process.
Although the limit theorems for a single supercritical BPIRE have been extensively studied, there currently exists no comparative result concerning the criticality parameters for two supercritical BPIREs. The objective of the paper is to fill this gap. We begin by considering the following common hypothesis testing:

When $\mu_1$ and $\mu_2$ represent the means of two independent populations, this form of hypothesis testing was considered by Chang et al. [Reference Chang, Shao and Zhou3], who established Cramér-type moderate deviations. In this paper, we are interested in the case where $\mu_1$ and $\mu_2$ are two criticality parameters of BPIREs. By the law of large numbers,
$\frac{1}{n} \log Z_{1,n} \to \mu_1$ and
$\frac{1}{m} \log Z_{2,m} \to \mu_2$ in probability as
$m,n\to\infty$, respectively. Therefore, to test the hypothesis, it is essential to determine the asymptotic distribution of the random variable
$\frac{1}{n} \log Z_{1,n} - \frac{1}{m} \log Z_{2,m} $; this estimation is central to the main purpose of this paper. Observe that the expression
$\frac{1}{n} \log Z_{1,n} - \frac{1}{m} \log Z_{2,m} $ has an asymptotic distribution equivalent to
$\frac{1}{n}\sum_{k=1 }^{n} X_{1,k} - \frac{1}{m}\sum_{k=1 }^{m} X_{2,k}$. When ξ 1 and ξ 2 are independent, both
$\sum_{k=1 }^{n} X_{1,k} $ and
$ \sum_{k=1 }^{m} X_{2,k} $ are sums of i.i.d. random variables.
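When the environments are independent, the reduction just described is a CLT for the difference of two normalized i.i.d. sums, which can be checked by Monte Carlo. The increment laws below, $X_{1,k}=\log(1+U)$ with $U\sim U(0,1)$ and $X_{2,k}=\log(1+V)$ with $V\sim U(0,0.5)$, are illustrative assumptions of this sketch:

```python
import math
import random

# Exact means and variances of log(1+U), U ~ U(0,1), and log(1+V), V ~ U(0,0.5)
MU1 = 2 * math.log(2) - 1
S1 = math.sqrt(2 * math.log(2) ** 2 - 4 * math.log(2) + 2 - MU1 ** 2)
MU2 = 3 * math.log(1.5) - 1
S2 = math.sqrt(3 * (math.log(1.5) ** 2 - 2 * math.log(1.5) + 2) - 4 - MU2 ** 2)

def standardized_differences(n, m, trials, seed=1):
    """Draws of ((1/n)*S_1 - (1/m)*S_2 - (MU1 - MU2)) / sqrt(S1^2/n + S2^2/m),
    where S_1, S_2 are sums of n and m i.i.d. increments, respectively."""
    rng = random.Random(seed)
    scale = math.sqrt(S1 ** 2 / n + S2 ** 2 / m)
    out = []
    for _ in range(trials):
        s1 = sum(math.log(1 + rng.random()) for _ in range(n))
        s2 = sum(math.log(1 + 0.5 * rng.random()) for _ in range(m))
        out.append((s1 / n - s2 / m - (MU1 - MU2)) / scale)
    return out
```

By the classical CLT, the empirical distribution of these draws approaches the standard normal as $m, n \to \infty$.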
In this paper, we always assume
$ l=1,2, $

which means that the process is supercritical. We assume that the following conditions hold:

where we write
$ \log ^+ x = \max\{\log x , 0 \}. $ From Grama et al. [Reference Grama, Liu and Miqueu7], it can be inferred that under the conditions (1.4) and (1.5),
$ {W} _{n} $ converges almost surely to a non-negative random variable
$W.$ Additionally, we assume the following condition:

which ensures that the random walk has positive increments and states that each individual has at least one offspring. Assumptions (1.5) and (1.6) imply that the processes
$(Z_{1,n}, n\geq 0)$ and
$(Z_{2,m}, m\geq 0)$ are both supercritical and satisfy
$\mu_1, \mu_2 \gt 0$ and
$Z_{1, n} \to \infty, Z_{2, m} \to \infty $.
Define

Throughout the paper, we assume either

The final condition guarantees that

is of order
$\frac{1}{m \wedge n} $ as
$m,n \to \infty. $ Clearly, if
$ m \le n ,$

We now introduce our main results. First, Theorem 2.1 presents the CLT for
$R_{m,n}:$ for all
$x \in \mathbb{R},$ it holds

Second, under some moment conditions, Theorem 2.2 gives a non-uniform Berry–Esseen bound for
$R_{m,n}$: for any
$\delta' \in (0, \delta)$ and all
$x \in \mathbb{R},$

According to Lemma 4.3 and (3.16), under the given conditions,
$R_{m,n}$ has a finite moment only of order
$1+\delta'$. This explains why the non-uniform Berry–Esseen bound exhibits an order of
$\displaystyle |x|^{-1-\delta'}$ as
$x \rightarrow \infty$, instead of an order of
$\displaystyle |x|^{-2-\delta}$. In particular, we have
$\frac{1}{m} \log Z_{2,m} \rightarrow \mu_2$ in probability when
$m \rightarrow \infty,$ which leads to
$R_{m,n} \rightarrow \frac{\log Z_{1,n} - n \mu_1 \ }{ \sigma_1 \sqrt{n}}$ in probability. Thus, inequality (1.8) implies that

which improves the Berry–Esseen bound (1.2) by adding a factor
$\frac{ 1 }{ 1+|x|^{1+\delta'} } .$
Third, we establish Cramér’s moderate deviations. Assuming conditions A3, A4, and A5 are satisfied, Theorem 2.3 demonstrates that for all
$0 \leq x \leq c^{-1} \sqrt{m \wedge n } $,

When
$m\rightarrow \infty,$ it is easy to see that (1.9) holds with
$R_{m,n}$ replaced by
$\frac{\log Z_{1,n} - n \mu_1 \ }{ \sigma_1 \sqrt{n}} $. Therefore, our results recover Cramér’s moderate deviations (1.3) as established by Wang et al. To conclude, we construct confidence intervals for
$\mu_1 - \mu_2$ as an application of our findings.
We now briefly explain the organization of this paper. In Section 2, we present our main results. Applications and simulations are demonstrated in Section 3, and the proofs of the results in Section 2 are given in the subsequent sections.
Additionally, the symbols c and C are used to represent a small positive constant and a large positive constant, respectively. Their values may vary from line to line. For two sequences of positive numbers
$(a_{n} )_{n\ge 1}$ and
$(b_{n} )_{n\ge 1}$, we write
$a_n \asymp b_n$ if there exists a positive constant C such that for all n, it holds
$C^{-1}b_n \leq a_n \leq C b_n$.
2. Main results
To state our results, we impose the following conditions:
- A1.
There exists a constant
$\delta \in (0, 1]$ such that
\begin{equation*} \mathbb{E} [ X_{1,0}^{2+\delta}+ X_{2,0}^{2+\delta} \,] \lt \infty. \end{equation*}
- A2.
There exists a constant p > 1 such that
\begin{equation*}\mathbb{E}\bigg[ \frac{Z_{1,1} ^{p}}{m_{1,0}^p} + \frac{Z_{2,1} ^{p}}{m_{2,0}^p} \bigg] \lt \infty .\end{equation*}
- A3.
There exists a constant p > 1 such that
\begin{equation*} \mathbb{E}\left ( \frac{Y_{1,0}^{p} }{m_{1,0}^{p} } + \frac{Y_{2,0} ^{p}}{m_{2,0}^{p} }\right ) \lt \infty. \end{equation*}
Theorem 2.1 For all
$x \in \mathbb{R},$ we have

The following theorem gives a non-uniform Berry–Esseen bound for
$R_{m,n}$.
Theorem 2.2 Assume that conditions A1, A2, and A3 hold. Let
$\delta'$ be a constant such that
$\delta' \in (0, \delta).$ Then for all
$x \in \mathbb{R},$

Under conditions A1, A2, and A3, it can be shown in the course of the proof of the theorem that
$R_{m,n}$ has a finite moment of order
$1+\delta'$. This explains why the non-uniform Berry–Esseen bound (2.2) decays at the rate
$|x|^{-1-\delta'}$ rather than
$|x|^{-2-\delta}$ as
$x \to \infty$. According to Theorem 2.2, we can establish the following Berry–Esseen bounds for
$R_{m,n}$.
Corollary 2.3. Assume that conditions A1, A2, and A3 hold. Then

Note that
$\frac{1}{m} \log Z_{2,m}$ converges in probability to $\mu_2$; thus,

in probability. Therefore, when
$m\rightarrow \infty,$ Corollary 2.3 yields the Berry–Esseen bound established by Wang and Liu [Reference Wang and Liu27], that is,

It is known that the convergence rate of the last Berry–Esseen bound matches the best achievable rate for i.i.d. random variables with finite moments of order
$2+\delta$.
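This optimality claim can be illustrated numerically: for i.i.d. Bernoulli summands (which have finite moments of every order, so $\delta = 1$), the exact Kolmogorov distance to the normal limit decays like $n^{-1/2}$. The helper below, with an assumed success probability $p = 0.3$, computes that distance exactly:

```python
import math

def kolmogorov_distance_bernoulli(n, p=0.3):
    """Exact sup_x |P(S_n <= x) - Phi((x - np)/sd)| for S_n ~ Binomial(n, p)."""
    mu, sd = n * p, math.sqrt(n * p * (1 - p))
    # binomial pmf via the standard ratio recurrence, then running CDF
    pmf = [(1 - p) ** n]
    for k in range(1, n + 1):
        pmf.append(pmf[-1] * (n - k + 1) / k * p / (1 - p))
    dist, cdf = 0.0, 0.0
    for k in range(n + 1):
        phi = 0.5 * (1 + math.erf((k - mu) / (sd * math.sqrt(2))))
        dist = max(dist, abs(cdf - phi))  # just below the atom at k
        cdf += pmf[k]
        dist = max(dist, abs(cdf - phi))  # at the atom k
        return_value = dist
    return return_value
```

Doubling $\sqrt{n}$ (e.g. comparing $n = 100$ with $n = 400$) should roughly halve the distance, consistent with the $n^{-1/2}$ rate.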
Next, we will establish Cramér’s moderate deviations for
$R_{m,n}$. To achieve this, we require the following conditions.
- A4.
The random variables
$X_{1,0} $ and
$X_{2,0} $ have exponential moments, i.e. there exists a constant
$\lambda_0 \gt 0 $
such that
\begin{equation*} \mathbb{E} \big[ e^{\lambda_0 X_{1,0} } + e^{\lambda_0 X_{2,0} } \big] \lt \infty. \end{equation*}
- A5.
There exists a constant p > 1 such that
\begin{equation*} \mathbb{E} \bigg[ \frac{Z_{1,1} ^{p}}{m_{1,0}} + \frac{Z_{2,1} ^{p}}{m_{2,0}} \bigg] \lt \infty. \end{equation*}
We have the following Cramér’s moderate deviations for
$R_{m,n}$.
Theorem 2.3 Assume that conditions A3, A4, and A5 hold. Then for all
$0 \leq x \leq c\, \sqrt{m \wedge n} ,$

Since
$-R_{m,n} = R_{n,m},$ by the symmetry between m and n, Theorems 2.1–2.3 hold true when
$R_{m,n}$ is replaced by
$-R_{m,n}$.
By an argument similar to the proof of theorem 7.3 in [Reference Wang and Liu26], Theorem 2.3 implies the following moderate deviation principle (MDP) result for
$R_{m,n}$.
Corollary 2.4. Assume that conditions A3, A4, and A5 hold. Let $(a_n)_{n\ge 1}$ be a sequence of positive numbers satisfying

Then, for any measurable subset B of
$\mathbb{R} $,

where $B^{o}$ and
$\overline{B}$ denote the interior and the closure of B, respectively.
3. Applications and simulations
3.1. Applications to construction of confidence intervals
In this section, we focus on the construction of confidence intervals for
$\mu_1 - \mu_2$. When the parameters
$\sigma_1, \sigma_2, $ and ρ are known, we can use Theorems 2.2 and 2.3 to establish confidence intervals for
$\mu_1 - \mu_2$.
Proposition 3.1. Let
$ \kappa_{m,n} \in (0,1) $, and consider the following two groups of conditions:
- H1.
-
The conditions of Theorem 2.2 hold and
(3.6)\begin{eqnarray} \left|\log \kappa_{m,n}\right|=o\big(\log (m \wedge n) \big), \ \ \ \textrm{as}\ m \wedge n\rightarrow \infty . \end{eqnarray}
- H2.
-
The conditions of Theorem 2.3 hold and
(3.7)\begin{eqnarray} \left|\log \kappa_{m,n}\right|=o\big((m \wedge n) ^{1/3} \big), \ \ \ \textrm{as}\ m \wedge n\rightarrow\infty. \end{eqnarray}
Assume H1 or H2 holds. Then
$\left[A_{m,n},\, B_{m,n}\right] $, with

is a
$1-\kappa_{m,n}$ confidence interval for
$\mu_1-\mu_2$, when
$m \wedge n$ is sufficiently large.
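As a numerical illustration of Proposition 3.1, the following sketch computes a confidence interval in the independent-environment case $\rho = 0$, so that a natural normalizer is $\sqrt{\sigma_1^2/n+\sigma_2^2/m}$; this form of the endpoints and the bisection-based normal quantile helper are assumptions of the sketch, not the paper's exact formulas:

```python
import math

def upper_quantile(p):
    """Return x with P(N(0,1) > x) = p, by bisection on erfc."""
    lo, hi = -40.0, 40.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        # upper-tail probability 1 - Phi(mid), numerically stable via erfc
        if 0.5 * math.erfc(mid / math.sqrt(2)) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def confidence_interval(z1n, n, z2m, m, sigma1, sigma2, kappa):
    """Two-sided (1 - kappa) interval for mu1 - mu2 (rho = 0 assumed)."""
    point = math.log(z1n) / n - math.log(z2m) / m
    half = upper_quantile(kappa / 2) * math.sqrt(sigma1 ** 2 / n + sigma2 ** 2 / m)
    return point - half, point + half
```

For example, with observations $Z_{1,50}=10^9$ and $Z_{2,50}=10^6$, the interval is centered at $(\log 10^9)/50-(\log 10^6)/50 \approx 0.138$.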
Proof. Assume H1 holds. Theorem 2.2 implies that, as
$m\wedge n\rightarrow \infty$,

uniformly for
$ 0\leq x=o\left(\sqrt{\log (m\wedge n) }\right).$ For
$ p\searrow 0 $, the quantile function of the standard normal distribution has the following asymptotic expansion:

Specifically, when
$\kappa_{m,n}$ satisfies (3.6), the upper
$\left(1-\frac{\kappa_{m,n}}{2} \right)$-th quantile of the standard normal distribution satisfies

hence

which, by (3.6), is of order
$ o\left(\sqrt{\log(m\wedge n) }\right).$ Applying the last equality to (3.8), we obtain

and

as
$ m\wedge n\rightarrow \infty $. Note that
$R_{m, n}\leq\Phi^{-1}(1-(\kappa_{m,n}/2))$ means
$\mu_1-\mu_2\geq A_{m,n}$, while
$R_{m, n}\geq-\Phi^{-1}(1-(\kappa_{m, n}/2))$ means
$\mu_1-\mu_2\leq B_{m,n}$. Thus, as
$ m \wedge n\rightarrow \infty $,

Next, assume H2 holds. By Theorem 2.3, as
$m\wedge n\rightarrow \infty$, we have

uniformly for
$ 0\leq x=o( (m\wedge n)^{1/6}).$ When
$\kappa_{m,n}$ satisfies (3.7), the upper
$\left(1-\frac{\kappa_{m,n}}{2} \right)$-th quantile of the standard normal distribution satisfies

which is of order
$o\left((m\wedge n)^{1/6}\right)$. By (3.12), we have

as
$m\wedge n\rightarrow \infty$. This completes the proof of Proposition 3.1.
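The quantile asymptotics used in the proof, $\Phi^{-1}(1-p)=\sqrt{2\log(1/p)}\,(1+o(1))$ as $p\searrow 0$, can be verified numerically. The bisection-based quantile below is an illustrative helper, not part of the paper:

```python
import math

def upper_quantile(p):
    """x such that 1 - Phi(x) = p; bisection on erfc is stable for tiny p."""
    lo, hi = 0.0, 60.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(mid / math.sqrt(2)) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# ratio of the true quantile to its first-order asymptotic sqrt(2*log(1/p))
ratios = [upper_quantile(p) / math.sqrt(2 * math.log(1 / p))
          for p in (1e-4, 1e-8, 1e-16, 1e-64)]
```

The ratios increase toward 1 as p decreases, consistent with the expansion; convergence is slow, reflecting the logarithmic second-order term.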
When
$\{Z_{2,n}, n\geq 0\}$ is an independent copy of
$\{Z_{1,n}, n\geq 0\}$, we can apply Theorems 2.2 and 2.3 to construct confidence intervals for
$\sigma_1. $
Proposition 3.2. Assume H1 or H2 holds, and let
$ \kappa_{n,n} \in (0,1) $. Then
$ [A_{n},\, B_n] ,$ with

is a
$ 1-\kappa_{n,n} $ confidence interval for $\sigma_1^2$ for sufficiently large n, where
$\chi_{q }^2(1)$ denotes the q-quantile of the chi-squared distribution with one degree of freedom.
Proof. Assume H1 holds. By Theorem 2.2, as
$ n\rightarrow \infty$, we have

uniformly for
$ 0\leq x=o (\sqrt{\log n } ).$ Then, applying the last equality to (3.13), we have, as
$ n\rightarrow \infty $,

which implies
$\sigma_1^2 \in[A_{n},B_n]$ with probability
$1-\kappa_{n,n}$ for n large enough.
If H2 holds, analogous arguments apply. This completes the proof of Proposition 3.2.
3.2. Numerical simulation
We now present numerical simulations validating Theorems 2.1–2.3. Let
$(X_{1,n,i})_{n\geq0,i\geq1}$ and
$(X_{2,n,i})_{n\geq0,i\geq1}$ follow the distributions:


Similarly,
$(Y_{1,n})_{n\geq0}$ and
$(Y_{2,n})_{n\geq0}$ follow Poisson distributions:

where
$\lambda(\xi_{1,n})=2{{\xi}_{1,n}}+1$ and
$\lambda(\xi_{2,n})=3{{\xi}_{2,n}}+0.5$.
$\xi_{1,n}$ and $\xi_{2,n}$ follow the uniform distributions
$U(0,1)$ and
$U(0,0.5)$, respectively. The computed parameters are
$\mu_1 = 0.3863, \sigma_1^2 = 0.0391$, and
$\mu_2 = 0.2781, \sigma_2^2 = 0.081$. In the theoretical proofs, we assume initial population sizes
$Z_{1,0}=Z_{2,0}=1$ for simplicity. However, any finite values of
$Z_{1,0}, Z_{2,0}$ would not affect the theoretical conclusions. To obtain better simulation performance, we set
$Z_{1,0}=Z_{2,0}=5$ and conducted numerical experiments with environmental correlation coefficients
$\rho = 0$, $0.5$, and $-0.5$. For the numerical verification of Theorems 2.2 and 2.3, we performed 3000 simulation trials with
$m\wedge n= 50$ generations of offspring reproduction.
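The experiment just described can be reproduced in outline. Since the offspring distributions are specified above only through their environment parameters, the sketch below assumes offspring $1+\mathrm{Bernoulli}(\xi)$ (conditional mean $1+\xi$) and, for simplicity, compares two independent copies of the first process, so that the centering $\mu_1-\mu_2$ vanishes; these choices are assumptions of the sketch, not the paper's exact setup:

```python
import math
import random

# Var(log(1 + U)), U ~ U(0,1): about 0.0391, matching sigma_1^2 above
SIGMA2 = 2 * math.log(2) ** 2 - 4 * math.log(2) + 2 - (2 * math.log(2) - 1) ** 2

def log_population(n, z0, rng):
    """log Z_n for a BPIRE with offspring 1 + Bernoulli(xi), xi ~ U(0,1),
    and Poisson(2*xi + 1) immigration; a normal approximation replaces the
    exact binomial once the population is large."""
    z = z0
    for _ in range(n):
        xi = rng.random()
        if z < 1000:
            extra = sum(rng.random() < xi for _ in range(z))
        else:  # normal approximation to Binomial(z, xi)
            extra = max(0, round(z * xi + rng.gauss(0.0, math.sqrt(z * xi * (1 - xi)))))
        lam = 2.0 * xi + 1.0
        u, k, p = rng.random(), 0, math.exp(-lam)
        cdf = p
        while u > cdf and k < 60:
            k += 1
            p *= lam / k
            cdf += p
        z = z + extra + k
    return math.log(z)

def r_statistic(n, trials, z0=5, seed=0):
    """Draws of R_{n,n} for two independent copies of the same BPIRE."""
    rng = random.Random(seed)
    scale = math.sqrt(2 * SIGMA2 / n)
    return [(log_population(n, z0, rng) - log_population(n, z0, rng)) / (n * scale)
            for _ in range(trials)]
```

The empirical distribution of these draws should track the standard normal curve, as in Figure 1.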
Figure 1 demonstrates the convergence of the empirical distribution of
$R_{m,n}$ to the standard normal distribution. As
$m\wedge n\rightarrow \infty$, the empirical cumulative distribution function approaches the theoretical normal curve across all tested ρ values, validating the CLT (Theorem 2.1).
Figure 2 illustrates the non-uniform Berry–Esseen bound (Theorem 2.2). The upper and lower bounds are demarcated by dashed lines. Specifically, the two dashed lines above and below correspond to
$\Phi(x)-\frac{ C }{ (m\wedge n)^{\delta/2} }\frac{ 1 }{ 1+|x|^{1+\delta'}}$ and
$\Phi(x)+\frac{ C }{(m\wedge n)^{\delta/2} }\frac{ 1 }{ 1+|x|^{1+\delta'} }$, respectively. The central solid curve represents the standard normal distribution function, while the discrete points within the solid region denote the simulation results.
Figure 3 verifies Cramér’s moderate deviations (Theorem 2.3). The upper and lower dashed lines represent the boundaries
$C \frac{1+ x^3 }{ \sqrt{m \wedge n}\ }$ and
$-C \frac{1+ x^3 }{ \sqrt{m \wedge n}\ }$, the middle blue solid line is the simulation result, and the red dashed line serves as the theoretical baseline.
Figures 1–3 confirm the validity of Theorems 2.1–2.3, as the simulations align closely with theoretical predictions.

Figure 1. Central limit theorem.

Figure 2. Non-uniform Berry–Esseen bounds.

Figure 3. Cramér’s moderate deviations.
3.3. Proof of Theorem 2.1
The following random walk related to the branching process will be used in our research. For
$l= 1, 2,$

the random variables
$\{\log m_{l,i-1}\}_{i\geq1}$ are independent and identically distributed, depending only on the environment ξ. Clearly

where
$(W_{l, n})_{n\geq0}$ is a non-negative submartingale under the annealed law
$\mathbb{P}$, with respect to the natural filtration

Without loss of generality, we assume that
$m \leq n.$ For the sake of simplicity in notation, in the sequel, denote

We can write
$R_{m,n}$ in the following form:

Let

Then
$(N_i)_{1 \leq i \leq n}$ is a finite sequence of centered and independent random variables and satisfies

Furthermore, it is easy to see

as
$m \rightarrow \infty.$
Below we will use the relationship between
$ W_{l,n} $ and
$ \tilde{W}_{l,n}$. The following lemma plays a crucial role, as it demonstrates that
$ W_{l,n} $ converges almost surely.
Lemma 3.3. (cf. [Reference Wang and Liu26, Theorem 3.2 and Lemma 4.1])
Assume that condition (1.4) is satisfied,
$ l=1,2. $ Since the submartingale
$ W_{l,n} $ is bounded in $L^1$ under
$ \mathbb{P} _{\xi,Y} $, the martingale convergence theorem gives

the limit takes values in
$ \left [ 0,\infty \right ) $ and satisfies the following decomposition formula:

Proof of Theorem 2.1
Without loss of generality, we assume that
$m \leq n.$ Recall that
$ 0 \lt \sigma_1, \sigma_2 \lt \infty, $ and equation (3.17). It is worth noting that

We begin with the decomposition formula (3.16). On the one hand, by the CLT for independent random variables, we have
$ \sum_{i=1}^{n+m} \eta_{m,n, i}$ converges in distribution to the standard normal distribution as
$m \rightarrow \infty$. On the other hand, recalling (3.18),
$W_{l,n}$ converges to
$W_{l, \infty}$ as
$n \rightarrow \infty $, and it is known that
$ \mathbb{E}W_{l,n} \lt \infty.$ Moreover, since
$ p_{0}=0$ almost surely, condition (1.5) and the decomposition formula (3.19) yield
$ W_{l,n} \ge \tilde{W}_{l,n} \gt 0$ a.s. Therefore, we obtain

as
$m \rightarrow \infty$.
Combining the above results, we see that
$R_{m,n}$ converges in distribution to the standard normal distribution. This completes the proof of Theorem 2.1.
4. Proof of Theorem 2.2
In the proof of Theorem 2.2, we require the following non-uniform Berry–Esseen bound derived by Bikelis [Reference Bikelis2]. For more general results, see Chen and Shao [Reference Chen and Shao4].
Lemma 4.1. Let
$(X_{i})_{1\leq i \leq n}$ be independent random variables satisfying
$\mathbb{E}X_{i}=0$ and
$\mathbb{E}\left|X_{i}\right|^{2+\delta} \lt \infty$ for some positive constant
$\delta \in(0,1]$ and all
$1\leq i \leq n$. Assume that
$\sum_{i=1}^{n}\mathbb{E}X_{i}^{2}=1$. Then, for all
$x \in \mathbb{R},$

Next, we explore the (conditional) Laplace transforms of
$W_{1, \infty}$ and
$W_{2, \infty}$: for all
$t\geq 0$,


Since
$ W _{i,\infty } \gt \tilde{W} _{i,\infty } \gt 0 $, it follows that
$\phi_i (t) \le \tilde{\phi}_{i}(t)$. We have the following bounds for
$\tilde{\phi}_{i}(t), i=1, 2$, as
$t \rightarrow \infty.$
Lemma 4.2. Assume that conditions A1 and A2 are satisfied. Then for
$i=1,2,$ it holds

Here, we use results from Fan et al. [Reference Fan, Hu, Wu and Ye6]. In earlier work, Grama et al. [Reference Grama, Liu and Miqueu7] (see theorem 3.1) established an upper bound for
$\tilde\phi(t)$, which states that
$\tilde\phi(t)\le Ct^{-\alpha}$ for t > 0, where α is a positive constant. This upper bound is sharper than the one in our Lemma 4.2. However, theorem 3.1 in Grama et al.’s work requires condition A3, whereas our condition A1 is weaker. Therefore, we cannot directly apply the conclusions from Grama et al.’s work.
Next, we obtain the following results regarding the Lp moments of
$\log W_{i, n}$ and
$\log W_{i, \infty}$. Wang et al. [Reference Wang and Liu27, Lemma 3.2] have previously demonstrated this for the case of
$q\in \left(1, 1+ \delta/2\right)$. Our results extend their findings to the range
$q\in \left(1, 1+ \delta\right)$.
Lemma 4.3. Assume that conditions A1 and A2 are satisfied, and that there is a constant ϵ > 0 such that
$ \mathbb{E}\left ( \frac{Y_{i,0} }{m_{i,0}} \right ) ^{\epsilon } \lt \infty , i=1,2 $. Then, for
$i=1, 2$ and
$\ q \in (1, 1+\delta)$, the following two inequalities hold

Proof. Fix
$i\in\{1, 2\}.$ We decompose
$\mathbb{E}|\log W_{i, \infty}|^q$ as follows

For the first term in (4.21), it is crucial to note that there exists a constant C > 0 such that
$\left|\log x\right|^q1_{\left \{x \gt 1 \right \} } \le Cx^{\epsilon}$ holds for any x > 0. Therefore, we have

Observe that
$p_{0} =0$ a.s. and σ > 0 imply
$m_{0} \gt 1$, thus
$ \mathbb{E}m_{0}^{-\epsilon } \lt 1$. From Fatou’s lemma and the work of Wang and Liu [Reference Wang and Liu27], we deduce that under the conditions of Lemma 4.3, we have
$ \mathbb{E}W_{i, \infty}^{\epsilon } \lt \infty . $ Thus,

For the second term, by Markov’s inequality and
$\phi_i (t) \le \tilde{\phi}_{i}(t)$, we have

Clearly,

Based on Lemma 4.2 and
$q \lt 1+\delta$, we can derive the following:

Substituting (4.25) and (4.26) into (4.24), we obtain

Therefore, by (4.21), (4.23), and (4.27), we obtain the first conclusion in (4.20).
Applying a truncation similar to that used for
$\mathbb{E}\left| \log W_{i, \infty}\right|^q$, we prove the second conclusion in (4.20). Using the result in [Reference Wang and Liu26], we obtain

Since
$x\mapsto\left| \log x\right|^q \mathbf{1}_{\{x\leq1\}}$, $q \gt 1,$ is a decreasing function and
$ W_{i,n} \ge \tilde{W} _{i,n} $, we have

For the last inequality, see [Reference Fan, Hu, Wu and Ye6]. Combining the above results, we see that
$ \sup_{n\in \mathbb{N}}\mathbb{E}\left|\log W_{i,n}\right|^q \lt \infty.$ This completes the proof of Lemma 4.3.
Lemma 4.4. Assume that conditions A1, A2, and A3 are satisfied. Then there exists a constant
$\gamma\in(0,1)$ such that

Proof. We first prove the case i = 1. Note that
$\log {W_{1,n+1} } - \log {W_{1,n} }=\log\left ( 1+\eta _{1,n} \right ), $ where

Under
$\mathbb{P_{\xi}}$, the sequence
$ \left \{\frac{X_{1,n,i} }{m_{1,n} }-1 \right \}_{i\ge 1} $ consists of i.i.d. random variables with zero mean, independent from
$\{Z_{1,n}\}$, and the sequence
$ \left \{\frac{Y_{1,n}}{m_{1,n}} \right\}$ is also independent from
$\{Z_{1,n}\}$. Choose
$ p\in \left ( 1,2 \right ) $ such that A2 and A3 hold. Using the convexity inequality
$ \left | x+y \right | ^{p} \le 2^{p-1} \left ( \left | x \right |^{p} + \left | y \right |^{p}\right )$ and the Marcinkiewicz–Zygmund inequality, we get

By Grama et al. [Reference Grama, Liu and Miqueu7], for p > 1 we have
$ \mathbb{E}\tilde{Z} _{1,n} ^{1-p} \le \left ( \mathbb{E} \tilde{Z} _{1,1} ^{1-p} \right )^{n} $. Since
$ Z_{1,n} \ge \tilde{Z} _{1,n} $ and p > 1, we can obtain

Substituting (4.29) into (4.28), we obtain

where
$\delta _{1}=\left (\mathbb{E}\left [ m_{0}^{\,1-p} \right ] \right ) ^{1/p} \in \left ( 0,1 \right ),$

Fix
$ M \in \left ( 0,1 \right ) $. By decomposition and standard truncation, we have

It is obvious that there exists a constant C > 0 such that for all
$ x \gt -M, $
$ \left | \log(1 + x) \right | \le C\left | x \right | $. By (4.30), we get

By Lemma 4.3, for any
$ r \in(0, p)$ and under the conditions of Lemma 4.4, we have

Let
$ r,s \gt 1 $ satisfy
$ \frac{1}{r}+\frac{1}{s} =1 $. By Hölder’s inequality and Markov’s inequality, we have

Combining (4.31) and (4.32), we obtain

By the triangle inequality, for all
$ k \in \mathbb{N} $, we have

Letting
$ k\to \infty $ and applying Fatou’s lemma, we obtain
$ \mathbb{E}|\log W_{1,\infty}-\log W_{1,n}| \lt C_{1}\delta _{1}^{n}.$ Similar to the proof above, we can obtain

Then, we can obtain

The following lemma plays a crucial role in the proof of Theorem 2.2.
Lemma 4.5. Assume that conditions A1, A2, and A3 are satisfied. Let
$\delta'$ be a constant such that
$\delta' \in (0, \delta).$ Then for all
$ x \in \mathbb{R} $,

and

Proof. We prove only (4.33); the same method applies to (4.34). Without loss of generality, assume that
$ m\leq n$. For all
$x \in \mathbb{R},$ the following inequality holds,

where

It is known that
$Z_{1,n} \geq 1$
$\mathbb{P}$-almost surely and
$ V_{m,n, \rho} \asymp m^{-1/2}$ as
$m\rightarrow \infty$; hence there exists a positive constant C such that

First, we prove (4.33) when
$x \leq - C m^{1/2}.$ From the inequality above, we deduce

hence
$P_1=0 $. For
$P_2,$ note that

thus, by Lemma 4.1, Markov’s inequality, and
$\mathbb{E} W_{2,m} \lt \infty $, we can obtain that for all
$ x \leq -C m^{1/2},$

For $I_1$, since
$ V_{m,n,\rho} \asymp m^{-1/2}$ as
$m \rightarrow \infty$, by the inequality

where

Applying Lemma 4.1, we obtain

For $I_1$, we can easily obtain
$ \exp\left \{-\frac{1}{4}m \right \} m^{\delta / 2}\left ( 1+Cm^{2+\delta} \right )= o \left ( 1 \right ) $ as
$ m \rightarrow \infty.$ Thus,

Combining the above estimates, we obtain

Hence, inequality (4.33) holds for all
$x \leq - C m^{1/2}. $
Next, we show that inequality (4.33) holds for all
$x\geq C m^{1/2}$. By Lemma 4.1 and the inequality

we establish that for all
$x\geq 0,$

To complete the proof, we now show that (4.33) holds for
$|x| \lt C m^{1/2}$. Consider the following notation: for all
$0\leq k \leq m-1,$

Let
$ \alpha_{m}= m^{-\delta/2}$ and
$k=[ m^{1-\delta/2} \,]$, where
$ [t]$ denotes the largest integer not exceeding t. From equation (3.16), we deduce that for all
$ x \in \mathbb{R}$,

We first provide an estimate for the first term on the RHS of (4.40). Let

Due to the independence between
$ T_{m, n,k} $ and
$(\tilde{T}_{m,n, k},H_{m,n,k}) , $ we have

Denote
$C_{m,n,k}^2= \textrm{Var} (T_{m,n,k})$; then
$ C_{m,n,k}= 1 + O(k/n)\nearrow 1 $ as
$ m \rightarrow \infty.$ By Lemma 4.1, for all
$x \in \mathbb{R},$ we have

By the mean value theorem, for all
$x \in \mathbb{R},$

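A standard way to carry out this mean value theorem step (a sketch, using $\sup_{t\in\mathbb{R}}|t|\varphi(t)=1/\sqrt{2\pi e}$ for the standard normal density $\varphi$): for some $\xi$ between $x$ and $x/C_{m,n,k}$,

```latex
\Bigl|\Phi\Bigl(\tfrac{x}{C_{m,n,k}}\Bigr)-\Phi(x)\Bigr|
=\varphi(\xi)\,|x|\,\Bigl|\tfrac{1}{C_{m,n,k}}-1\Bigr|
\;\le\; C\,\bigl|C_{m,n,k}-1\bigr|,
```

since $\varphi(\xi)|x|$ stays bounded as $C_{m,n,k}\to 1$.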
Combining (4.42) and (4.43), we deduce that for all
$x \in \mathbb{R},$

Therefore, we have for all
$x \in \mathbb{R},$

where

and

For $J_1$, we have for all
$x \in \mathbb{R},$

Then by the mean value theorem, we have for all
$x \in \mathbb{R},$

thus

where

and

Based on Lemma 4.3, it is evident that for all
$x \in \mathbb{R},$

For
$J_{12},$ we can make the following estimate, for all
$x \in \mathbb{R},$

Denote
$\tilde{C}_{m,n,k}^2= \textrm{Var}(\tilde{T}_{m,n, k} )$, then we can establish that
$ \tilde{C}_{m,n,k}^2\asymp \frac{1}{m^{\delta/2}}.$ Now, let
$\delta' \in (0, \delta)$. Applying Lemma 4.1, we can conclude that for all
$x \in \mathbb{R},$

Let
$\tau= 1+ \frac{\delta + \delta'}{2+2\delta-\delta'}.$ We have the following relationship:

Applying Lemma 4.3, we have

Thus

Using Hölder’s inequality with $\iota$ satisfying
$\frac{1}{\tau} + \frac{1}{\iota}=1$, we have

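For reference, Hölder's inequality in the form used here: for conjugate exponents $\tau,\iota \gt 1$ with $\frac{1}{\tau}+\frac{1}{\iota}=1$ and random variables $X, Y$,

```latex
\mathbb{E}\,|XY|\;\le\;\bigl(\mathbb{E}\,|X|^{\tau}\bigr)^{1/\tau}\,\bigl(\mathbb{E}\,|Y|^{\iota}\bigr)^{1/\iota}.
```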
Combining inequalities (4.48) and (4.49), we have for all
$|x| \leq C m^{1/2},$

For
$J_{13},$ we have for all
$x \in \mathbb{R},$

Let
$p' = 1 + \delta/2$. By Markov’s inequality and Lemma 4.3, for all
$|x| \leq C m^{1/2},$ we have

and, similarly to (4.51) with
$p'' =\frac{1}{2}(\delta+\delta') ,$

Hence, we have for all
$|x| \leq C m^{1/2}, $

Substituting (4.47), (4.50), and (4.52) into (4.46), for all
$|x| \leq C m^{1/2},$ we conclude

Next, we consider $J_2$. By an argument similar to the proof of (4.48), we can conclude that for all
$|x| \leq C m^{1/2}, $

For $J_3$, using arguments similar to those in (4.48) and (4.51), we obtain for all
$ |x| \leq C m^{1/2}, $

Substituting (4.53)–(4.55) into (4.45), for all
$|x| \leq C m^{1/2},$ we have

We now bound the tail probability
$\mathbb{P}\left(|D_{m, n , k}|\geq \alpha_{m}\right)$. By Markov’s inequality and Lemma 4.4, there exists a constant
$\gamma \in (0,1)$ such that for all
$ - m \lt x \lt m$,

The last inequality follows because
$ m^{\delta-1 }\left ( 1+m ^{2+\delta }\right ) \gamma ^{m} =o\left ( 1 \right )$ as $m\rightarrow \infty$. Combining (4.40), (4.56) and (4.57), we conclude that (4.33) holds for all
$|x| \leq C m^{1/2}. $ This completes the proof of Lemma 4.5.
$\hfill\square$
Proof of Theorem 2.2
Notice that

By Lemma 4.1 and the fact that
$V_{m,n, \rho} \asymp \sqrt{m^{-1 } + n^{-1 } } $, we can establish the following result for all
$x \in \mathbb{R},$

Combining (4.59) with Lemma 4.5 and substituting into (4.58), for all
$x \in \mathbb{R},$ we obtain

This completes the proof of Theorem 2.2.
5. Proof of Theorem 2.3
To prove Theorem 2.3, we first establish the existence of harmonic moments of positive order α > 0 for both
$W_{1, n}$ and
$W_{2, m}$ of the BPIRE. Additionally, we will make use of a lemma from [Reference Fan, Hu, Wu and Ye6].
Lemma 5.1. Assume A3, A4, and A5 hold. There exists a constant
$ a_0 \gt 0$ such that for all
$\alpha \in (0, a_0),$ the following inequalities hold

and

Proof. Let
$i=1, 2.$ By the fact that

we obtain

where Γ is the gamma function.
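The representation behind (5.63) is presumably the standard gamma-function identity $x^{-\alpha}=\frac{1}{\Gamma(\alpha)}\int_0^\infty t^{\alpha-1}e^{-tx}\,dt$ for $\alpha, x \gt 0$; applied under the expectation with the Laplace transform $\phi(t)=\mathbb{E}\,e^{-tW}$ (notation inferred from the next paragraph), it gives

```latex
\mathbb{E}\,W^{-\alpha}
=\frac{1}{\Gamma(\alpha)}\int_{0}^{\infty}t^{\alpha-1}\,\mathbb{E}\,e^{-tW}\,dt
=\frac{1}{\Gamma(\alpha)}\int_{0}^{\infty}t^{\alpha-1}\phi(t)\,dt.
```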
Since
$0\leq \phi_i(t)\leq 1$ for $t \geq 0$, the first term in (5.63) satisfies, for any α > 0,

For the second term in (5.63), by (4.56) in [Reference Fan, Hu, Wu and Ye6] and Lemma 4.1 in [Reference Liu20], if
$0 \lt \alpha \lt a_{0}$, we have

Combining (5.63), (5.64), and (5.65), we conclude that (5.61) holds.
Now, we prove inequality (5.62). Note that the function
$x\mapsto x^{-\alpha}\ (\alpha \gt 0,\ x \gt 0)$ is non-negative and convex. Then by Lemma 2.1 in [Reference Huang and Liu9], we have

This completes the proof of Lemma 5.1.
Lemma 5.2. Assume A3, A4, and A5 hold. Then for all
$ |x| \leq \sqrt{\log (m \wedge n)} ,$

and

Proof. Since A4 and A5 imply A1 and A2, inequalities (5.66) and (5.67) follow directly from Lemma 4.5 for
$ |x| \leq 1.$ Therefore, it remains to establish the inequalities for
$ 1 \leq |x| \leq \sqrt{\log (m \wedge n)} .$ Moreover, we shall only present a proof of (5.66) for
$ 1 \leq |x| \leq \sqrt{\log (m \wedge n)}$, as the proof of (5.67) follows a similar approach.
Without loss of generality, we assume that
$ m\leq n$. For
$|x| \leq m^{1/6},$ using Cramér’s moderate deviations for independent random variables, we derive

For
$|x| \gt m^{1/6}$, by Bernstein’s inequality for independent random variables,

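Bernstein's inequality is invoked in its classical form (stated for reference; the exact variant applied in the original may differ): if $X_1,\dots,X_n$ are independent and centered with $|X_i|\le b$ and $\sigma^2=\sum_{i=1}^n\mathbb{E}X_i^2$, then for all $x\ge 0$,

```latex
\mathbb{P}\Bigl(\sum_{i=1}^{n}X_i\ge x\Bigr)
\;\le\;\exp\Bigl\{-\frac{x^{2}}{2(\sigma^{2}+bx/3)}\Bigr\}.
```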
Combining (5.68) and (5.69), we obtain

From the last inequality, for all
$ x \in \mathbb{R} ,$ we deduce

Therefore, we have for all
$x \in \mathbb{R},$

where


and

Denote
$\tilde{C}_{m,n,k}^2= \textrm{Var}(\tilde{Y}_{m,n, k} )$; then
$\tilde{C}_{m,n,k}^2=O(1/\sqrt{m})$ as
$m\rightarrow \infty$. By the mean value theorem, the upper bound of $J_1$ satisfies, for
$1\leq |x| \leq \sqrt{\log m},$

Therefore

where

and

By Lemma 4.3, we obtain

For all
$1\leq |x| \leq \sqrt{\log m},$

For
$J_{12},$ the following bound holds for
$1\leq |x| \leq \sqrt{\log m},$

By Bernstein’s inequality, for all
$x \in \mathbb{R},$

and by the Cauchy–Schwarz inequality,

Hence, for all
$1\leq|x| \leq \sqrt{\log m } ,$ we have

For $J_{13}$, the following inequality holds for
$1 \leq |x| \leq \sqrt{\log m},$

Note that
$V_{m,n,\rho}\asymp \frac{1}{\sqrt{m} }$ and
$\tilde{C}_{m,n,k} \asymp \frac{1}{m^{1/4} }$. It is evident that for all
$1\leq |x| \leq \sqrt{\log m},$

By Lemma 5.1 and Markov’s inequality, for
$1\leq |x| \leq \sqrt{\log m},$

for $C_0$ sufficiently large.
Similarly, we have
$T_2\leq C \exp \{-\frac12 x^2 \}.$ Thus, for
$1\leq |x| \leq \sqrt{\log m},$

By Lemma 5.1 and the inequality
$|\log x|^2 \leq C_\alpha (x+ x^{-\alpha})$ for all
$\alpha, x \gt 0$, we observe that

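This elementary inequality can be verified by splitting at $x=1$ and using $\log t\le t/e$ for $t \gt 0$:

```latex
|\log x|^{2}\le\frac{4}{e^{2}}\,x\quad(x\ge 1),
\qquad
|\log x|^{2}=\Bigl(\frac{2}{\alpha}\log x^{-\alpha/2}\Bigr)^{2}\le\frac{4}{e^{2}\alpha^{2}}\,x^{-\alpha}\quad(0 \lt x \lt 1),
```

so $C_\alpha=\frac{4}{e^{2}}\max\{1,\alpha^{-2}\}$ suffices.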
By Markov’s inequality and the Cauchy–Schwarz inequality, for all
$1\leq |x| \leq \sqrt{\log m},$ we have

for $C_0$ sufficiently large. Thus, we have for all
$1\leq |x| \leq \sqrt{\log m},$

From (5.71), for all
$1 \leq |x| \leq \sqrt{\log m},$ we have

For $J_2$, we obtain, for all
$1 \leq |x| \leq \sqrt{\log m},$

By a similar argument as in the previous cases, for $J_3$, for all
$1 \leq |x| \leq \sqrt{\log m},$ we have

Substituting (5.75)–(5.77) into (5.70), we get for all
$1\leq |x| \leq \sqrt{\log m},$

Following the method of (4.57), the second term on the RHS of (4.40) satisfies for
$1 \leq |x| \leq \sqrt{\log m},$

Combining (4.40), (5.78) and (5.79), we conclude that (5.66) holds for
$1\leq |x| \leq \sqrt{\log m}.$ This completes the proof of Lemma 5.2.
$\hfill\square$
Proof of Theorem 2.3
We present a proof of Theorem 2.3 for the case of
$\frac{\mathbb{P} ( R_{m,n} \geq x )}{1-\Phi(x)},$
$ x \geq 0.$ The case
$\frac{\mathbb{P} ( -R_{m,n} \geq x )}{\Phi(-x)}$ can be dealt with similarly due to the symmetry between m and n. We prove Lemmas 5.3 and 5.4, then combine them to establish Theorem 2.3. To avoid trivial cases, we assume that
$m \wedge n \geq 2$.
The following lemma gives the upper bound in Theorem 2.3.
Lemma 5.3. Assume A3, A4, and A5 hold. Then, for all
$0 \leq x \leq c \, \sqrt{m \wedge n} ,$ we have

Proof. We will begin by examining the situation when
$0 \leq x \leq \sqrt{\log(m \wedge n)}$. Notice that

For the first term of (5.81), we can apply Cramér’s moderate deviations for independent random variables (see inequality (1) in [Reference Fan, Grama and Liu5]). We obtain for all
$0 \leq x \leq c \sqrt{m \wedge n },$

Applying Lemma 5.2 and inequality (4.37), we obtain the following for the second term when
$0 \leq x \leq \sqrt{\log(m \wedge n)}$:

Since
$1+x \leq e^x,$ the above inequalities imply

Thus, (5.80) holds for all
$0 \leq x \leq \sqrt{\log(m \wedge n)} $.
Next, we consider the case
$\sqrt{\log(m \wedge n)} \leq x \leq c\, \sqrt{m\wedge n}$. Clearly, it holds for all
$x \in \mathbb{R},$

where

and α is given by Lemma 5.1.
Now, let us provide estimations for
$I_1, I_2,$ and I 3. Condition A4 implies that
$\sum_{i=1}^{n+m} \eta_{m,n,i}$ is a sum of independent random variables with finite moment generating functions. Using Cramér’s moderate deviations for independent random variables (cf. [Reference Fan, Grama and Liu5]), we can deduce the following for all
$1\leq x \leq c \sqrt{m\wedge n},$

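For orientation, Cramér-type moderate deviation results of this kind (cf. inequality (1) in [Reference Fan, Grama and Liu5]) have roughly the following shape; the precise constants and range should be read from the source. For a standardized sum $S_n/\sigma_n$ of independent centered random variables with finite exponential moments, there exist constants $c, C \gt 0$ such that

```latex
\Bigl|\log\frac{\mathbb{P}(S_n/\sigma_n\ge x)}{1-\Phi(x)}\Bigr|
\;\le\; C\,\frac{1+x^{3}}{\sqrt{n}},
\qquad 0\le x\le c\sqrt{n}.
```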
By inequality (4.37), for all
$x\geq 1$ and
$\varepsilon_n \in (0, \frac{1}{2}]$, we have

Since
$V_{m,n,\rho}\asymp \frac{1}{\sqrt{m}}$, we have

It holds for all
$1 \leq x \leq c\, \sqrt{m \wedge n} ,$

By Markov’s inequality and (3.18), it is easy to see that for all
$x \geq \sqrt{\log (m \wedge n)} ,$

and

Combining (5.84)–(5.86), we obtain for all
$\sqrt{\log (m \wedge n) }\leq x \leq c\, \sqrt{m \wedge n} ,$

which implies the desired inequality for all
$\sqrt{\log (m \wedge n) }\leq x \leq c\, \sqrt{m \wedge n} .$
$\hfill\square$
The following lemma establishes the lower bound in Theorem 2.3.
Lemma 5.4. Assume that conditions A3, A4, and A5 are satisfied. Then for all
$0 \leq x \leq c \, \sqrt{m \wedge n} ,$

Proof. The lower bound can be established following a similar approach to the upper bound. For example, to establish (5.87) for all
$\sqrt{\log(m \wedge n)} \leq x \leq c\sqrt{m \wedge n},$ we can observe that

where

and α is given by Lemma 5.1. The remainder of the proof parallels the argument in Lemma 5.3.
Acknowledgements
The authors are grateful to anonymous referees and Professor Quansheng Liu for their very valuable comments and remarks, which significantly contributed to improving the quality of the paper.
Funding statement
This work was supported by the National Natural Science Foundation of China (Grant no. 12271062).