1 Introduction
Thouvenot and Weiss showed in [12] that for every aperiodic, probability-preserving system
$(X,{\mathcal B},m,T)$
and for a random variable Y, there exist a function
$f:X\to {\mathbb R}$
and a sequence
$a_n\to \infty $
such that
$\frac{1}{a_n}\sum _{k=0}^{n-1}f\circ T^k\xrightarrow [n\to \infty ]{d}Y.$
This result means that any distribution can be approximated by observations of an aperiodic, probability-preserving system. See also [1] for a refinement of this distributional convergence result for positive random variables and the subsequent [6], which is concerned with the possible growth rate of the normalizing constants
$a_n$
. The results mentioned above were preceded by research into central limit theorems (CLTs) in dynamical systems with convergence towards a normal law; see, for example, [4, 13].
Given a stochastic process
$Y=(Y(t)) _{t\in {\mathbb R}}$
whose sample paths are in a Polish space
$\mathcal {D}$
, a natural question that arises is whether we can simulate it using our prescribed dynamical system. That is, do there exist a measurable function
$f:X\to {\mathbb R}$
and normalizing constants
$a_n$
and
$b_n$
such that the processes
$Y_n:X\to \mathcal {D}$
defined by
$Y_n(t)(x)=({1}/{a_n})(\sum _{k=0}^{[nt]}f\circ T^k(x)-b_{[nt]})$
converge in distribution to Y?
As noted by Gouëzel in [6], by a famous result of Lamperti (see [3, Theorem 8.5.3]), any process Y which can be simulated in this manner must be self-similar and the normalizing constants need to be of the form
$a_n=n^{\alpha }L(n)$
where
$L(n)$
is a slowly varying function and
$\alpha $
is the self-similarity index of the process. Perhaps due to this, results about the simulation of processes are rather scarce; to the best of our knowledge the only such result is [13], where the second author answered a question of Burton and Denker [4] and showed that every aperiodic, probability-preserving system can simulate a Brownian motion with the classical normalizing constants
$a_n=\sqrt {n}$
.
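For the reader's convenience, we recall (in a standard formulation, paraphrased rather than quoted from [3]) what Lamperti's rigidity yields here: if $({1}/{a_n})(S_{[nt]}(f)-b_{[nt]})$ converges in distribution to a non-degenerate process $(Y(t))_{t\geq 0}$, then there is an index $\alpha>0$ such that for every $c>0$,
$(Y(ct))_{t\geq 0}\overset {d}{=}(c^{\alpha }Y(t))_{t\geq 0},$
and $a_n$ is necessarily regularly varying of index $\alpha $, that is, $a_n=n^{\alpha }L(n)$ with L slowly varying.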
An important subclass of self-similar processes is the class of
$\alpha $
-stable Lévy motions which we describe in the next subsection. These include Brownian motion
$(\alpha =2)$
and Cauchy–Lévy motion (
$\alpha =1$
) which is a process with independent, Cauchy-distributed increments, often used to model heavy-tailed phenomena.
In this work we show that given an aperiodic, ergodic, probability-preserving transformation
$(X,{\mathcal B},m,T)$
:
• every $\alpha $-stable Lévy motion with $\alpha \in (0,1)$ can be simulated by this transformation;
• every symmetric $\alpha $-stable Lévy motion can be simulated using this transformation.
One may ask about general
$\alpha $
-stable Lévy motions when
$\alpha \in [1,2)$
. In this regard we extend the results of [9] and show a classical CLT result for any
$\alpha $
-stable distribution when
$\alpha \neq 1$
.
From a bird’s-eye view, the methods are similar to those in [9, 13] in the sense that the process is constructed by a sum of coboundaries and that in any ergodic and aperiodic dynamical system, for every natural number n, there is a function f such that the sequence
$f, f\circ T, \ldots , f\circ T^n$
has the distribution of a given discrete-valued independent and identically distributed (i.i.d.) sequence
$X_0, \ldots , X_n$
(Proposition 2 in [8]). We remark that our work shows that any ergodic dynamical system can simulate these
$\alpha $
-stable processes but in order to have algorithms which converge fast one may want to choose a special dynamical system; such works in the context of
$\alpha $
-stable processes were carried out, for example, in [5, 14].
The coboundaries used in the preceding papers naturally lead to a convergence towards symmetric laws. A natural challenge, which is treated in full generality in this work, is to get CLT convergence with i.i.d. scaling towards skewed stable limits. We note that the case where
$1\leq \alpha <2$
(Theorem 2.10) is especially challenging.
The invariance principle was previously studied only in [13], where the Hilbert space structure could be used and the convergence is with respect to the metric of uniform convergence on the space of continuous functions. The methods of this paper are different; even in the case of a symmetric stable process limit, the function constructed here is different and makes use of linear combinations of skewed stable functions.
1.1 Definitions and statement of the theorems
A random variable Y is stable if there exist a sequence
$Z_1,Z_2,\ldots $
of i.i.d. random variables and sequences
$a_n,b_n$
such that
$\frac{\sum _{k=1}^{n}Z_k-a_n}{b_n}\xrightarrow [n\to \infty ]{d}Y.$
In other words, Y arises as a distributional limit of a CLT; see [7]. Furthermore, in this case
$b_n$
is regularly varying of index
${1}/{\alpha }$
which implies that
$b_n=n^{1/\alpha }L(n)$
, where
$L(n)$
is a slowly varying function. A stable distribution is uniquely defined by its characteristic function (Fourier transform). Namely, a random variable is
$\alpha $
-stable,
$0<\alpha \leq 2$
, if there exist
$\sigma>0$
,
$\beta \in [-1,1]$
and
$\mu \in {\mathbb R}$
such that for all
$\theta \in {\mathbb R}$
,
$\mathbb {E}(e^{i\theta Y})=\begin{cases}\exp (-\sigma ^\alpha |\theta |^\alpha (1-i\beta \operatorname {sgn}(\theta )\tan ({\pi \alpha }/{2}))+i\mu \theta ), & \alpha \neq 1,\\ \exp (-\sigma |\theta |(1+i\beta ({2}/{\pi })\operatorname {sgn}(\theta )\ln |\theta |)+i\mu \theta ), & \alpha =1.\end{cases}$
The constant
$\sigma>0$
is the dispersion parameter and
$\beta $
is the skewness parameter. In this case we will say that Y is an
$\alpha $
-stable random variable with dispersion parameter
$\sigma $
, skewness parameter
$\beta $
and shift parameter
$\mu $
, or in short Y is an
$S_\alpha (\sigma ,\beta ,\mu )$
random variable. If
$\mu =\beta =0$
and
$\sigma>0$
then the random variable is symmetric
$\alpha $
-stable and we will say that Y is
$S\alpha S(\sigma )$
.
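Two classical special cases may help orient the reader (standard facts, not part of the paper's statements): for $\alpha =2$ the skewness term vanishes since $\tan (\pi )=0$ and the characteristic function becomes $e^{-\sigma ^2\theta ^2+i\mu \theta }$, a normal law with mean $\mu $ and variance $2\sigma ^2$; for $\alpha =1$ and $\beta =0$ it becomes $e^{-\sigma |\theta |+i\mu \theta }$, the Cauchy law with location $\mu $ and scale $\sigma $.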
A probability-preserving dynamical system is a quadruplet
$(\mathcal {X},{\mathcal B},m,T)$
where
$(\mathcal {X},{\mathcal B},m)$
is a standard probability space, T is a measurable self-map of $\mathcal {X}$ and
$m\circ T^{-1}=m$
. The system is aperiodic if the collection of all periodic points is a null set. It is ergodic if every T-invariant set is either a null or a conull set. Given a function
$f:X\to {\mathbb R}$
, we write
$S_n(f):=\sum _{k=0}^{n-1}f\circ T^k$
for the corresponding random walk.
Recall that if
$Y_n$
and Y are random variables taking values in a Polish space
$\mathbb {X}$
, then
$Y_n$
converges to Y in distribution if for every bounded continuous function
$G:\mathbb {X}\to {\mathbb R}$
,
$\lim _{n\to \infty }\mathbb {E}(G(Y_n))=\mathbb {E}(G(Y)).$
Here
$\mathbb {E}$
denotes the expectation with respect to the probability measure of the space on which the random variable in question is defined.
Theorem 1.1. (See Theorem 2.10)
For every ergodic and aperiodic probability-preserving system
$(\mathcal {X},{\mathcal B},m,T)$
,
$\alpha>1$
and
$\beta \in [-1,1]$
, there exist a function
$f:X\to {\mathbb R}$
and
$B_n\to \infty $
such that
$\frac{S_n(f)+B_n}{n^{1/\alpha }}\xrightarrow [n\to \infty ]{d}S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},\beta ,0).$
A process
${\mathbb {W}}=\big ({\mathbb {W}}_s\big )_{s\in [0,1]}$
is an
$S_\alpha (\sigma ,\beta ,0)$
Lévy motion if it has independent increments and for all
$0\leq s<t\leq 1$
,
${\mathbb {W}}_t-{\mathbb {W}}_s$
is
$S_\alpha (\sigma \kern -2pt\sqrt [\alpha ]{t-s},\beta ,0)$
distributed. The existence of an
$S_\alpha (\sigma ,\beta ,0)$
Lévy motion can be demonstrated via a functional CLT (also called a weak invariance principle); the details given below appear in [10].
Consider the vector space
$D([0,1])$
of functions
$f:[0,1]\to {\mathbb R}$
which are right-continuous with left limits, also known as càdlàg functions. Equipped with the Skorohod
$J_1$
topology,
$D([0,1])$
is a Polish space. Now a natural construction of a distribution on
$D([0,1])$
is to take
$X_1,X_2,\ldots ,$
an i.i.d. sequence of random variables and
$a_n>0$
and define a
$D([0,1])$
-valued random variable
${\mathbb {W}}_n$
via
${\mathbb {W}}_n(t):=a_nS_{[nt]}(X),$
where
$S_n(X):=\sum _{k=1}^nX_k$
and
$[\cdot ]$
is the floor function. By [10, Corollary 7.1], if
$X_i$
are
$S_\alpha (\sigma ,\beta ,0)$
and
$a_n=n^{-1/\alpha }$
, then
${\mathbb {W}}_n$
converges in distribution (as a sequence of random variables on the Polish space
$D([0,1])$
with the
$J_1$
topology), its limit being an
$S_\alpha (\sigma ,\beta ,0)$
Lévy motion. The main results of this work are functional CLTs of this type in the setting of dynamical systems.
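To make this convergence concrete, here is a brief, purely illustrative Python sketch (not part of the paper's arguments) of the construction in [10, Corollary 7.1]; it assumes that SciPy's levy_stable.rvs(alpha, beta, ...) samples an $S_\alpha (1,\beta ,0)$-type law (SciPy has its own parameterization conventions, which should be checked against the convention above).

import numpy as np
from scipy.stats import levy_stable

alpha, beta, n = 1.5, 0.5, 10_000
rng = np.random.default_rng(0)
# i.i.d. alpha-stable increments X_1, ..., X_n
X = levy_stable.rvs(alpha, beta, size=n, random_state=rng)
# the D([0,1])-valued partial sum process on the grid t = j/n:
# W_n(t) = n^{-1/alpha} * S_[nt]
W = np.cumsum(X) / n ** (1.0 / alpha)
print(W[-1])  # W_n(1), approximately alpha-stable for large n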
Theorem 1.2. Let
$(\mathcal {X},{\mathcal B},m,T)$
be an ergodic and aperiodic probability-preserving system.
(Theorem 2.5) For every $\alpha \in (0,1)$, $\sigma>0$ and $\beta \in [-1,1]$, there exists $f:X\to {\mathbb R}$ such that ${\mathbb {W}}_n(f)(t):=({1}/{n^{1/\alpha }})S_{[nt]}(f)$ converges in distribution to an $S_\alpha (\sigma ,\beta ,0)$ Lévy motion.
(Theorem 2.6) For every $\alpha \in [1,2)$ and $\sigma>0$, there exists $f:X\to {\mathbb R}$ such that ${\mathbb {W}}_n(f)(t):=({1}/{n^{1/\alpha }})S_{[nt]}(f)$ converges in distribution to an $S_\alpha S(\sigma )$ Lévy motion.
We remark that while the results in Theorem 2.5 provide a function f whose partial sum process
${\mathbb {W}}_n(f)$
converges to an
$S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},\beta ,0)$
Lévy motion, the scaling property of
$\alpha $
-stable distributions gives that, writing
$c:={\sigma }/{\kern -2pt\sqrt [\alpha ]{\ln (2)}}$
,
${\mathbb {W}}_n(cf)$
converges to an
$S_\alpha (\sigma ,\beta ,0)$
Lévy motion. A similar remark is true with regard to Theorem 2.6.
1.2 Notation
Here and throughout,
$\log (x)$
denotes the logarithm of x in base 2, while
$\ln (x)$
denotes the natural logarithm of x.
Given two non-negative sequences
$a_n$
and
$b_n$
, we write
$a_n\lesssim b_n$
if there exists
$C>0$
such that
$a_n\leq Cb_n$
for all
$n\in {\mathbb N}$
; and if, in addition,
$b_n>0$
for all n then we write
$a_n\sim b_n$
if
$\lim _{n\to \infty }({a_n}/{b_n})=1$
.
For a function
$f:X\to {\mathbb R}$
and
$p>0$
,
$\|f\|_p:=(\int |f|^p\,dm)^{1/p}$
.
2 Construction of the function
2.1 Target distributions
Let
$(\Omega ,\mathcal {F},{\mathbb P})$
be a probability space. Let
$\{X_k(m):\ k,m\in {\mathbb N}\}$
be independent random variables so that for every
$k\in {\mathbb N}$
,
$X_k(1),X_k(2),X_k(3),\ldots $
are i.i.d.
$S_\alpha (\sigma _k,1,0)$
random variables with
$\sigma _k^\alpha ={1}/{k}$
.
For every
$k,m\in {\mathbb N}$
, define
$Y_k(m)=X_k(m)1_{[2^k\leq X_k(m)\leq 4^k]}$
and its discretization on a grid of scale
$4^{-k}$
defined by
$Z_k(m):=4^{-k}[4^kY_k(m)].$
The following fact easily follows from the definitions.
Fact 2.1. For every
$k\in {\mathbb N}$
,
$Z_k(1),Z_k(2),\ldots $
are i.i.d. random variables supported on the finite set
$\{0\}\cup \{2^k,2^k+4^{-k},\ldots ,4^k\}$
, and for all
$m\in {\mathbb N}$
,
$0\leq Y_k(m)-Z_k(m)\leq 4^{-k}.$
The construction of the cocycle will hinge on realizing a triangular array of the Z random variables in a dynamical system.
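The following hedged Python sketch (illustration only, with the same caveat as above about SciPy's stable parameterization) makes the truncation and discretization defining $Z_k(m)$ concrete.

import numpy as np
from scipy.stats import levy_stable

def sample_Z(alpha, k, m, seed=0):
    # m i.i.d. copies of Z_k: truncate X_k to [2^k, 4^k], then discretize
    rng = np.random.default_rng(seed)
    sigma_k = k ** (-1.0 / alpha)  # sigma_k^alpha = 1/k
    X = levy_stable.rvs(alpha, 1.0, scale=sigma_k, size=m, random_state=rng)
    Y = np.where((X >= 2.0 ** k) & (X <= 4.0 ** k), X, 0.0)  # Y_k(m)
    return np.floor(4.0 ** k * Y) / 4.0 ** k  # Z_k(m) on the 4^{-k} grid

print(sample_Z(0.8, 3, 5))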
2.2 Construction of the function
Let
$(\mathcal {X},{\mathcal B},m,T)$
be an ergodic, aperiodic, probability-preserving system. We first recall some definitions and the copying lemma of [8] and its application as in [9].
A finite partition of
$\mathcal {X}$
is measurable if all of its pieces (atoms) are Borel-measurable. Recall that a finite sequence of random variables
$X_1,\ldots ,X_n:\mathcal {X}\to {\mathbb R}$
, each taking finitely many values, is independent of a finite partition
$\mathcal {P}=(P)_{P\in \mathcal {P}}$
if for all
$s\in {\mathbb R}^n$
and
$P\in \mathcal {P}$
,
$m(P\cap \{(X_1,\ldots ,X_n)=s\})=m((X_1,\ldots ,X_n)=s)\,m(P).$
We will embed the triangular array using the following key proposition.
Proposition 2.2. [8, Proposition 2]
Let
$(\mathcal {X},{\mathcal B},m,T)$
be an aperiodic, ergodic, probability-preserving transformation and
$\mathcal {P}$
a finite measurable partition of
$\mathcal {X}$
. For every finite set A and
$U_1,U_2,\ldots ,U_n$
an i.i.d. sequence of A-valued random variables, there exists
$f:\mathcal {X}\to A$
such that
$(f\circ T^j)_{j=0}^{n-1}$
is distributed as
$(U_j)_{j=1}^n$
and
$(f\circ T^j)_{j=0}^{n-1}$
is independent of
$\mathcal {P}$
.
Using this, we deduce the following corollary.
Corollary 2.3. Let
$(\mathcal {X},{\mathcal B},m,T)$
be an aperiodic, ergodic, probability-preserving transformation and
$(Z_k(j))_{\{k\in {\mathbb N},1\leq j\leq 4^{k^2} \}}$
be the triangular array from §2.1. There exist functions
$f_k,g_k:\mathcal {X}\to \mathbb {R}$
such that
$(f_k\circ T^{j-1})_{\{k\in {\mathbb N},1\leq j\leq 4^{k^2} \}}$
and
$(g_k\circ T^{j-1})_{\{k\in {\mathbb N},1\leq j\leq 4^{k^2} \}}$
are independent and each is distributed as
$(Z_k(j))_{\{k\in {\mathbb N},1\leq j\leq 4^{k^2}\}}$
.
Proof. The sequence
$(Z_k(m))_{\{k\in {\mathbb N},1\leq m\leq 2\cdot 4^{k^2}\}}$
is a sequence of independent random variables and for each k,
$(Z_k(m))_{1\leq m\leq 2\cdot 4^{k^2}}$
are i.i.d. random variables which take finitely many values.
Proceeding verbatim as in the proof of [9, Corollary 4], one obtains a sequence of functions
$f_k:\mathcal {X}\to {\mathbb R}$
such that
$(f_k\circ T^{j-1})_{\{k\in {\mathbb N},1\leq j\leq 2\cdot 4^{k^2} \}}$
is distributed as
$(Z_k(j))_{\{k\in {\mathbb N},1\leq j\leq 2\cdot 4^{k^2}\}}$
. Setting
$g_k=f_k\circ T^{4^{k^2}}$
concludes the proof.
From now on let
$(\mathcal {X},{\mathcal B},m,T)$
be an aperiodic, ergodic dynamical system and
$(f_k)_{k=1}^\infty $
and
$(g_k)_{k=1}^\infty $
the functions from Corollary 2.3.
Lemma 2.4. We have that
$\#\{k\in {\mathbb N}: f_k\neq 0\ \text {or}\ g_k\neq 0\}<\infty $
, m-almost everywhere.
Proof. Since
$f_k$
and
$g_k$
are
$Z_k(1)$
distributed and
$X_k(1)$
is
$S_\alpha (\sigma _k,1,0)$
distributed, it follows from Proposition A.1 that
$m(f_k\neq 0)={\mathbb P}(Z_k(1)\neq 0)\leq {\mathbb P}(X_k(1)\geq 2^k)\leq \frac {C\sigma _k^\alpha }{2^{\alpha k}}=\frac {C}{k2^{\alpha k}},$
where C is a global constant which does not depend on k. By the union bound and stationarity (recall $g_k=f_k\circ T^{4^{k^2}}$), $m(f_k\neq 0\ \text {or}\ g_k\neq 0)\leq 2m(f_k\neq 0)$; since the right-hand side above is summable in k, the claim follows from the Borel–Cantelli lemma.
In what follows, we assume that
$\alpha \in (0,2)$
is fixed and
$f_k$
and
$g_k$
correspond to the functions in Corollary 2.3. In addition, we write for
$h:\mathcal {X}\to {\mathbb R}$
and
$n\in \mathbb {N}$
,

Define
$f:=\sum _{k=1}^{\infty }f_k\quad \text {and}\quad g:=\sum _{k=1}^{\infty }g_k.$
Note that by Lemma 2.4, f and g are well defined as the sum in their definition is almost surely a sum of finitely many functions. Recall that the (rescaled) partial sum process of a function
$h:\mathcal {X}\to {\mathbb R}$
is
${\mathbb {W}}_n(h)(t):=\frac {1}{n^{1/\alpha }}S_{[nt]}(h),\quad t\in [0,1].$
Theorem 2.5. Assume
$0<\alpha <1$
. Fix
$\beta \in [-1,1]$
and define
$h_k:=\Big (\frac {1+\beta }{2}\Big )^{1/\alpha }f_k-\Big (\frac {1-\beta }{2}\Big )^{1/\alpha }g_k\quad \text {and}\quad h:=\sum _{k=1}^{\infty }h_k.$
Then ${\mathbb {W}}_n(h)\Rightarrow ^d {\mathbb {W}}$
where
${\mathbb {W}}$
is an
$S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},\beta ,0)$
Lévy motion.
We also have a functional CLT version for general
$\alpha \in (0,2)$
when the limit is
$S\alpha S$
. Recall that the functions
$f_k$
and
$g_k$
are related by
$g_k=f_k\circ T^{4^{k^2}}$
.
Theorem 2.6. Assume
$\alpha \in [1,2)$
. Define
$h_k:=f_k-g_k\quad \text {and}\quad h:=\sum _{k=1}^{\infty }h_k=f-g.$
Then ${\mathbb {W}}_n(h)\Rightarrow ^d{\mathbb {W}}$
where
${\mathbb {W}}$
is an
$S_\alpha S(\kern -2pt\sqrt [\alpha ]{2\ln (2)})$
Lévy motion.
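The symmetry of the limit in Theorem 2.6 can be seen directly on characteristic functions (a standard computation, recorded here for convenience): if W and W' are independent $S_\alpha (\sigma ,1,0)$ random variables, then
$\mathbb {E}(e^{i\theta (W-W')})=\mathbb {E}(e^{i\theta W})\overline {\mathbb {E}(e^{i\theta W})}=e^{-2\sigma ^\alpha |\theta |^\alpha },$
so $W-W'$ is $S\alpha S(\kern -2pt\sqrt [\alpha ]{2}\sigma )$; applied to $f_k$ and $g_k=f_k\circ T^{4^{k^2}}$, this is the source of the factor 2 in $\kern -2pt\sqrt [\alpha ]{2\ln (2)}$.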
2.3 General CLT for
$\alpha>1$
Recall that a coboundary for a measure-preserving transformation is a function H for which there exists a function G, called a transfer function, with
$H=G-G\circ T$
. The resulting cocycle (sum process) of the coboundaries
$f_k - g_k$
from the proof of Theorem 2.6 converges to a symmetric
$\alpha $
-stable distribution. To get a skewed
$\alpha $
-stable limit we thus use a different kind of coboundaries as described below. Set
$D_k:=4^{\alpha k}$
,
$\varphi _k:=\frac {1}{D_k}\sum _{j=0}^{D_k-1}f_k\circ T^j$
and
$h_k:=f_k-\varphi _k$
. We note that the
$h_k$
and h in this subsection denote different functions than in the previous subsection.
Lemma 2.7. If
$\alpha \in (1,2)$
, then
$\sum _{k=1}^N h_k$
converges in
$L^1(m)$
and almost surely as
$N\to \infty $
.
Proof. By Fubini’s theorem it suffices to show that
$\sum _{k=1}^\infty \int |h_k|\,dm<\infty $
.
To that end, for a fixed k we have

where the last equality is true as T preserves m. Next
$f_k$
and
$Z_k(1)$
are equally distributed and

As
$\alpha>1$
, it follows from this and Corollary A.3 that there exists
$C>0$
such that for all
$k\in {\mathbb N}$
,

We conclude that

Following this, we write
$h=\sum _{k=1}^\infty h_k$
and throughout this subsection and §5, h always corresponds to this function. Note that for every
$k\in {\mathbb N}$
,
$\mathbb {E}(X_k(1)1_{[X_k(1)\leq 2^k]})$
exists, and write

Theorem 2.8. Assume
$\alpha \in (1,2)$
.
$({S_n(h)+B_n})/{n^{1/\alpha }}$
converges in distribution to an
$S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$
random variable.
The following claim gives the asymptotics of
$B_n$
.
Claim 2.9. For every
$\alpha \in (1,2)$
, there exists
$c_\alpha>0$
such that
$B_n=c_\alpha n(\log (n))^{1-{1}/{\alpha }} (1+o(1))$
as
$n\to \infty $
.
Proof. Recall that
$\sigma _k=k^{-1/\alpha }$
. Since
${2^k}/{\sigma _k}\to \infty $
as
$k\to \infty $
, it follows from the monotone convergence theorem that if Z is an
$S_\alpha (1,1,0)$
random variable, then

Now for every k,
$X_k(j)$
and
$\sigma _kZ$
are equally distributed. Consequently,

The claimed asymptotics now follows from this and

Now write
$\hat {\varphi }_k:=\frac {1}{D_k}\sum _{j=0}^{D_k-1}g_k\circ T^j,\qquad \hat {h}_k:=g_k-\hat {\varphi }_k,$
and
$\hat {h}:=\sum _{k=1}^\infty \hat {h}_k$
. Note that
$\hat {h}$
is well defined as for all k,
$\hat {h}_k=h_k\circ T^{4^{k^2}}$
so $\hat {h}$ is, like h, an $L^1(m)$ limit by Lemma 2.7.
Theorem 2.10. Assume
$\alpha>1$
. Fix
$\beta \in [-1,1]$
and define
$H:=\Big (\frac {1+\beta }{2}\Big )^{1/\alpha }h-\Big (\frac {1-\beta }{2}\Big )^{1/\alpha }\hat {h}.$
Then
${1}/{n^{1/\alpha }}(S_n(H)+B_n((({1+\beta })/{2})^{1/\alpha }-(({1-\beta })/{2})^{1/\alpha }))$
converges in distribution to
$S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},\beta ,0)$
.
2.4 Strategy of the proof of Theorems 2.5 and 2.6
The proof starts by writing, for $\psi \in \{h,f,g\}$,
${\mathbb {W}}_n(\psi )={\mathbb {W}}_n^{(\mathbf {S})}(\psi )+{\mathbb {W}}_n^{(\mathbf {M})}(\psi )+{\mathbb {W}}_n^{(\mathbf {L})}(\psi ),$
where
${\mathbb {W}}_n^{(\mathbf {S})}(\psi ):={\mathbb {W}}_n\Big (\sum _{k\leq ({1}/{2\alpha })\log (n)}\psi _k\Big ),\quad {\mathbb {W}}_n^{(\mathbf {M})}(\psi ):={\mathbb {W}}_n\Big (\sum _{({1}/{2\alpha })\log (n)<k\leq ({1}/{\alpha })\log (n)}\psi _k\Big ),\quad {\mathbb {W}}_n^{(\mathbf {L})}(\psi ):={\mathbb {W}}_n\Big (\sum _{k>({1}/{\alpha })\log (n)}\psi _k\Big ).$
Writing
$\|\cdot \|_\infty $
for the supremum norm, we first show that
$\| {\mathbb {W}}_n^{(\mathbf {S})}(h)\|_\infty $
and
$\| {\mathbb {W}}_n^{(\mathbf {L})}(h)\|_\infty $
converge to
$0$
in probability, hence the two processes converge to the zero function in the uniform (and consequently the
$J_1$
) topology.
Next we show that
${\mathbb {W}}_n^{(\mathbf {M})}(h)$
converges in distribution (in the
$J_1$
topology) to the correct limiting process.
Finally, we use Slutsky’s theorem, also known as the convergence together lemma, in the (Polish) Skorohod
$J_1$
topology, to deduce the weak convergence result for
${\mathbb {W}}_n(h)$
.
Lemma 2.11. Let
$A_n,B_n$
and W be
$D[0,1]$
-valued processes such that
$A_n\Rightarrow ^d0$
in the uniform topology and
$B_n\Rightarrow W$
in the
$J_1$
topology. Then
$A_n+B_n\Rightarrow ^dW$
in the
$J_1$
topology.
We remark that Lemma 2.11 follows from [2, Theorem 3.1] and the fact that the uniform topology is stronger than the
$J_1$
topology on
$D[0,1]$
.
3 Proof of Theorem 2.5
We carry out the proof strategy as stated in §2.4. In what follows
$(X,{\mathcal B},m,T)$
is an ergodic, aperiodic probability-preserving system,
$\beta \in [-1,1]$
,
$\alpha \in (0,1)$
is fixed and the functions
$f_k$
are as in Theorem 2.5.
This section has two subsections. In the first we prove results on
${\mathbb {W}}_n^{(\mathbf {S})}(f)$
,
${\mathbb {W}}_n^{(\mathbf {M})}(f)$
and
${\mathbb {W}}_n^{(\mathbf {L})}(f)$
. These results combined prove Theorem 2.5 in the totally skewed to the right (
$\beta =1$
) case. In the second subsection we show how to deduce Theorem 2.5 from these results.
3.1 Case
$\beta =1$
Lemma 3.1. We have
$\lim _{n\to \infty }m(\|{\mathbb {W}}_n^{(\mathbf {L})}(f)\|_\infty \neq 0)=0$
.
Proof. The statement follows from the inclusion
$\{\|{\mathbb {W}}_n^{(\mathbf {L})}(f)\|_\infty \neq 0\}\subset \bigcup _{k>({1}/{\alpha })\log (n)}\bigcup _{j=0}^{n-1}\{f_k\circ T^j\neq 0\}.$
Therefore,
$m(\|{\mathbb {W}}_n^{(\mathbf {L})}(f)\|_\infty \neq 0)\leq n\sum _{k>({1}/{\alpha })\log (n)}m(f_k\neq 0)\leq Cn\sum _{k>({1}/{\alpha })\log (n)}\frac {2^{-\alpha k}}{k},$
where the last inequality is from the proof of Lemma 2.4. The result now follows since
$n\sum _{k>({1}/{\alpha })\log (n)}\frac {2^{-\alpha k}}{k}\lesssim \frac {n}{\log (n)}2^{-\alpha ({1}/{\alpha })\log (n)}=\frac {1}{\log (n)}\xrightarrow [n\to \infty ]{}0.$
Lemma 3.2. The random variable
$\|{\mathbb {W}}_n^{(\mathbf {S})}(f)\|_\infty$
converges to
$0$
in measure.
Proof. Recall that for all
$k\in {\mathbb N}$
,
$f_k$
is distributed as
$Z_k(1)$
, whence
$f_k\geq 0$
and
$\|{\mathbb {W}}_n^{(\mathbf {S})}(f)\|_\infty =\frac {1}{n^{1/\alpha }}\sum _{k\leq ({1}/{2\alpha })\log (n)}S_n(f_k).$
For every
$k,j\in {\mathbb Z}$
,
$f_k\circ T^j$
is distributed as
$Z_k(1)$
and
$\mathbb {E}(f_k\circ T^j)=\mathbb {E}(Z_k(1))\leq \mathbb {E}(X_k(1)1_{[2^k\leq X_k(1)\leq 4^k]}).$
By Corollary A.2, there exists
$C>0$
such that for all
$k,j\in {\mathbb N}$
,
$\mathbb {E}(f_k\circ T^j)\leq C\sigma _k^\alpha 4^{(1-\alpha )k}=\frac {C4^{(1-\alpha )k}}{k}.$
Consequently,
$\mathbb {E}(\|{\mathbb {W}}_n^{(\mathbf {S})}(f)\|_\infty )\leq n^{1-{1}/{\alpha }}\sum _{k\leq ({1}/{2\alpha })\log (n)}\frac {C4^{(1-\alpha )k}}{k}\lesssim \frac {1}{\log (n)}\xrightarrow [n\to \infty ]{}0.$
A standard application of the Markov inequality shows that
$\|{\mathbb {W}}_n^{(\mathbf {S})}(f)\|_\infty $
converges to
$0$
in measure, concluding the proof.
The rest of this subsection is concerned with the proof of the following result for
${\mathbb {W}}_n^{(\mathbf {M})}(f)$
.
Proposition 3.3. The random variable
${\mathbb {W}}_n^{(\mathbf {M})}(f)$
converges in distribution to
${\mathbb {W}}$
, an
$S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$
Lévy motion.
For
$V\in \{X,Y,Z\}$
and
$k,n\in {\mathbb N}$
, define
$S_n(V_k):=\sum _{j=1}^{n}V_k(j).$
We introduce the following
$D[0,1]$
-valued processes on
$(\Omega ,\mathcal {F},{\mathbb P})$
:
${\mathbb {W}}_n^{(\mathbf {M})}(V)(t):=\frac {1}{n^{1/\alpha }}\sum _{({1}/{2\alpha })\log (n)<k\leq ({1}/{\alpha })\log (n)}S_{[nt]}(V_k),\quad V\in \{X,Y,Z\}.$
The reason for their definition is the following lemma.
Lemma 3.4. The random variables
${\mathbb {W}}_n^{(\mathbf {M})}(f)$
and
${\mathbb {W}}_n^{(\mathbf {M})}(Z)$
are equally distributed.
Proof. By the definition of
$f_k$
,
$\{f_k\circ T^{j-1}:\ k\in {\mathbb N},\ 1\leq j\leq 4^k\}$
and
$\{Z_k(j):\ k\in {\mathbb N}, 1\leq j\leq 4^k\}$
are equally distributed.
The function
$G_n: \prod _{k\in {\mathbb N}} {\mathbb R}^{2\cdot 4^k}\to D[0,1]$
defined for all
$0\leq t\leq 1$
and
$(x_k(j))_{\ k\in {\mathbb N},\ 1\leq j\leq 4^k}\in \prod _{k\in {\mathbb N}} {\mathbb R}^{2\cdot 4^k}$
by

is continuous.
As
$G_n((f_k\circ T^{j-1})_{\ k\in {\mathbb N},\ 1\leq j\leq 4^k})={\mathbb {W}}_n^{(\mathbf {M})}(f)$
and similarly
$G_n((Z_k(j))_{\ k\in {\mathbb N},\ 1\leq j\leq 4^k})={\mathbb {W}}_n^{(\mathbf {M})}(Z)$
, the claim follows from the continuous mapping theorem.
Using this equality of distributions, it suffices to show that
${\mathbb {W}}_n^{(\mathbf {M})}(Z)$
converges in distribution to an
$S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$
Lévy motion. This follows from the convergence together lemma (Lemma 2.11) and the following result.
Lemma 3.5. The following two properties are satisfied.
(a) The sequence of random variables $\|{\mathbb {W}}_n^{(\mathbf {M})}(X)-{\mathbb {W}}_n^{(\mathbf {M})}(Z)\|_\infty $ converges to $0$ in measure.
(b) The sequence of $D[0,1]$-valued random variables ${\mathbb {W}}_n^{(\mathbf {M})}(X)$ converges in distribution to an $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$ Lévy motion.
Proof of Lemma 3.5(a)
For every
$k,m\in {\mathbb N}$
(noting here that as
$\alpha <1$
, a skewed
$\alpha $
-stable random variable is non-negative),
$0\leq X_k(m)-Z_k(m)\leq X_k(m)1_{[X_k(m)<2^k]}+X_k(m)1_{[X_k(m)>4^k]}+4^{-k}.$
We deduce from this and the triangle inequality that

We will show that the right-hand side converges to
$0$
in probability.
Firstly
$0<\alpha <1$
, hence for all
$k>({1}/{2\alpha })\log (n)$
,
$n<4^k$
. Consequently, by Fact 2.1,

We conclude that

Secondly,

where

and

Similarly to the proof of Lemma 3.1,

showing that
$\mathrm {III}_n\xrightarrow [n\to \infty ]{}0$
in probability.
We now fix
$\alpha <r<1$
and
$\varepsilon>0$
. Note that by Corollary A.2 there exists a global constant C so that for every k and m,

By Markov’s inequality and the triangle inequality for the r’th moments,

We conclude that
$\mathrm {II}_n\xrightarrow [n\to \infty ]{}0$
in probability. Finally, we conclude the proof as we have

and each of the terms on the right-hand side converges to
$0$
in probability.
Proof of Lemma 3.5(b)
For all $0\leq t\leq 1$,
${\mathbb {W}}_n^{(\mathbf {M})}(X)(t)=\frac {1}{n^{1/\alpha }}\sum _{j=1}^{[nt]}V_n(j),$
where for $j\in {\mathbb N}$,
$V_n(j):=\sum _{({1}/{2\alpha })\log (n)<k\leq ({1}/{\alpha })\log (n)}X_k(j).$
We claim that
$V_n(1),V_n(2),\ldots ,V_n(n)$
are i.i.d.
$S_\alpha (A_n,1,0)$
random variables with
$\lim _{n\to \infty }(A_n)^\alpha =\ln (2)$
.
Indeed, since
$\alpha <1$
, we deduce that for all
$k>({1}/{2\alpha })\log (n)$
, we have
$4^{k}>n$
. The independence of
$V_n(1),V_n(2),\ldots ,V_n(n)$
readily follows from the independence of
$\{X_k(m):k\in {\mathbb N},\ m\leq 4^{k}\}$
.
Now for all
$1\leq j\leq n$
and
$k\in (({1}/{2\alpha })\log (n),({1}/{\alpha })\log (n)]$
,
$X_k(j)$
is an
$S_\alpha (\sigma _k,1,0)$
random variable with
$(\sigma _k)^\alpha =1/k$
. As
$V_n(j)$
is a sum of independent
$S_\alpha (\sigma _k,1,0)$
random variables (and
$\alpha \neq 1$
), it follows from [11, Properties 1.2.1 and 1.2.3] that
$V_n(j)$
is
$S_\alpha (A_n,1,0)$
distributed with
$(A_n)^\alpha =\sum _{({1}/{2\alpha })\log (n)<k\leq ({1}/{\alpha })\log (n)}(\sigma _k)^\alpha =\sum _{({1}/{2\alpha })\log (n)<k\leq ({1}/{\alpha })\log (n)}\frac {1}{k}\xrightarrow [n\to \infty ]{}\ln (2).$
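The appearance of $\ln (2)$ here is simply the harmonic sum over the medium range of indices, which is the reason for the choice $\sigma _k^\alpha ={1}/{k}$ in §2.1:
$\sum _{({1}/{2\alpha })\log (n)<k\leq ({1}/{\alpha })\log (n)}\frac {1}{k}=\ln \bigg (\frac {({1}/{\alpha })\log (n)}{({1}/{2\alpha })\log (n)}\bigg )+o(1)=\ln (2)+o(1).$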
We will now conclude the proof. Write
$a_n:={(\ln (2))^{1/\alpha }}/{A_n}$
and define
$W_n(t):=a_n{\mathbb {W}}_n^{(\mathbf {M})}(X)(t)$
so that
$W_n$
is the partial sum process driven by the random variables,
$a_nV_n(1),\ldots , a_nV_n(n)$
.
As the latter are i.i.d.
$S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$
random variables, this shows that
$W_n$
is equally distributed as
$ {\mathbb {W}}_n(V)$
where
$(V(j))_{j=1}^\infty $
are i.i.d.
$S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$
random variables.
By [10, Corollary 7.1],
${\mathbb {W}}_n(V)$
(and hence
$W_n$
) converges in distribution to an
$S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$
Lévy motion.
Since
${\mathbb {W}}_n^{(\mathbf {M})}(X)=(a_n)^{-1}W_n$
with
$a_n\to 1$
, we conclude that
${\mathbb {W}}_n^{(\mathbf {M})}(X)$
converges in distribution to an
$S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$
Lévy motion.
3.2 Concluding the proof of Theorem 2.5
We now fix
$\alpha \in (0,1)$
and
$\beta \in [-1,1]$
and set
$h_k$
, h as the functions from Theorem 2.5 corresponding to
$\beta $
. We claim that
$ {\mathbb {W}}_n(h) $
converges in distribution to an
$S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},\beta ,0)$
Lévy motion.
We deduce this claim from the results on the skewed
$\beta =1$
case via the following lemma.
Lemma 3.6.
(a) The sequence of $D[0,1]\times D[0,1]$-valued random variables $({\mathbb {W}}_n^{(\mathbf {S})}(f),{\mathbb {W}}_n^{(\mathbf {S})}(g))$ converges in distribution (in the uniform topology) to $(0,0)$.
(b) The sequence of $D[0,1]\times D[0,1]$-valued random variables $({\mathbb {W}}_n^{(\mathbf {M})}(f),{\mathbb {W}}_n^{(\mathbf {M})}(g))$ converges in distribution to $({\mathbb {W}},{\mathbb {W}}')$ where ${\mathbb {W}},{\mathbb {W}}'$ are independent $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$ Lévy motions.
(c) The sequence of $D[0,1]\times D[0,1]$-valued random variables $({\mathbb {W}}_n^{(\mathbf {L})}(f),{\mathbb {W}}_n^{(\mathbf {L})}(g))$ converges in distribution (in the uniform topology) to $(0,0)$.
Proof. For all
$k\in \mathbb {N}$
,
$f_k$
and
$g_k$
are equally distributed. Following the proofs of Lemmas 3.1 and 3.2 we see that
$\|{\mathbb {W}}_n^{(\mathbf {S})}(g)\|_\infty $
and
$\|{\mathbb {W}}_n^{(\mathbf {L})}(g)\|_\infty $
tend to
$0$
in probability as
$n\to \infty $
. Parts (a) and (c) follow from this and Lemmas 3.1 and 3.2.
Now for all
$n\in {\mathbb N}$
,
${\mathbb {W}}_n^{(\mathbf {M})}(f)$
and
${\mathbb {W}}_n^{(\mathbf {M})}(g)$
are independent and equally distributed. Part (b) follows from this and Proposition 3.3.
We have the following immediate corollary.
Corollary 3.7. The following three properties are satisfied:
(a) $\|{\mathbb {W}}_n^{(\mathbf {S})}(h)\|_\infty \to 0$ in measure;
(b) ${\mathbb {W}}_n^{(\mathbf {M})}(h)$ converges in distribution to an $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},\beta ,0)$ Lévy motion;
(c) $\|{\mathbb {W}}_n^{(\mathbf {L})}(h)\|_\infty \to 0$ in measure.
Proof. Set
$\varphi (x,y):=\Big (\frac {1+\beta }{2}\Big )^{1/\alpha }x-\Big (\frac {1-\beta }{2}\Big )^{1/\alpha }y$
and write
$c_\beta :=((({\beta +1})/{2})^{1/\alpha }-(({1-\beta })/{2})^{1/\alpha })$
.
For each
$\mathbf {D}\in \{\mathbf {S},\mathbf {M},\mathbf {L}\}$
, and all
$n\in {\mathbb N}$
,
${\mathbb {W}}_n^{(\mathbf {D})}(h)=\varphi ({\mathbb {W}}_n^{(\mathbf {D})}(f),{\mathbb {W}}_n^{(\mathbf {D})}(g)).$
Parts (a) and (c) follow from Lemma 3.6(a) and (c) since for all
$x,y\in {\mathbb R}$
,
$|\varphi (x,y)|\leq |x|+|y|$
.
Let
${\mathbb {W}},{\mathbb {W}}'$
be two independent
$S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$
Lévy motions. It follows that
$\widetilde {{\mathbb {W}}}:=\varphi ({\mathbb {W}},{\mathbb {W}}')$
is a process with independent increments. By [11, Property 1.2.13], for all
$s<t$
,
$\widetilde {{\mathbb {W}}}(t)-\widetilde {{\mathbb {W}}}(s)$
is
$S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)(t-s)},\beta ,0)$
distributed, whence
$\widetilde {{\mathbb {W}}}$
is an
$S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},\beta ,0)$
Lévy motion.
Since
$\varphi $
is continuous, Lemma 3.6 and the continuous mapping theorem imply that
${\mathbb {W}}_n^{(\mathbf {M})}(h)=\varphi ({\mathbb {W}}_n^{(\mathbf {M})}(f),{\mathbb {W}}_n^{(\mathbf {M})}(g))$
converges in distribution to
$\widetilde {{\mathbb {W}}}$
and the proof is concluded.
4 Proof of Theorem 2.6
Let
$\alpha \geq 1$
. The strategy of the proof goes along similar lines. However, there is a major difference in the treatment of
${\mathbb {W}}_n^{(\mathbf {S})}$
as the
$L^1$
norm does not decay to
$0$
. For this reason we resort to a more sophisticated
$L^2$
estimate and make use of the fact that for all k,
$h_k$
is a
$T^{4^{k^2}}$
coboundary.
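For later reference, we record the maximal inequality used repeatedly below (Kolmogorov's inequality, a standard fact): if $U_1,\ldots ,U_n$ are independent, zero-mean, square integrable random variables, then for every $\epsilon>0$,
${\mathbb P}\Big (\max _{1\leq j\leq n}\Big |\sum _{i=1}^{j}U_i\Big |\geq \epsilon \Big )\leq \frac {1}{\epsilon ^2}\mathrm {Var}\Big (\sum _{i=1}^{n}U_i\Big )=\frac {1}{\epsilon ^2}\sum _{i=1}^{n}\mathbb {E}(U_i^2).$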
In what follows,
$1\leq \alpha <2$
is fixed,
$h_k$
and h are as in the statement of Theorem 2.6 and the decomposition of
${\mathbb {W}}_n(h)$
to a sum of
${\mathbb {W}}_n^{(\mathbf {S})}(h)$
,
$\mathbb{W}_n^{(\mathbf{M})}(h)$
and
$\mathbb{W}_n^{(\mathbf{L})}(h)$
is as before. We write
$d_k:=4^{k^2}$
.
Lemma 4.1. We have
$\lim _{n\to \infty }m(\|{\mathbb {W}}_n^{(\mathbf {L})}(h)\|_\infty \neq 0)=0$
.
Proof. The statement follows from the inclusion
$\{\|{\mathbb {W}}_n^{(\mathbf {L})}(h)\|_\infty \neq 0\}\subset \bigcup _{k>({1}/{\alpha })\log (n)}\bigcup _{j=0}^{n-1}(\{f_k\circ T^j\neq 0\}\cup \{g_k\circ T^j\neq 0\}).$
In a similar way to the proof of Lemma 3.1, we have
$m(\|{\mathbb {W}}_n^{(\mathbf {L})}(h)\|_\infty \neq 0)\leq 2n\sum _{k>({1}/{\alpha })\log (n)}m(f_k\neq 0)\lesssim \frac {1}{\log (n)}\xrightarrow [n\to \infty ]{}0.$
As before, we also have the following lemma.
Lemma 4.2. The random variable
$\|{\mathbb {W}}_n^{(\mathbf {S})}(h)\|_\infty $
converges to
$0$
in measure.
The proof of this lemma when
$1\leq \alpha <2$
is more difficult than the analogous Lemma 3.2. It is given in §4.2.
Proposition 4.3. The sequence of
$D[0,1]$
valued random variables
${\mathbb {W}}_n^{(\mathbf {M})}(h)$
converges in distribution to
${\mathbb {W}}$
, an
$S_\alpha S(\kern -2pt\sqrt [\alpha ]{2\ln (2)})$
Lévy motion.
Assuming the previous claims, we can complete the proof of Theorem 2.6.
Proof of Theorem 2.6
By Lemmas 4.1 and 4.2,
$\|{\mathbb {W}}_n^{(\mathbf {S})}(h)+{\mathbb {W}}_n^{(\mathbf {L})}(h)\|_\infty $
converges in probability to
$0$
. The claim now follows from Proposition 4.3 and Lemma 2.11.
In the next two subsections we prove Proposition 4.3 and Lemma 4.2.
4.1 Proof of Proposition 4.3
We introduce the following
$D[0,1]$
-valued processes on
$(\Omega ,\mathcal {F},{\mathbb P})$
:
${\mathbb {W}}_n^{(\mathbf {M})}(V)(t):=\frac {1}{n^{1/\alpha }}\sum _{({1}/{2\alpha })\log (n)<k\leq ({1}/{\alpha })\log (n)}\sum _{j=1}^{[nt]}(V_k(j)-V_k(j+d_k)),\quad V\in \{X,Y,Z\}.$
The following is the analogue of Lemma 3.4 for the current case.
Lemma 4.4. The random variables
${\mathbb {W}}_n^{(\mathbf {M})}(h)$
and
${\mathbb {W}}_n^{(\mathbf {M})}(Z)$
are equally distributed.
The proof of Lemma 4.4 is similar to the proof of Lemma 3.4, with obvious modifications. We leave it to the reader. Proposition 4.3 follows from Lemma 4.4 and the following result.
Lemma 4.5. The following two properties are satisfied.
(a) The sequence of random variables $\|{\mathbb {W}}_n^{(\mathbf {M})}(X)-{\mathbb {W}}_n^{(\mathbf {M})}(Z)\|_\infty $ converges to $0$ in measure.
(b) The sequence of $D[0,1]$-valued random variables ${\mathbb {W}}_n^{(\mathbf {M})}(X)$ converges in distribution to an $S_\alpha S(\kern -2pt\sqrt [\alpha ]{2\ln (2)})$ Lévy motion.
Consequently,
${\mathbb {W}}_n^{(\mathbf {M})}(Z)$
converges in distribution to an
$S_\alpha S(\kern -2pt\sqrt [\alpha ]{2\ln (2)})$
Lévy motion.
Proof of Lemma 4.5(b)
For all $0\leq t\leq 1$,
${\mathbb {W}}_n^{(\mathbf {M})}(X)(t)=\frac {1}{n^{1/\alpha }}\sum _{j=1}^{[nt]}V_n(j),$
where for $j\in {\mathbb N}$,
$V_n(j):=\sum _{({1}/{2\alpha })\log (n)<k\leq ({1}/{\alpha })\log (n)}(X_k(j)-X_k(j+d_k)).$
We claim that for all but finitely many n,
$V_n(1),V_n(2),\ldots ,V_n(n)$
are i.i.d.
$S_\alpha S(A_n)$
random variables with
$\lim _{n\to \infty }(A_n)^\alpha =2\ln (2)$
.
For all
$n\geq 2^{4\alpha }$
, if
$k\geq ({1}/{2\alpha })\log (n)$
, we have
$d_k\geq n$
. For all such n, the independence of
$V_n(1),\ldots ,V_n(n)$
follows from the independence of
$\{X_k(j):\ k\in {\mathbb N},1\leq j\leq 2\cdot d_k\}$
. We now compute the distribution of the $V_n(j)$.
For all
$1\leq j\leq n$
and
$k>({1}/{2\alpha })\log (n)$
,
$X_k(j)-X_k\big (j+d_k\big )$
is a difference of two independent
$S_\alpha (k^{-1/\alpha },1,0)$
random variables. By [11, Properties 1.2.1 and 1.2.3], it is
$S_\alpha S(({2}/{k})^{1/\alpha })$
distributed. As
$V_n(j)$
is a sum of independent
$S_\alpha S$
random variables, we see that
$V_n(j)$
is
$S_\alpha S(A_n)$
distributed with
$(A_n)^\alpha =\sum _{({1}/{2\alpha })\log (n)<k\leq ({1}/{\alpha })\log (n)}\frac {2}{k}\xrightarrow [n\to \infty ]{}2\ln (2).$
This concludes the claim on
$V_n(1),\ldots ,V_n(n)$
. The conclusion of the statement from here is similar to the end of the proof of Lemma 3.5(b).
Proof of Lemma 4.5(a)
We assume
$n>2^{4\alpha }$
so that for all
$k>({1}/{2\alpha })\log (n)$
,
$d_k>n$
.
Firstly, since for all
$k\in {\mathbb N}$
and
$j\leq 2d_k$
,

we have

Consequently,

We now look at
${\mathbb {W}}_n^{(\mathbf {M})}(X)-{\mathbb {W}}_n^{(\mathbf {M})}(Y)$
. For all
$0\leq t \leq 1$
,

where

and

Similarly to the proof of Lemma 3.1,

Now
$ \widehat {V}(j)$
,
$1\leq j \leq n$
, are zero-mean, independent random variables. By Proposition A.4, they also have finite second moments and for all
$1\leq j\leq n$
,

It follows from Kolmogorov’s maximal inequality that for every
$\epsilon>0$
,

Here the first equality of the last line is true as
$\widehat {V}(1),\ldots ,\widehat {V}(n)$
are independent, zero-mean random variables with finite variance. This concludes the proof that

The claim now follows from (3) and the convergence in probability of
$\mathrm {II}_n,\mathrm {III}_n$
to the zero function.
4.2 Proof of Lemma 4.2
We first write
${\mathbb {W}}_n^{(\mathbf {S})}(h)={\mathbb {W}}_n^{(\mathbf {LS})}(h)+{\mathbb {W}}_n^{(\mathbf {VS})}(h),$
where
${\mathbb {W}}_n^{(\mathbf {LS})}(h)(t):=\frac {1}{n^{1/\alpha }}\sum _{\sqrt {\log (n)}<k\leq ({1}/{2\alpha })\log (n)}S_{[nt]}(h_k)\quad \text {and}\quad {\mathbb {W}}_n^{(\mathbf {VS})}(h)(t):=\frac {1}{n^{1/\alpha }}\sum _{k\leq \sqrt {\log (n)}}S_{[nt]}(h_k).$
The reason for this further decomposition is that
$d_k>n$
if and only if
$k>\sqrt {\log (n)}$
so that only in the very small
$(\mathbf {VS})$
terms do we no longer have full independence in the summands. The proof that
${\mathbb {W}}_n^{(\mathbf {LS})}(h)$
tends to the zero function is quite similar to the proof of the last part in Lemma 4.5(a) while the proof of the other term makes use of the fact that we are dealing with coboundaries.
Lemma 4.6. The random variable
$\|{\mathbb {W}}_n^{(\mathbf {LS})}(h)\|_\infty$
converges in measure to
$0$
.
Proof. Write
$\psi _n:=\sum _{\sqrt {\log (n)}<k\leq ({1}/{2\alpha })\log (n)}(f_k-f_k\circ T^{d_k}).$
We have that:
• $\psi _n\circ T^j$, $1\leq j\leq n$, are independent (since $d_k>n$ for all k in the range of summation), bounded and $\int \psi _n \,dm=0$;
• for all t, ${\mathbb {W}}_n^{(\mathbf {LS})}(h)(t)=n^{-1/\alpha }S_{[nt]}(\psi _n)$.
By Kolmogorov’s maximal inequality, for all
$\epsilon>0$
,

where the last equality follows from
$S_n(\psi _n)$
being a sum of zero mean, square integrable, independent random variables. We will now give an upper bound for
$\|\psi _n\|_2^2$
. Firstly,
$\{f_k-f_k\circ T^{d_k}:\ k>\sqrt {\log (n)}\}$
is distributed as
$\{Z_k(1)-Z_k(d_k+1):\ k>\sqrt {\log (n)}\}$
. Using in addition that for all
$k\in {\mathbb N}$
,

we observe that

Plugging this into the previous upper bound, we see that for all
$\epsilon>0$
,

proving the claim.
We now treat
${\mathbb {W}}_n^{(\mathbf {VS})}(h)$
. As before, we define
$\varphi _n:=\sum _{k\leq \sqrt {\log (n)}}h_k,$
so that for all $t\in [0,1]$, ${\mathbb {W}}_n^{(\mathbf {VS})}(h)(t)=n^{-1/\alpha }S_{[nt]}(\varphi _n)$
. It is no longer guaranteed that
$\varphi _n,\ldots ,\varphi _n\circ T^n$
are independent. For this reason we can no longer bound the maximum using the Lévy inequality and we will make use of a more general maximal inequality. The first step involves bounding the square moments of the relevant random variables, and we make repeated use of the following crude bound.
Claim 4.7. Let
$U_1,U_2,\ldots ,U_N$
be square integrable random variables. Then
$\Big \|\sum _{i=1}^{N}U_i\Big \|_2^2\leq N\sum _{i=1}^{N}\|U_i\|_2^2.$
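(A one-line justification, included for completeness: by the Cauchy–Schwarz inequality, pointwise $(\sum _{i=1}^{N}U_i)^2\leq N\sum _{i=1}^{N}U_i^2$, and taking expectations gives the claim.)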
Lemma 4.8. There exists a global constant
$C>0$
such that for all
$1\leq l<j\leq n$
and
$1\leq k\leq \sqrt {\log (n)}$
,

Proof. Let
$\mu _k:=\int f_k\,dm$
and write
$F_k:=f_k-\mu _k$
. For every
$j\leq n$
,

Consequently, for every
$1\leq l<j\leq n$
,

where

and

We will next show that there exists a constant C such that

The statement follows from this and Claim 4.7.
Recall that for all
$0\leq L\leq d_k$
and
$M\in {\mathbb N}$
,

is a sum of i.i.d. zero-mean square integrable random variables. We deduce that so long as
$L\leq d_k$
,

A similar argument as in the proof of Lemma 4.6 shows that there exists
$c>0$
so that

We conclude that there exists
$c_2>0$
such that for all
$L\leq d_k$
and
$M\in {\mathbb N}$
,

Noting that in the definition of A all terms on the right are of the form
$S_L(F_k)\circ T^M$
with
$L\leq d_k$
, we observe that

and thus

Now by Claim 4.7,

A similar argument to that for
$\|A\|_2^2$
shows that

and

This concludes the proof.
Corollary 4.9. For every
$\kappa>0$
, there exists
$C>0$
such that for all
$1\leq l<j\leq n$
,

Proof. By Claim 4.7,

Plugging in the bound of Lemma 4.8 on the right-hand side we see that there exists
$C>0$
such that

Since

the claim follows.
Lemma 4.10. The random variable
$\|{\mathbb {W}}_n^{(\mathbf {VS})}(h)\|_\infty$
converges in measure to
$0$
.
Proof. Let
$\epsilon>0$
. We have

Fix
$\kappa>0$
small enough so that
$\kappa +1<({2}/{\alpha })$
. By Corollary 4.9 and Markov’s inequality, for all
$1\leq l<j\leq n$
,

By [2, Theorem 10.2] with
$\beta =\frac {1}{2}$
and
$u_l:=\sqrt {C}n^{{\kappa }/{2}-{1}/{\alpha }}$
,

5 Skewed CLT for
$\alpha \in (1,2)$
Assume
$\alpha \in (1,2)$
and
$(f_k)_{k=1}^\infty $
are the functions from Corollary 2.3 where
$X_k(j)$
are
$S_\alpha (\kern -2pt\sqrt [\alpha ]{1/k},1,0)$
random variables and
$Z_k(j)$
is the corresponding discretization of the truncation
$Y_k(j)$
. Recall that
$D_k:=4^{\alpha k}$
,
$\varphi _k:=\frac {1}{D_k}\sum _{j=0}^{D_k-1}f_k\circ T^j,$
$h_k:=f_k-\varphi _k$
and
$h=\sum _{k=1}^\infty h_k$
. The function h is well defined by Lemma 2.7.
We aim to show that
$\frac {S_n(h)+B_n}{n^{1/\alpha }}\xrightarrow [n\to \infty ]{d}S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0),$
where

5.1 Proof of Theorem 2.8
The strategy of the proof starts with the decomposition
$S_n(h)=S_n^{(\mathbf {S})}(h)+S_n^{(\mathbf {M})}(f)+S_n^{(\mathbf {L})}(f)-V_n(\varphi ),\quad (7)$
where
$S_n^{(\mathbf {S})}(h):=\sum _{k\leq ({1}/{2\alpha })\log (n)}S_n(h_k),\quad S_n^{(\mathbf {M})}(f):=\sum _{({1}/{2\alpha })\log (n)<k\leq ({1}/{\alpha })\log (n)}S_n(f_k),\quad S_n^{(\mathbf {L})}(f):=\sum _{k>({1}/{\alpha })\log (n)}S_n(f_k)\quad \text {and}\quad V_n(\varphi ):=\sum _{k>({1}/{2\alpha })\log (n)}S_n(\varphi _k).$
Note that in deriving (7) we used that for all
$k\in {\mathbb N}$
,
$\int f_k\,dm=\int \varphi _kdm$
and that both
$\sum _{k=1}^N f_k$
and
$\sum _{k=1}^N\varphi _k$
converge in
$L^1(m)$
as
$N\to \infty $
.
The proof of Theorem 2.8 is by showing that when normalized, three of the four terms converge to
$0$
in probability and the remaining one converges in distribution to an
$S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$
random variable.
Lemma 5.1. We have

The proof of Lemma 5.1 is similar to the proof of Lemma 3.1 and is thus omitted.
Lemma 5.2. The sequence of random variables
$n^{-1/\alpha }V_n(\varphi )$
converges to
$0$
in probability.
The proof of Lemma 5.2 begins with the following easy calculation.
Fact 5.3. If
$n\leq D_k$
then

If
$D_k\leq n$
then

Since
$f_k$
and
$Z_k(1)$
are equally distributed and
$Z_k(1)\leq Y_k(1)$
, the next claim follows easily from Proposition A.4.
Claim 5.4. For every
$k\in {\mathbb N}$
,
$\mathbb {E}(f_k^2)\leq C\sigma _k^\alpha 4^{(2-\alpha )k}=\frac {C4^{(2-\alpha )k}}{k}.$
Using (8) and this claim we obtain the following lemma.
Lemma 5.5. We have
$\mathrm {Var}(V_n(\varphi ))\lesssim ({n^{2/\alpha }}/{\log (n)})$
.
Proof. For all
$k\geq ({1}/{2\alpha })\log (n)$
,
$D_k\geq n$
and (for all but finitely many n)
$D_k\leq 4^{k^2}$
. Since
$\{f_k\circ T^j:0\leq j<2\cdot 4^{k^2}\}$
is equally distributed as
$\{Z_k(j):1\leq j\leq 2\cdot 4^{k^2}\}$
, we deduce from (8) and the fact that the
$f_k$
are the functions from Corollary 2.3 that:
(a) for all $k\geq ({1}/{2\alpha })\log (n)$, $S_n(\varphi _k)$ is a sum of independent random variables;
(b) $S_n(\varphi _k),\ k\geq ({1}/{2\alpha })\log (n)$, are independent.
By item (a),

Here the last inequality follows from Claim 5.4 and
${4^{(2-\alpha )k}}/{D_k}=4^{(2-2\alpha )k}$
.
Finally, by item (b),

Applying Markov’s inequality we obtain the following corollary.
Corollary 5.6. We have
${V_n(\varphi )}/{n^{1/\alpha }}\xrightarrow [n\to \infty ]{}0$
in probability.
We now show that
$ n^{-1/\alpha }S_n^{(\mathbf {S})}(h)$
tends to
$0$
in probability. The first step is the following simple claim. Recall the notation
$F_k=f_k-\int f_k\,dm$
.
Claim 5.7. For every
$k\leq ({1}/{2\alpha })\log (n)$
,

Proof. As
$D_k\leq n$
and
$h_k=f_k-\varphi _k$
, it follows from (9) that

Lemma 5.8. The sequence of random variables
$ n^{-1/\alpha }S_n^{(\mathbf {S})}(h)$
converges to
$0$
in probability.
Proof. By Claim 5.7,

where

As
$A_n$
is a sum of independent random variables,

Noting that for all
$k\in {\mathbb N}$
,
$\mathrm {Var}(F_k)=\mathrm {Var}(f_k)$
, we deduce from the last inequality and Claim 5.4 that

Next, as
$\int A_n\,dm=0$
, it follows from Chebyshev’s inequality that for every
$\epsilon>0$
,

This shows that
$n^{-1/\alpha }A_n$
tends to
$0$
in probability.
Since
$A_n$
and
$U^n(A_n)$
are equally distributed,
$n^{-1/\alpha }U^n(A_n)$
also tends to
$0$
in probability. The claim now follows from the converging together lemma.
Proposition 5.9. The sequence of random variables
$S_n^{(\mathbf {M})}(f)$
converges in distribution to an
$S_\alpha (\sigma ,1,0)$
random variable with
$\sigma ^\alpha =\ln 2$
.
We postpone the proof of this proposition to §5.2, but if we assume it here we can now prove Theorem 2.8.
Proof of Theorem 2.8
We deduce from Lemmas 5.8, 5.1 and 5.2 that

The result now follows from (7), Corollary 5.6, Proposition 5.9 and the converging together lemma.
5.2 Proof of Proposition 5.9
The proof of this proposition goes along similar lines to the proof of Proposition 3.3, with some (rather) obvious modifications. We first define

The following result is the analogue of Lemma 3.4 for the current case.
Lemma 5.10. The random variables
$S_n^{(\mathbf {M})}(f)$
and
$S_n^{(\mathbf {M})}(Z)+B_n$
are equally distributed.
The proof of Lemma 5.10 is similar to the proof of Lemma 3.4 with obvious modifications. We leave it to the reader. Proposition 5.9 follows from Lemma 5.10 and the following result.
Lemma 5.11.
(a) The random variables ${1}/{n^{1/\alpha }}(S_n^{(\mathbf {M})}(X)-S_n^{(\mathbf {M})}(Z)-B_n)$ converge to $0$ in measure.
(b) The random variables ${1}/{n^{1/\alpha }}S_n^{(\mathbf {M})}(X)$ converge in distribution to an $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$ random variable.
Consequently, ${1}/{n^{1/\alpha }}(S_n^{(\mathbf {M})}(Z)+B_n)$ converges in distribution to an $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$ random variable.
Proof of Lemma 5.11(b)
For all
$n\in \mathbb {N}$
,
$S_n^{(\mathbf {M})}(X)$
is a sum of independent totally skewed
$\alpha $
-stable random variables. By [11, Property 1.2.1],
$n^{-1/\alpha }S_n^{(\mathbf {M})}(X)$
is
$S_\alpha (\Sigma _n,1,0)$
distributed with
$(\Sigma _n)^\alpha =\sum _{({1}/{2\alpha })\log (n)<k\leq ({1}/{\alpha })\log (n)}\frac {1}{k}\xrightarrow [n\to \infty ]{}\ln (2).$
The result follows from the fact that if $R_n$ is $S_\alpha (\Sigma _n,1,0)$ distributed and $\Sigma _n\to \kern -2pt\sqrt [\alpha ]{\ln (2)}$, then $R_n$ converges in distribution to an $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$ random variable.
Proof of Lemma 5.11(a)
We assume n is large enough so that if
$k>({1}/{2\alpha })\log (n)$
then
$n<4^{k^2}$
.
Firstly, since for all
$k\in {\mathbb N}$
and
$j\leq d_k$
,

we have

Consequently,

We now look at
$S_n^{(\mathbf {M})}(X)-S_n^{(\mathbf {M})}(Y)-B_n$
. For all n,

where

and

Similarly to the proof of Lemma 3.1,

Now
$ \widehat {V}(m)$
,
$1\leq m \leq n$
, are zero-mean, independent random variables. By Proposition A.4, they also have finite second moments and for all
$1\leq j\leq n$
,

It follows from Markov’s inequality that for every
$\epsilon>0$
,

This concludes the proof that
$\mathrm {II}_n\xrightarrow [n\to \infty ]{}0$
in probability.
The claim now follows from (11) and the convergence in probability of
$\mathrm {II}_n+\mathrm {III}_n$
to
$0$
.
5.3 Deducing Theorem 2.10 from Theorem 2.8
This is similar to the strategy and steps which were carried out in §3.2.
Recall the notation
$\hat {\varphi }_k:=(1/D_k)\sum _{j=0}^{D_k-1}g_k\circ T^j$
and
$\hat {h}_k:=g_k-\hat {\varphi }_k$
. Since for all
$k\in {\mathbb N}$
,
$\varphi _k$
and
$\hat {\varphi }_k$
are equally distributed, by mimicking the proof of Lemma 5.2 and Corollary 5.6 we obtain the following result.
Lemma 5.12. The random variables
$n^{-1/\alpha }V_n(\hat {\varphi })$
converge to
$0$
in probability.
Next we have the following analogue of Lemma 3.6.
Lemma 5.13.
(a) The random variables ${1}/{n^{1/\alpha }}(S_n^{(\mathbf {S})}(h),S_n^{(\mathbf {S})}(\hat {h}))$ converge in probability to $(0,0)$.
(b) The random variables ${1}/{n^{1/\alpha }}(S_n^{(\mathbf {M})}(f)+B_n,S_n^{(\mathbf {M})}(g)+B_n)$ converge in distribution to $(W,W')$ where $W,W'$ are independent $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$ random variables.
(c) The random variables ${1}/{n^{1/\alpha }}(S_n^{(\mathbf {L})}(f),S_n^{(\mathbf {L})}(g))$ converge in probability to $(0,0)$.
Proof. As for all k,
$h_k$
and
$\hat {h}_k$
are equally distributed, by mimicking the proof of Lemma 5.8 one proves that

Part (a) follows from this and Lemma 5.8.
The deduction of part (c) from Lemma 5.1 and its proof is similar.
Part (b) follows from Proposition 5.9 as
$S_n^{(\mathbf {M})}(f)$
and
$S_n^{(\mathbf {M})}(g)$
are independent and equally distributed.
Now fix
$\beta \in [-1,1]$
and recall that
$H=\Phi _{\beta }(h,\hat {h})$
where
$\Phi _\beta $
is the linear function defined for all
$x,y\in {\mathbb R}$
by
$\Phi _\beta (x,y):=\Big (\frac {1+\beta }{2}\Big )^{1/\alpha }x-\Big (\frac {1-\beta }{2}\Big )^{1/\alpha }y.$
Proof of Theorem 2.10
Writing

we have for all
$n\in {\mathbb N}$
,

By Lemma 5.12 and parts (a) and (c) of Lemma 5.13,
$A_n\to (0,0)$
in probability. Since
$\Phi _\beta $
is continuous with
$\Phi _\beta (0,0)=0$
, it follows that
$\Phi _\beta \big (A_n\big )$
converges to
$0$
in probability as
$n\to \infty $
.
By Lemma 5.13(b) and the continuous mapping theorem,

where
$W,W'$
are independent
$S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$
distributed random variables. By [11, Property 1.2.13],
$\Phi _\beta (W,W')$
is
$S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},\beta ,0)$
distributed.
The conclusion now follows from (13) and the converging together lemma.
Acknowledgements
The research of Z.K. was partially supported by ISF grant no. 1180/22.
A Appendix. Estimates on moments of truncated stable random variables
The following tail bound follows easily from [11, Property 1.2.15].
Proposition A.1. There exists
$C>0$
such that if Y is
$S_\alpha (\sigma ,1,0)$
distributed with
$0<\sigma \leq 1$
and
$K>1$
then
${\mathbb P}(Y\geq K)\leq C\sigma ^\alpha K^{-\alpha }.$
In a similar way to the appendix in [9], the tail bound implies the following two estimates on moments of truncated
$S_\alpha (\sigma ,1,0)$
random variables.
Corollary A.2. For every
$r>\alpha $
, there exists
$C>0$
such that if Y is
$S_\alpha (\sigma ,1,0)$
distributed with
$0<\sigma \leq 1$
and
$K>1$
,
$\mathbb {E}(|Y|^r1_{[0\leq Y\leq K]})\leq C\sigma ^\alpha K^{r-\alpha }.$
Proof. The bound follows from

Here the last inequality follows from Proposition A.1.
Corollary A.3. For every
$r<\alpha $
, there exists
$C>0$
such that if Y is
$S_\alpha (\sigma ,1,0)$
distributed with
$0<\sigma \leq 1$
as
$K\to \infty $
,
$\mathbb {E}(|Y|^r1_{[Y\geq K]})\leq C\sigma ^\alpha K^{r-\alpha }.$
The proof of Corollary A.3 is similar to the proof of Corollary A.2. The following proposition is important in the proofs of Theorems 2.6 and 2.10.
Proposition A.4. For every
$K,\sigma>0$
, if X is
$S_\alpha (\sigma ,1,0)$
distributed then
$X1_{X<K}$
is square integrable. Furthermore, there exists
$C>0$
such that for every
$S_\alpha (\sigma ,1,0)$
random variable X with
$0<\sigma \leq 1$
and
$K>1$
,
$\mathbb {E}((X1_{[X<K]})^2)\leq C\sigma ^\alpha K^{2-\alpha }.$
Proof. Let Y be an
$S_\alpha (1,1,0)$
random variable and note that
$\sigma Y$
and X are equally distributed. By [Reference Zolotarev15, Theorems 2.5.3 and 2.5.4] (see also equations (1.2.11) and (1.2.12) in [Reference Samorodnitsky and Taqqu11]),
${\mathbb P}(Y<-\unicode{x3bb} )$
decays faster than any polynomial as
$\unicode{x3bb} \to \infty $
. This implies that
$Y1_{Y<0}$
has moments of all orders and

where
$D=\mathbb {E}(Y^21_{[Y<0]})$
. Now by this and Corollary A.2, we have

The claim follows from this upper bound.