
Weak convergence of the integral of semi-Markov processes

Published online by Cambridge University Press:  06 April 2026

Andrea Pedicone*
Affiliation:
Sapienza University of Rome
Fabrizio Cinque*
Affiliation:
Sapienza University of Rome
*Postal address: Department of Statistical Sciences, Piazzale Aldo Moro 5, 00185 Rome, Italy.

Abstract

We study the asymptotic properties, in the weak sense, of regenerative processes and Markov renewal processes. For the latter, we derive both renewal-type results, also concerning the related counting process, and ergodic-type results, including the so-called $\varphi$-mixing property. This theoretical framework permits us to study the weak limit of the integral of a semi-Markov process, which can be interpreted as the position of a particle moving with finite velocities, taken for a random time according to the Markov renewal process underlying the semi-Markov one. Under mild conditions, we obtain the weak convergence to scaled Brownian motion. As a particular case, this result establishes the weak convergence of the classical generalized telegraph process.

Information

Type
Original Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Let $V = (V(t))_{t\geq 0}$ be a càdlàg stochastic process on a state space $\mathcal{V}$ . Let us define $X = (X(t))_{t \geq 0}$ , the integral of V, as

(1.1) \begin{equation}X(t) = \int_0^tV(s) \,\mathrm{d}s, \quad t \geq 0.\end{equation}

The process (1.1), under different formulations of V, includes a wide class of stochastic motions, which appear in the probabilistic and physical literature under several different names, such as telegraph-type processes, finite-velocity random motions [Reference Beghin, Nieddu and Orsingher1, Reference Cinque and Cintoli7, Reference De Gregorio10, Reference Di Crescenzo11, Reference Orsingher28, Reference Stadje and Zack34], continuous-time or directionally reinforced random walks [Reference Horvát and Shao18, Reference Mauldin, Monticino and von Weizsäcker24, Reference Meerschaert and Skarta25], and run-and-tumble processes [Reference De Bruyne, Majumdar and Schehr9, Reference Mertens, Angelani, Di Leonardo and Bocquet26]. The wide interest in telegraph-type processes also derives from their many fields of application. Indeed, these random motions are suitable for describing real movements occurring in physics, for instance bacterial dynamics [Reference Mertens, Angelani, Di Leonardo and Bocquet26], and in ecology, where they are used to model the displacements of wild animals [Reference Holmes, Lewis, Banks and Veit17]. Some interesting applications also appear in finance, where they describe stock prices or the volatility of financial markets (see [Reference Di Crescenzo and Pellerey12] and [Reference Kolesnik and Ratanov21]) as a more realistic alternative to diffusion processes, which, as is well known, and as we further investigate, are a limit case of telegraph-type processes. Our interest concerns the asymptotic behavior of X in the case where V is a semi-Markov process (see Definition 5). Under this probabilistic structure, we formalize a model for the motion of a particle moving with a semi-Markovian velocity process, meaning that it keeps a certain speed for a random time dependent on both the current velocity and the following one. Our formulation is broad enough to include all the cited processes as particular cases.

Let $(\hat{V},S)$ be the Markov renewal process related to V (see Definition 3); (1.1) can be expressed in the following equivalent form:

\begin{equation*}X(t) = \sum_{k=1}^{N(t)}\hat{V}_{k-1}(S_k-S_{k-1})+\hat{V}_{N(t)}(t-S_{N(t)}),\end{equation*}

where N is the counting process associated to S, i.e. $N(t) = \max\{k\in \mathbb{N}_0\,:\,S_k\leq t\}$ .
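To make this representation concrete, the following minimal simulation sketch accumulates the terms $\hat{V}_{k-1}(S_k-S_{k-1})$ and the final incomplete stretch. The two-state velocity set, the alternating transition matrix, and the exponential sojourn rates are assumptions chosen purely for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example: velocities -1 and +1, an alternating chain V-hat,
# and exponential sojourn times with a state-dependent rate.
V_STATES = np.array([-1.0, 1.0])
P = np.array([[0.0, 1.0], [1.0, 0.0]])  # transition matrix of V-hat
RATES = np.array([2.0, 3.0])            # sojourn rate in each state

def integral_path(t_max, state=0):
    """Evaluate X(t_max) = sum_{k=1}^{N(t)} V_{k-1}(S_k - S_{k-1})
    + V_{N(t)}(t - S_{N(t)}) along one simulated trajectory."""
    x, s = 0.0, 0.0
    while True:
        xi = rng.exponential(1.0 / RATES[state])  # sojourn S_k - S_{k-1}
        if s + xi > t_max:
            # the last, incomplete stretch of the motion
            return x + V_STATES[state] * (t_max - s)
        x += V_STATES[state] * xi
        s += xi
        state = int(rng.choice(2, p=P[state]))
```

Since the speeds are $\pm 1$ here, any sampled value of $X(t)$ is necessarily bounded by $t$ in absolute value.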

The main result of this paper is the weak convergence of a suitable normalization of $X(\lambda t)$ , with $\lambda>0$ , to a scaled Brownian motion in the space of the continuous functions $C[0,+\infty)$ endowed with the topology of the uniform metric. By defining the sequence of stochastic processes $X_\lambda = (X_\lambda(t))_{t\geq 0}$ as $X_\lambda(t) = \lambda^{-1/2}(X(\lambda t)-\lambda \theta t)$ , where $\theta = (\mathrm{E}_\pi[S_1])^{-1}\mathrm{E}_\pi[\hat{V}_0S_1]$ (with $\mathrm{E}_\pi$ standing for the expected value computed under the invariant measure $\pi$ of $\hat{V}$ ), we prove that

\begin{equation*}X_\lambda \Rightarrow \mu^{-1/2}\gamma W,\end{equation*}

where W is Brownian motion, $\mu = \mathrm{E}_\pi[S_1]$ , and

\begin{align*}\gamma^2 &= \mathrm{E}_\pi \Big[ (\hat{V}_0-\theta )^2S_1^2 \Big]+2\sum_{k\geq 1}\mathrm{E}_\pi [(\hat{V}_0-\theta)(\hat{V}_k-\theta)S_1 (S_{k+1}-S_k) ].\end{align*}

Moreover, we will also show that this latter formula can be expressed as

\begin{equation*}\gamma^2=\pi_{v_0}\mathrm{E}\left [\left (\sum_{k=1}^{\tau_1} (\hat{V}_{k-1}-\theta )(S_k-S_{k-1}) \right )^{2}\Bigg|\hat{V}_0 = v_0\right],\end{equation*}

where $\tau_1$ is the first passage time to the state $v_0$ of the Markov chain $\hat{V}$ , that is, $\tau_1 = \inf\{k\geq 1\,:\, \hat{V}_k = v_0\}$ .

Weak convergence for semi-Markov processes has also been studied in [Reference Glynn and Haas16] and [Reference Oprisan27], where a functional central limit theorem for semi-Markov processes is established by using martingale theory. Moreover, in [Reference Khorshidian19] the inhomogeneous case is considered, while in [Reference Logachov, Mogulskii, Prokopenko and Yambartsev22] a local limit theorem is derived for additive functionals of semi-Markov chains. Our approach relies on proving that the sequence $\{f(\hat{V}_{k-1},S_k-S_{k-1})\}_{k \geq 1}$ , where f is any measurable function, is $\varphi$ -mixing with $\varphi_k = K\rho^{k-1}$ , for some $K>0$ , $\rho \in [0,1)$ , and delayed regenerative with respect to the sequence of successive passage times through a fixed state of the Markov chain $\hat{V}$ . See [Reference Sigman and Wolff33] for a review of regenerative processes, and references therein for their applications.

The regenerative property permits us to derive a weak version of a renewal-type theorem concerning the related counting process, that is, for some $T>0$ ,

\begin{equation*}\lim_{n \to +\infty}\mathrm{P}\Biggl\{\sup\nolimits_{t \in [0,T]}\biggl|\frac{N(n t)}{n}-\frac{t\mathrm{E} [\tau_1 \big |\hat{V}_0 = v_0 ]}{\mathrm{E}\left [\sum_{k=1}^{\tau_1} \left (S_k-S_{k-1} \right ) \big |\hat{V}_0 = v_0 \right ]}\biggr| \geq \epsilon \Biggr\} = 0, \quad \epsilon >0.\end{equation*}

Then we prove an ergodic theorem for the sequence $\{f(\hat{V}_{k-1},S_k-S_{k-1})\}_{k \geq 1}$ :

\begin{equation*}\lim_{n\to +\infty}\frac{1}{n}\sum_{k=1}^nf (\hat{V}_{k-1},S_k-S_{k-1} ) = \mathrm{E}_\pi [f (\hat{V}_0,S_1)],\quad \text{almost surely (a.s.)},\end{equation*}

where f is any measurable function such that $\mathrm{E}_\pi[|f(\hat{V}_0,S_1)|]<+\infty$ .

Our problem has been inspired by the theory of the telegraph process (see [Reference Orsingher28]), which arises from (1.1) by setting $V(t) = V(0)(-1)^{N(t)}$ , where $N = (N(t))_{t\geq 0}$ is a Poisson process of parameter $\lambda>0$ and V(0) is uniform over $\{-c,c\}$ , with $c>0$ , and independent of N. We point out that the main focus of the research in this area is in finding the explicit distributions of telegraph-type processes [Reference Beghin, Nieddu and Orsingher1, Reference Cinque5, Reference De Gregorio10, Reference Lopez and Ratanov23], their functionals, and conditioned processes [Reference Cinque6, Reference Cinque and Orsingher8, Reference Lopez and Ratanov23, Reference Orsingher28, Reference Pedicone and Orsingher31]. By considering the equation that governs the absolutely continuous component of the law of the telegraph process,

(1.2) \begin{equation}\frac{\partial^2 u}{\partial t^2}+2\lambda \frac{\partial u}{\partial t} = c^2\frac{\partial^2u}{\partial x^2},\end{equation}

which is known as the telegraph equation, we see that, for $\lambda,c\to+\infty$ such that $c^2/\lambda\to1$ (the so-called Kac limit conditions), (1.2) becomes the heat equation. Under this behavior, the particle speed goes to infinity, as does the number of inversions, maintaining a specific ratio between the two limits. This consideration is the heuristic explanation for the result proved in [Reference Ethier and Kurtz14, p. 471], where it is shown that the telegraph process converges weakly to Brownian motion in the space $C[0,+\infty)$ . We also refer to [Reference Lopez and Ratanov23, Reference Orsingher28] for the pointwise convergence, to [Reference Cinque5] for a conditional one, and to [Reference Pedicone and Orsingher31] for the convergence of the telegraph meander.
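The Kac limit can also be observed numerically. The sketch below (an illustrative simulation under assumed parameters, not part of the original analysis) generates telegraph paths with $c = \sqrt{\lambda}$, so that $c^2/\lambda = 1$, and checks that the empirical variance of $X(1)$ approaches $\mathbb{V}[W(1)] = 1$, in line with the weak convergence to Brownian motion.

```python
import numpy as np

rng = np.random.default_rng(0)

def telegraph_positions(t, lam, c, n_paths):
    """Simulate X(t) for the telegraph process: velocity V(0)(-1)^{N(t)},
    with V(0) uniform on {-c, c} and N a Poisson process of rate lam."""
    out = np.empty(n_paths)
    for i in range(n_paths):
        s, x = 0.0, 0.0
        v = c if rng.random() < 0.5 else -c
        while True:
            xi = rng.exponential(1.0 / lam)  # exponential inter-switch time
            if s + xi > t:
                x += v * (t - s)
                break
            x += v * xi
            s += xi
            v = -v  # velocity reversal at each Poisson event
        out[i] = x
    return out

# Kac scaling: lam large with c = sqrt(lam), so c**2 / lam = 1.
lam = 100.0
pos = telegraph_positions(1.0, lam, np.sqrt(lam), n_paths=10_000)
# The empirical variance of X(1) should be close to t = 1, the Brownian value.
```

For moderate $\lambda$ the exact variance is slightly below $t$, consistent with the heuristic that finitely many reversals have occurred by time $t$.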

The weak convergence of telegraph-type processes has been studied by Ghosh et al. [Reference Ghosh, Rastegar and Roitershtein15] and Horvát and Shao [Reference Horvát and Shao18], who proved an invariant principle and related limit theorems in the case where $V(t) = \hat{V}_{k-1}\mathsf{1}_{[S_{k-1},S_k)}(t)$ , under the stronger hypothesis that $\{\hat{V}_k\}_{k\geq 0}$ is an irreducible stationary Markov chain with finite d-dimensional state space, independent of $\{S_k\}_{k\geq 0}$ , which is a renewal sequence.

This paper is organized in the following manner. Section 2 contains some preliminary limit results about sequences indexed by integer random variables and some asymptotic results concerning the regenerative processes. Section 3 provides a detailed study of the limit behavior of Markov renewal processes. Section 4 concerns the weak limit of the integral (1.1) of a semi-Markov process. Finally, in Section 5, we study the limit of the integral (1.1) of an alternating renewal process, a particular case of a semi-Markov process. As an application, we establish the weak convergence of the classical generalized telegraph process.

2. Limit theorems for regenerative processes

For a given stochastic process $\eta = (\eta(t))_{t\geq0}$ , suppose that there exists a random time $\tau$ , such that the process $\big(\eta(t+\tau);t\geq 0\big)$ has the same distribution as $\eta$ and is independent of the previous cycle $\big\{\big(\eta(t);0\leq t<\tau\big),\tau\big\}$ . Then we say that time $\tau$ is a regeneration epoch and $\eta$ regenerates at time $\tau$ . This means that the process restarts as if it were time $t = 0$ again, and its future is independent of its past. But if such a $\tau$ exists, then, since things start over again as if new, there must be a second time $\tau' > \tau$ yielding an identically distributed second cycle $\big\{(\eta(\tau+t);0\leq t<\tau'-\tau\big),\tau'-\tau\big\}$ , and so on. This probabilistic structure describes what is called a regenerative process. Next, we give the formal definition.

Definition 1. A stochastic process $\eta = (\eta(t))_{t\geq 0}$ , on a filtered probability space $(\Omega,\mathcal{G},$ $(\mathcal{F}_t)_{t\geq 0},\mathrm{P})$ , is said to be regenerative if there exists a sequence $\{\tau_m\}_{m\geq 0}$ , $\tau_0 = 0$ , of stopping times with respect to $(\mathcal{F}_t)_{t\geq 0}$ such that $\{\tau_{m+1} - \tau_{m},(\eta(t+\tau_{m});\; t \in [0,\tau_{m+1} - \tau_{m}))\}_{m\geq 0}$ forms a sequence of independent, identically distributed (i.i.d.) random elements.

The process $\eta$ is delayed regenerative if $\{\tau_{m+1} - \tau_{m},(\eta(t+\tau_{m});\; t \in [0,\tau_{m+1} - \tau_{m}))\}_{m\geq 1}$ is a sequence of i.i.d. random elements independent of $(\tau_{1},(\eta(t))_{t \in[0,\tau_{1})})$ .

According to [Reference Çinlar4, p. 298], another way to define a regenerative process is as follows.

Definition 2. A stochastic process $\eta = (\eta(t))_{t\geq 0}$ , on a filtered probability space $(\Omega,\mathcal{G},$ $(\mathcal{F}_t)_{t\geq 0},\mathrm{P})$ with state space E, is said to be regenerative provided that

  (i) there exists a sequence $\{\tau_m\}_{m\geq 0}$ , $\tau_0 = 0$ , of stopping times with respect to $(\mathcal{F}_t)_{t\geq 0}$ that forms a renewal sequence;

  (ii) for any $d,m \in \mathbb{N}$ , $0 \leq t_1 < \cdots < t_d$ , and any measurable and bounded function $g\,:\,E^d \to \mathbb{R}$ ,

    \begin{equation*}\mathrm{E}[g(\eta(t_1+\tau_m),\ldots,\eta(t_d+\tau_m))|\mathcal{F}_{\tau_m}] = \mathrm{E}[g(\eta(t_1),\ldots,\eta(t_d))].\end{equation*}

The process $\eta$ is delayed regenerative if

  (ia) there exists a sequence $\{\tau_m\}_{m\geq 0}$ , $\tau_0 = 0$ , of stopping times with respect to $(\mathcal{F}_t)_{t\geq 0}$ that forms a delayed renewal sequence;

  (iia) for any $d,m \in \mathbb{N}$ , $0\leq t_1 < \cdots < t_d$ , and any measurable and bounded function $g\,:\,E^d \to \mathbb{R}$ ,

    \begin{equation*}\mathrm{E}[g(\eta(t_1+\tau_m),\ldots,\eta(t_d+\tau_m))|\mathcal{F}_{\tau_m}] = \mathrm{E}[g(\eta(t_1+\tau_1),\ldots,\eta(t_d+\tau_1))].\end{equation*}

Loosely speaking, a stochastic process is regenerative if there is a renewal process such that the segments of the process between successive renewal times are i.i.d., while a stochastic process is delayed regenerative if the first cycle has a distribution that is different from that of subsequent cycles.

The i.i.d. structure underlying a regenerative process allows us to establish classical limit theorems for a sequence of random variables. Before doing so, we state the following two lemmas (which we will use frequently) about the convergence of randomly indexed sequences of random variables. As will be shown, a sequence indexed by a random variable has the same limit as the corresponding deterministically indexed sequence, provided that the random index is ‘asymptotically equivalent’ to the deterministic one. The first lemma establishes an invariance principle in D[0,1] (the space of real-valued càdlàg functions endowed with the Skorokhod topology) for a random walk with a random number of summands. The proof follows the argument of [Reference Billingsley3, Theorem 14.4] and is omitted.

Lemma 1. Let $\{\eta_k\}_{k\geq 1}$ be a sequence of random variables with $\mathrm{E}[\eta_k] = 0$ , $\mathrm{E}[\eta_k^2] = \sigma^2 <+\infty$ for every $k \in \mathbb{N}$ . Define the processes

\begin{align*}X_n(t) & = \frac{1}{\sqrt{n}\sigma}\sum_{k=1}^{\lfloor nt\rfloor}\eta_k, \quad t \in [0,1],\\[3pt]Y_n(t) & = \frac{1}{\sqrt{a_n\theta} \sigma}\sum_{k=1}^{\nu_{nt}}\eta_k, \quad t \in [0,1],\end{align*}

where $\theta$ is a positive constant, $\{a_n\}$ is a positive divergent sequence and $(\nu_{t})_{t \geq 0}$ is a family of integer random variables. If it holds that

\begin{equation*}\sup\nolimits_{t \in [0,1]}\biggl|\frac{\nu_{nt}}{a_n}-\theta t\biggr| \Rightarrow 0,\end{equation*}

and $X_n \Rightarrow W$ on D[0,1], then $Y_n \Rightarrow W$ on D[0,1].

Lemma 2. Let $\{\eta_k\}_{k\geq 1}$ be a sequence of identically distributed random variables with $\mathrm{E}[|\eta_1|^p]<+\infty$ for some $p>0$ . Let $(\nu_t)_{t\geq 0}$ be a family of integer random variables and suppose that there exist a positive divergent family $(a_t)_{t\geq0}$ and $\theta>0$ such that

(2.1) \begin{equation}\frac{\nu_t}{a_t} \Rightarrow \theta.\end{equation}

Then

\begin{equation*}\max_{k=1,\ldots,\nu_t}\frac{\eta_k}{\sqrt[p]{a_t}} \Rightarrow 0.\end{equation*}

Proof. For every $\epsilon>0$ and $\delta>0$ , we have that

\begin{align*}\mathrm{P} \left \{\max_{k = 1,\ldots,\nu_t}|\eta_k|>\epsilon \sqrt[p]{a_t} \right \} &\leq \mathrm{P} \left \{\max_{k = 1,\ldots,\nu_t} \left |\eta_k \right |>\epsilon\sqrt[p]{a_t}, \big|\nu_{t}-\theta^{-1}a_t \big |<\delta a_t \right \} \\[3pt]& \quad +\mathrm{P} \big \{ \big|\nu_{t}-\theta^{-1}a_t \big|\geq\delta a_t \big \} \\[3pt]&\leq \mathrm{P} \left \{\max_{k = 1,\ldots,\lfloor a_t(\theta^{-1} +\delta) \rfloor}|\eta_k|>\epsilon\sqrt[p]{a_t} \right \}+\mathrm{P} \big \{ \big|\nu_{t}-\theta^{-1}a_t \big|\geq\delta a_t \big \}\\[3pt]&\leq\frac{\left \lfloor a_t(\theta^{-1} +\delta) \right \rfloor}{\epsilon^p a_t}\mathrm{E} \left [|\eta_1|^p\mathsf{1}_{\{|\eta_1|\geq \epsilon\sqrt[p]{a_t}\}} \right ]+\mathrm{P} \big \{ \big|\nu_{t}-\theta^{-1}a_t \big|\geq\delta a_t \big \}.\end{align*}

Now, by applying the dominated convergence theorem, the first term of the previous inequality goes to zero, while the second goes to zero by means of (2.1).

The next statements concern the asymptotic behavior of regenerative sequences, showing a functional central limit theorem and a strong law of large numbers.

Theorem 1. Let $\{\eta_k\}_{k\geq 1}$ be a delayed regenerative process with regeneration epochs $\{\tau_m\}_{m\geq 0}$ , $\tau_0 = 0$ , such that $\mathrm{E}[\tau_m^2] < +\infty$ for every $m \in \mathbb{N}$ . If we define the random function

\begin{equation*}X_n(t) = \frac{1}{\sqrt{n}\sigma}\sum_{k=1}^{\lfloor nt\rfloor}(\eta_k-\mu), \quad t \in [0,1], \ n \in \mathbb{N},\end{equation*}

where

(2.2) \begin{align}\mu = \frac{\mathrm{E} \left [\sum_{k=\tau_1+1}^{\tau_2}\eta_k \right ]}{\mathrm{E} \left [\tau_2-\tau_1 \right ]}, \quad \sigma^2 = \frac{\mathrm{E} \left [\left (\sum_{k=\tau_1+1}^{\tau_2}(\eta_k-\mu)\right )^2 \right ]}{\mathrm{E} \left [\tau_2-\tau_1 \right ]},\end{align}

and if we assume that, for every $m \in \mathbb{N}$ ,

(2.3) \begin{equation}\mathrm{E}\left [\left (\sum_{k=\tau_{m-1}+1}^{\tau_{m}}|\eta_k-\mu|\right )^2\right ] <+\infty,\end{equation}

then it holds that

\begin{equation*}X_n \Rightarrow W\end{equation*}

in the space D[0,1].

Proof. Let us define

\begin{equation*}R_m = \sum_{k=\tau_{m-1}+1}^{\tau_{m}}\frac{(\eta_k-\mu)}{\sigma}, \quad m \in \mathbb{N}.\end{equation*}

By the regenerative property, $\{R_m\}_{m \geq 1}$ defines a sequence of independent random variables that are identically distributed for $m\geq 2$ . We have that $\mathrm{E}[R^2_1] <+\infty$ , $\mathrm{E}[R_m] = 0$ and $\mathrm{E}[R_m^2] =\mathrm{E}[\tau_2-\tau_1]< +\infty$ , for every $m \geq 2$ , by (2.3). Let us also denote $\nu_n = \max\{m \in \mathbb{N}_0\,:\, \tau_m \leq n\}$ . Then the process

\begin{equation*}Y_n(t) = \frac{1}{\sqrt{n}}\sum_{m=2}^{\nu_{\lfloor n t\rfloor}}R_m, \quad t \in [0,1],\end{equation*}

is a random walk with i.i.d. increments and a random number of summands. It follows that the process $X_n(t)$ can be reformulated as

\begin{equation*}X_n(t) = \frac{R_1}{\sqrt{n}}+Y_n(t) + \frac{1}{\sqrt{n}\sigma}\sum_{k=\tau_{\nu_{\lfloor n t\rfloor}}+1}^{\lfloor n t\rfloor}(\eta_k -\mu).\end{equation*}

Now, the inter-arrival times $\zeta_m = \tau_m-\tau_{m-1}$ are independent for every $m \in \mathbb{N}$ and identically distributed for $m\geq 2$ , with $\mathrm{E}[\zeta^2_m] <+\infty$ for every $m \in \mathbb{N}$ , by assumption. Consider the random walk built on $\{\zeta_k\}_{k\geq1}$ :

\begin{equation*}Z_n(t) = \frac{1}{\sqrt{n\mathbb{V}[\tau_1]}} \left (\zeta_1-\mathrm{E} \left [\tau_1 \right ] \right )+\frac{1}{\sqrt{n\mathbb{V}[\tau_2-\tau_1]}}\sum_{k=2}^{\lfloor nt\rfloor} \left (\zeta_k-\mathrm{E} \left [\tau_2-\tau_1 \right ]\right ), \quad t \in [0,1].\end{equation*}

By applying the Borel–Cantelli lemma, the first term converges almost surely to zero, and hence, by Donsker’s theorem [Reference Billingsley3, Theorem 14.1], it holds that $Z_n \Rightarrow W$ in D[0,1]. From this, with Theorem 14.6 of [Reference Billingsley3] at hand, we have that the process $\bar{\nu}_n$ ,

\begin{equation*}\bar{\nu}_n(t) = \frac{\nu_{\lfloor n t\rfloor}-\mathrm{E}[\tau_2-\tau_1]^{-1}nt}{\sqrt{n\mathbb{V}[\tau_2-\tau_1]}\mathrm{E}[\tau_2-\tau_1]^{-\frac{3}{2}}}, \quad t \in [0,1],\end{equation*}

is such that $\bar{\nu}_n \Rightarrow W$ in D[0,1]. Moreover, by the continuous mapping theorem, we obtain a uniform version of the basic renewal theorem:

(2.4) \begin{equation}\sup\nolimits_{t \in [0,1]}\biggl|\frac{\nu_{\lfloor n t\rfloor}}{n}-\frac{t}{\mathrm{E}[\tau_2-\tau_1]}\biggr| \Rightarrow 0.\end{equation}

Hence, we use, once again, Donsker’s theorem to prove that

\begin{equation*}\hat{Y}_n(t) = \frac{1}{\sqrt{n}}\sum_{m=2}^{\lfloor n t\rfloor}R_m, \quad t \in [0,1],\end{equation*}

satisfies $\hat{Y}_n\Rightarrow \sqrt{\mathrm{E}[\tau_2-\tau_1]}\,W$ ; then, by applying Lemma 1, it follows that $Y_n \Rightarrow W$ in D[0,1].

It remains to prove that

\begin{equation*}\frac{R_1}{\sqrt{n}}+\sup\nolimits_{t \in [0,1]}\left|\frac{1}{\sqrt{n}\sigma}\sum_{k=\tau_{\nu_{\lfloor n t\rfloor}}+1}^{\lfloor n t\rfloor}(\eta_k -\mu)\right| \Rightarrow 0.\end{equation*}

As before, the first term converges almost surely to zero, by the Borel–Cantelli lemma; hence, we focus on the second one. We have the following inequalities:

\begin{align*}\sup\nolimits_{t \in [0,1]}\left|\frac{1}{\sqrt{n}\sigma}\sum_{k=\tau_{\nu_{\lfloor n t\rfloor}}+1}^{\lfloor n t\rfloor}(\eta_k -\mu)\right| &\leq \sup\nolimits_{t \in [0,1]}\left(\frac{1}{\sqrt{n}\sigma}\sum_{k=\tau_{\nu_{\lfloor n t\rfloor}}+1}^{\tau_{\nu_{\lfloor n t\rfloor+1}}}|\eta_k -\mu|\right) \nonumber\\&= \max_{m = 0,\ldots,\nu_n}\biggl(\frac{1}{\sqrt{n}\sigma}\sum_{k=\tau_{m}+1}^{\tau_{m+1}}|\eta_k -\mu|\biggr).\end{align*}

Assumption (2.3) and result (2.4) permit us to apply Lemma 2, from which we determine that the last term in the previous inequality goes to zero. The proof is now concluded.

Theorem 2. Let $\{\eta_k\}_{k\geq 1}$ be a delayed regenerative process with regeneration epochs $\{\tau_m\}_{m\geq 0}$ , $\tau_0 = 0$ , such that $\mathrm{E}[\tau_m] < +\infty$ for every $m \in \mathbb{N}$ . If, for every $m \in \mathbb{N}$ ,

\begin{equation*}\mathrm{E}\left[\sum_{k=\tau_{m-1}+1}^{\tau_m}|\eta_k|\right] <+\infty,\end{equation*}

then

\begin{equation*}\lim_{n\to +\infty}\frac{1}{n}\sum_{k=1}^n\eta_k = \frac{\mathrm{E} \left [\sum_{k=\tau_1+1}^{\tau_2}\eta_k \right ]}{\mathrm{E} \left [\tau_2-\tau_1 \right ]}, \quad \text{a.s.}\end{equation*}

Proof. By assumption, $\{\nu_n\}_{n\geq 0}$ (as defined in the proof of Theorem 1) forms a delayed renewal process. Then the basic renewal theorem implies

(2.5) \begin{equation}\lim_{n\to+\infty}\frac{\nu_n}{n} = \frac{1}{\mathrm{E}[\tau_2-\tau_1]}, \quad \text{a.s.}\end{equation}

Now, the scaled partial sum of $\{\eta_k\}_{k\geq 1}$ can be rewritten as

(2.6) \begin{equation}\frac{1}{n}\sum_{k=1}^n\eta_k = \frac{1}{n}\sum_{k=1}^{\tau_{1}}\eta_k + \frac{1}{n}\sum_{m=2}^{\nu_n}\sum_{k=\tau_{m-1}+1}^{\tau_{m}}\eta_k+\frac{1}{n}\sum_{k=\tau_{\nu_n}+1}^n\eta_k.\end{equation}

By the regenerative property, $\{\sum_{k=\tau_{m-1}+1}^{\tau_{m}}\eta_k\}_{m\ge1}$ is a sequence of independent random variables that are identically distributed for $m\geq 2$ . By the inequality

\begin{equation*}\left |\frac{1}{n}\sum_{k=\tau_{\nu_n}+1}^n\eta_k\right| \leq \frac{1}{n}\sum_{k=\tau_{\nu_n}+1}^{\tau_{\nu_n+1}}|\eta_k|\end{equation*}

and the first Borel–Cantelli lemma, together with (2.5), we obtain that the first and third terms in (2.6) converge almost surely to zero. Finally, the strong law of large numbers with random index and (2.5) yield the result.
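As a quick numerical illustration of Theorem 2, consider the simplest regenerative setting: an ergodic two-state Markov chain, with successive visits to a fixed state playing the role of the regeneration epochs $\tau_m$. In the sketch below (the chain and the function f are hypothetical choices for illustration), the path average of $\eta_k = f(\hat{V}_k)$ converges to the cycle ratio on the right-hand side, which here coincides with the stationary expectation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical ergodic two-state chain; successive visits to state 0 act as
# the regeneration epochs tau_m, and eta_k = f(V_k) with f the identity.
P = np.array([[0.7, 0.3], [0.4, 0.6]])
pi1 = 3.0 / 7.0  # stationary probability of state 1, solving pi P = pi

n = 200_000
v, total = 0, 0
for _ in range(n):
    v = 1 if rng.random() < P[v, 1] else 0  # one transition of the chain
    total += v

avg = total / n
# Theorem 2: avg -> E[cycle sum] / E[cycle length], here the stationary mean.
```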

3. Limit theorems for Markov renewal processes

In this section, we briefly recall the definition and the main properties of Markov renewal processes. These processes can be thought of as a model for the motion of a particle switching from one state to another with random sojourn times in between; the successive states visited form a Markov chain, and the distribution of the sojourn time depends on both the current state and the next state to be entered. Thus, a Markov renewal process generalizes a Markov process by allowing sojourn times that are not necessarily exponentially distributed and may also depend on the next state.

Definition 3. Let $(\Omega,\mathcal{G},\mathrm{P})$ be a probability space and $(\hat{V},S) = \{\hat{V}_k,S_k\}_{k\geq 0}$ be a stochastic process on it, where $\hat{V}_k$ takes values in a countable set $\mathcal{V}$ and $S_k$ are random variables, such that $S_{k+1} > S_{k}$ for every $k \in \mathbb{N}_0$ and $S_0 = 0$ . Denote by $\mathcal{F} = \{\mathcal{F}_k\}_{k \geq 0}$ the natural filtration associated to the process $(\hat{V},S)$ . Then $(\hat{V},S)$ is said to be a Markov renewal process with state space $\mathcal{V}$ , provided that, for $v \in \mathcal{V}$ , $t\geq 0$ , $k \in \mathbb{N}_0$ ,

(3.1) \begin{equation}\mathrm{P} \{\hat{V}_{k+1} = v,S_{k+1}-S_{k} \leq t |\mathcal{F}_{k} \} = \mathrm{P} \{\hat{V}_{k+1} = v,S_{k+1}-S_{k} \leq t|\hat{V}_{k} \}.\end{equation}

We always assume that the Markov renewal process $(\hat{V},S)$ is time-homogeneous. That is, for any $w,v \in \mathcal{V}$ , $t \geq 0$ ,

\begin{equation*}\mathrm{P} \{\hat{V}_{k+1} = w,S_{k+1}-S_{k} \leq t |\hat{V}_{k}=v \} = Q_{vw}(t).\end{equation*}

From the definition, it follows that $\{\hat{V}_n\}_{n\geq 0}$ is a Markov chain with state space $\mathcal{V}$ and transition matrix $P = \{p_{vw};\; v,w \in \mathcal{V}\}$ , where

\begin{equation*}p_{vw} = \lim_{t \to +\infty}Q_{vw}(t).\end{equation*}

The family of probabilities $Q = (Q_{wv}(t);\ w,v \in \mathcal{V}, t \geq 0)$ is called a semi-Markov kernel over $\mathcal{V}$ . Moreover, we define the following quantities:

\begin{align*}F_{wv}(t) & = \mathrm{P} \{S_{k+1}-S_{k} \leq t |\hat{V}_{k+1} = v,\hat{V}_{k} = w \} = \frac{Q_{wv}(t)}{p_{wv}},\\[3pt]Q_{w \bullet}(t) & = \mathrm{P} \{S_{k+1}-S_{k} \leq t |\hat{V}_{k} = w \} = \sum_{v \in \mathcal{V}}Q_{wv}(t),\end{align*}

and we use the notation $F_{wv}(\mathrm{d}t)$ and $Q_{w\bullet}(\mathrm{d}t)$ for the associated measures.

Let $N = (N(t))_{t \geq 0}$ be the counting process associated with S, namely,

(3.2) \begin{equation}N(t) = \max\{k\in \mathbb{N}_0\,:\, S_k \leq t\}.\end{equation}

The inter-arrival times of N are denoted $\xi_k = S_{k}-S_{k-1}$ , $\xi_0 = 0$ . Then, by (3.1) it follows that, for any $n \in \mathbb{N}$ , the random variables (r.v.s) $\xi_1,\ldots,\xi_n$ are conditionally independent, given $\hat{V}_0,\ldots,\hat{V}_n$ , with the distribution of $\xi_k$ depending only on $\hat{V}_k$ and $\hat{V}_{k-1}$ for $k=1,\ldots,n$ . Indeed, for any $t_1,\ldots,t_n \in [0,+\infty)$ ,

(3.3) \begin{equation}\mathrm{P} \{\xi_1\leq t_1,\ldots,\xi_n\leq t_n |\hat{V}_0,\ldots,\hat{V}_n \} = \prod_{k=1}^n\mathrm{P} \{\xi_k\leq t_k |\hat{V}_k,\hat{V}_{k-1} \}.\end{equation}
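A Markov renewal process satisfying (3.1) and (3.3) can be generated sequentially: draw the next state from the transition matrix P, then draw the sojourn time from a law indexed by both the current and the next state. The following sketch does this for a hypothetical two-state kernel with exponential sojourn distributions (all parameter values are assumptions for the example).

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical two-state semi-Markov kernel: next state drawn from P, and
# sojourn xi_k drawn from Exp(RATE[w, v]), a law depending on BOTH the
# current state w and the next state v, consistent with (3.3).
P = np.array([[0.2, 0.8], [0.5, 0.5]])
RATE = np.array([[1.0, 2.0], [3.0, 4.0]])

def markov_renewal(n_steps, v0=0):
    """Return the visited states V_0,...,V_n and renewal epochs S_0,...,S_n."""
    states, epochs = [v0], [0.0]
    for _ in range(n_steps):
        w = states[-1]
        v = int(rng.choice(2, p=P[w]))            # next state from P
        xi = rng.exponential(1.0 / RATE[w, v])    # sojourn given (w, v)
        states.append(v)
        epochs.append(epochs[-1] + xi)
    return np.array(states), np.array(epochs)

states, epochs = markov_renewal(1000)
```

By construction $S_0 = 0$ and the epochs are strictly increasing, as required in Definition 3.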

The definition of the Markov renewal process implies that the sequence $\{\hat{V}_k,\xi_k\}_{k\geq 0}$ forms a bivariate Markov process with state space $E = \mathcal{V}\times [0,+\infty)$ , in which the future depends on the past only through the current state of $\hat{V}$ . As a consequence of the strong Markov property, it holds that, for any stopping time $\tau$ with respect to $\mathcal{F}$ ,

(3.4) \begin{equation}\mathrm{P} \{\hat{V}_{k+1+\tau} = v,S_{k+1+\tau}-S_{k+\tau} \leq t |\mathcal{F}_{\tau} \} = \mathrm{P} \{\hat{V}_{k+1+\tau} = v,S_{k+1+\tau}-S_{k+\tau} \leq t |\hat{V}_{\tau} \}.\end{equation}

We recall a property that will be used frequently in what follows. If $\hat{V}$ is an irreducible Markov chain with finite state space $\mathcal{V}$ , the first passage time to the state $v_0 \in \mathcal{V}$ ,

\begin{equation*}\tau_1 = \inf \{k\geq 1\,:\, \hat{V}_k = v_0 \},\end{equation*}

satisfies

(3.5) \begin{equation}\mathrm{P}\{\tau_1\geq n\}\leq k_1 \mathrm{e}^{-k_2n},\end{equation}

for every $n \in \mathbb{N}$ and for some positive constants $k_1,k_2$ .
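The geometric tail bound (3.5) is easy to check by simulation. The sketch below (a hypothetical irreducible three-state chain chosen only for illustration) samples the first passage time to $v_0 = 0$ and inspects its empirical tail, which decays exponentially fast; in particular, all moments of $\tau_1$ are finite.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical irreducible three-state chain; v0 = 0 is the target state.
P = np.array([[0.1, 0.6, 0.3],
              [0.5, 0.2, 0.3],
              [0.4, 0.4, 0.2]])

def first_passage(v_start=1, v0=0):
    """Sample tau_1 = inf{k >= 1 : V_k = v0} for the chain above."""
    v, k = v_start, 0
    while True:
        v = int(rng.choice(3, p=P[v]))
        k += 1
        if v == v0:
            return k

samples = np.array([first_passage() for _ in range(20_000)])
# Empirical survival function P(tau_1 >= n); geometric-type decay as in (3.5).
tail = np.array([(samples >= n).mean() for n in range(1, 15)])
```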

We now study the properties of the sequence of random variables $\{\hat{V}_{k-1},\xi_k\}_{k\geq 1}$ obtained from the Markov renewal process $(\hat{V},S)$ . It is well known that an irreducible positive recurrent Markov process forms a regenerative process with respect to the sequence of successive passage times on a fixed state. As the next theorem shows, this property is inherited by the sequence $\{\hat{V}_{k-1},\xi_k\}_{k\geq 1}$ .

Theorem 3. Let $(\hat{V},S)$ be a Markov renewal process on a finite state space $\mathcal{V}$ and assume that the Markov chain $\hat{V}$ is irreducible. Fix a state $v_0 \in \mathcal{V}$ and define the sequence of random variables $\{\tau_m\}_{m \geq 0}$ , where $\tau_0 = 0$ :

(3.6) \begin{equation}\tau_m = \inf \{k>\tau_{m-1}\,:\, \hat{V}_k = v_0 \}.\end{equation}

Then the sequence of random variables $\{f(\hat{V}_{k-1},\xi_k)\}_{k \geq 1}$ , where $f\,:\,E \to \mathbb{R}$ is any $\mathcal{B}(E)$ -measurable function, is delayed regenerative with respect to the stopping times $\{\tau_m\}_{m\geq 0}$ associated to $\mathcal{F}$ . In particular, for any $d,m \in \mathbb{N}$ , $1 \leq k_1 < \cdots < k_d$ and any measurable and bounded function $g\,:\,E^d \to \mathbb{R}$ ,

(3.7) \begin{eqnarray}&&\mathrm{E} [g (\hat{V}_{k_1-1+\tau_m},\xi_{k_1+\tau_m},\ldots,\hat{V}_{k_d-1+\tau_m},\xi_{k_d+\tau_m} ) |\mathcal{F}_{\tau_m} ] \nonumber\\[5pt]&&\qquad= \mathrm{E} [g (\hat{V}_{k_1-1},\xi_{k_1},\ldots,\hat{V}_{k_d-1},\xi_{k_d} ) |\hat{V}_0 = v_0 ].\end{eqnarray}

Proof. The sequence of random variables $\{\tau_m\}_{m \geq 0}$ , by construction, forms a sequence of stopping times with respect to $\mathcal{F}$ , the filtration associated to $(\hat{V},S)$ . Moreover, these variables represent the times of successive visits of the Markov chain $\hat{V}$ to state $v_0$ and so they form a delayed renewal sequence (since $\hat{V}_0$ may be different from $v_0$ ). Therefore, we need only show that condition (iia) of Definition 2 is satisfied. To this end, it is equivalent to prove that, for every $d,m \in \mathbb{N}$ , $1 \leq k_1 < \cdots < k_d$ ,

(3.8) \begin{eqnarray}&&\mathrm{E} [\mathsf{1}_A (\hat{V}_{k_1-1+\tau_m},\xi_{k_1+\tau_m},\ldots,\hat{V}_{k_d-1+\tau_{m}},\xi_{k_d+\tau_{m}} ) |\mathcal{F}_{\tau_m} ] \nonumber\\[5pt]&&\qquad=\mathrm{E} [\mathsf{1}_A (\hat{V}_{k_1-1+\tau_1},\xi_{k_1+\tau_1},\ldots,\hat{V}_{k_d-1+\tau_{1}},\xi_{k_d+\tau_{1}} ) ],\end{eqnarray}

for every set $A = \mathsf{X}_{r=1}^{d} H_r \times I_r$ , where $H_r \subset \mathcal{V}$ and $I_r \subset [0,+\infty)$ , $r=1,\ldots,d$ . By using (3.4) and the time homogeneity of the Markov renewal process, the left-hand side of (3.8) becomes

(3.9) \begin{align}\mathrm{E} & \big [\mathsf{1}_{A} \big (\hat{V}_{k_1-1+\tau_m},\xi_{k_1+\tau_{m}},\ldots,\hat{V}_{k_d-1+\tau_{m}},\xi_{k_d+\tau_{m}} \big ) \big |\mathcal{F}_{\tau_m} \big ]\nonumber\\[3pt]&=\mathrm{E} \big [\mathsf{1}_{H_1} \big (\hat{V}_{k_1-1+\tau_m} \big ) \big |\hat{V}_{\tau_m} \big ]\!\!\!\sum_{v_1,\ldots,v_d \in \mathcal{V}}\prod_{p = 1}^{d-1}\mathrm{E} \big [\mathsf{1}_{\{v_p\}\times I_p} \big (\hat{V}_{k_p+\tau_m},\xi_{k_p+\tau_m} \big ) \big |\hat{V}_{k_{p}-1+\tau_m}\in H_{p} \big ]\nonumber\\[3pt]& \quad \times\mathrm{E} \big [\mathsf{1}_{H_{p+1}} \big (\hat{V}_{{k_{p+1}}-1+\tau_m} \big ) \big |\hat{V}_{k_{p}+\tau_m}=v_{p} \big ]\mathrm{E} \big [\mathsf{1}_{\{v_d\}\times I_d} \big (\hat{V}_{k_d+\tau_m},\xi_{k_d+\tau_m} \big ) \big |\hat{V}_{k_{d}-1+\tau_m}\in H_d \big ]\nonumber\\[3pt]&=\mathrm{E} \big [\mathsf{1}_{H_1} \big (\hat{V}_{k_1-1} \big ) \big |\hat{V}_0=v_0 \big]\nonumber\\[3pt]& \quad \times\sum_{v_1,\ldots,v_d \in \mathcal{V}}\prod_{p = 1}^{d-1}\mathrm{E} \big [\mathsf{1}_{\{v_p\}\times I_p} \big (\hat{V}_{k_p},\xi_{k_p} \big ) \big |\hat{V}_{k_{p}-1}\in H_{p} \big ]\mathrm{E} \big [\mathsf{1}_{H_{p+1}} \big (\hat{V}_{{k_{p+1}}-1} \big ) \big |\hat{V}_{k_{p}}=v_{p} \big ]\nonumber\\[3pt]& \quad \times\mathrm{E} \big [\mathsf{1}_{\{v_d\}\times I_d} \big (\hat{V}_{k_d},\xi_{k_d} \big ) \big |\hat{V}_{k_{d}-1}\in H_d \big ]\nonumber\\[3pt]&=\mathrm{E}\Bigg [\mathsf{1}_{H_1} \big (\hat{V}_{k_1-1} \big )\prod_{p = 1}^{d-1}\mathsf{1}_{\mathcal{V}\times I_p} \big (\hat{V}_{k_p},\xi_{k_p} \big )\mathsf{1}_{H_{p+1}} \big (\hat{V}_{{k_{p+1}}-1} \big )\mathsf{1}_{\mathcal{V}\times I_d} \big (\hat{V}_{k_d},\xi_{k_d} \big )\bigg|\hat{V}_0=v_0\Bigg ]\nonumber\\[3pt]&=\mathrm{E} \big [\mathsf{1}_A \big (\hat{V}_{k_1-1},\xi_{k_1},\ldots,\hat{V}_{k_d-1},\xi_{k_d} \big ) \big |\hat{V}_0 =v_0 \big]. \end{align}

Turning to the right-hand side of (3.8), since, for every $m \in \mathbb{N}$ , the event $\{\hat{V}_{\tau_m} = v_0\}$ has probability one, applying once again (3.4) and the time homogeneity of the Markov renewal process yields

(3.10) \begin{align}\mathrm{E}& \big [\mathsf{1}_A(\hat{V}_{k_1-1+\tau_1},\xi_{k_1+\tau_1},\ldots,\hat{V}_{k_d-1+\tau_{1}},\xi_{k_d+\tau_{1}}) \big ]\nonumber\\[3pt]& \quad = \mathrm{E} \big [\mathsf{1}_A(\hat{V}_{k_1-1+\tau_1},\xi_{k_1+\tau_1},\ldots,\hat{V}_{k_d-1+\tau_{1}},\xi_{k_d+\tau_{1}}) \big |\hat{V}_{\tau_1} \big ]\nonumber\\[3pt]& \quad =\mathrm{E} \big [\mathsf{1}_A(\hat{V}_{k_1-1},\xi_{k_1},\ldots,\hat{V}_{k_d-1},\xi_{k_d}) \big |\hat{V}_0=v_0 \big]\!.\end{align}

In conclusion, from (3.9) and (3.10), we obtain (3.8), which completes the proof.

We observe that, from (3.7), it follows that, for any $d \in \mathbb{N}$ , $m \in \mathbb{N}_0$ , and for any bounded measurable function $g\,:\,E^{d}\to \mathbb{R}$ ,

(3.11) \begin{align}\mathrm{E} &\big [g \big (\hat{V}_{\tau_{m}},\xi_{\tau_{m}+1},\ldots,\hat{V}_{\tau_{m+1}-1},\xi_{\tau_{m+1}} \big )\mathsf{1}_{\{d\}} \big (\tau_{m+1}-\tau_{m} \big ) \big ] \nonumber\\[3pt]&\quad=\mathrm{E} \big [g \big (\hat{V}_{0},\xi_{1},\ldots,\hat{V}_{\tau_{1}-1},\xi_{\tau_{1}} \big )\mathsf{1}_{\{d\}} \big (\tau_{1} \big ) \big |\hat{V}_0 = v_0 \big]\!.\end{align}

If we assume that the finite Markov chain $\hat{V}$ is irreducible and aperiodic, then there exists a unique invariant measure $\pi= \{\pi_v; \;\; v \in \mathcal{V}\}$ , such that $\pi_{v} > 0$ for every $v \in \mathcal{V}$ . The next theorem shows that the n-step transition probability given the length of the first sojourn time, $p^{(n)}_{t;wv}= \mathrm{P}\{\hat{V}_n = v|\hat{V}_0 = w, S_1 = t\}$ , converges to the invariant measure $\pi$ at an exponential rate.

Theorem 4. Let $(\hat{V},S)$ be a Markov renewal process on a finite state space $\mathcal{V}$ and assume that the Markov chain $\hat{V}$ is irreducible and aperiodic. Then the following holds:

\begin{equation*}\big |p^{(n)}_{t;\;wv} - \pi_v \big| \leq c\rho^{n-1},\end{equation*}

where $c > 0$ and $\rho \in [0,1)$ .

Proof. The hypotheses of [Reference Billingsley2, Theorem 8.9] are satisfied; therefore, there exist $c > 0$ , $\rho \in [0,1)$ such that

(3.12) \begin{equation}\big |p^{(n)}_{wv} - \pi_v \big| \leq c\rho^{n},\end{equation}

where $p^{(n)}_{wv}$ is the probability of transition in n steps of the Markov chain $\hat{V}$ . Then we have

\begin{align*}\big |p^{(n)}_{t;wv}-\pi_v \big |& \leq \big |p^{(n)}_{wv}-\pi_v \big | + \frac{p^{(n)}_{wv}}{Q_{w\bullet}(\mathrm{d}t)} \big |\mathrm{P} \big \{S_1 \in \mathrm{d}t \big |\hat{V}_{0} = w,\hat{V}_{n} = v \big \}-Q_{w\bullet}(\mathrm{d}t) \big | \\[3pt]&= \big |p^{(n)}_{wv}-\pi_v \big | + \frac{1}{Q_{w\bullet}(\mathrm{d}t)} \left |\sum_{v' \in \mathcal{V}}p_{wv'}F_{wv'}(\mathrm{d}t) \left (p^{(n-1)}_{v'v}-p^{(n)}_{wv} \right ) \right |\leq3c\rho^{n-1},\end{align*}

where, in the last step, we made use of the triangle inequality and (3.12).
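The exponential rate in Theorem 4 rests on the geometric ergodicity bound (3.12) for the embedded chain. As a minimal numerical sketch (the two-state transition matrix below is an illustrative choice, not taken from the paper), the geometric decay can be observed directly from matrix powers; for a two-state chain the contraction factor equals the modulus of the second eigenvalue of the matrix.

```python
# Sketch of the geometric bound (3.12): |p^(n)_{wv} - pi_v| <= c * rho^n.
# The 2x2 transition matrix P and its invariant distribution pi are
# illustrative choices, not taken from the paper.
P = [[0.3, 0.7],
     [0.6, 0.4]]
pi = [6 / 13, 7 / 13]   # solves pi P = pi

def matmul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Pn = [[1.0, 0.0], [0.0, 1.0]]   # P^0 = identity
gaps = []
for n in range(1, 11):
    Pn = matmul(Pn, P)          # Pn = P^n
    gaps.append(max(abs(Pn[w][v] - pi[v]) for w in range(2) for v in range(2)))

# For a two-state chain the gap contracts by exactly rho = |second
# eigenvalue of P| = |0.3 + 0.4 - 1| = 0.3 at every step.
ratios = [gaps[n + 1] / gaps[n] for n in range(len(gaps) - 1)]
print(gaps[0], ratios[0])
```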

From Theorem 4, we have that, for every Borel set $A \subset [0,+\infty)$ , $v,w \in \mathcal{V}$ and $t \in [0,+\infty)$ ,

\begin{align*}\lim_{n\to +\infty}\mathrm{P} \{\hat{V}_{n-1} = v,\xi_n \in A |\hat{V}_0 = w,\xi_1 = t \} = \pi_v\int_A Q_{v\bullet}(\mathrm{d} x).\end{align*}

We now prove an ergodic theorem for the sequence $\{\hat{V}_{k-1},\xi_k\}_{k\geq 1}$ . Let $E^{\infty}$ be the product space of E. An element of $E^{\infty}$ can be considered as an infinite sequence:

\begin{equation*}(v,x) = \left \{ \left (Z_0(v),Z'_0(x) \right ), \left (Z_1(v),Z'_1(x) \right ),\ldots \right \}\!,\end{equation*}

where $Z_k\,:\,E^{\infty}\to \mathcal{V}$ and $Z'_k\,:\,E^{\infty}\to [0,+\infty)$ , $k \in \mathbb{N}_0$ , are the natural projection functions. A cylinder of rank n is a set of $E^{\infty}$ of the form

\begin{equation*}C_n = \left \{(v,x)\,:\, \left (Z_0(v),Z'_0(x) \right ) \in \{v_1\}\times I_1,\ldots, \left (Z_{n-1}(v),Z'_{n-1}(x) \right ) \in \{v_n\}\times I_n \right \},\end{equation*}

where $v_1,\ldots,v_n \in \mathcal{V}$ and $I_1,\ldots,I_n$ are intervals of $[0,+\infty)$ . Let $\mathcal{C}$ be the $\sigma$ -field generated by the class of cylinders of all ranks. Throughout this paper, we use the notation $\mathrm{E}_{\pi}$ for the expected value computed when the distribution of $\hat{V}_0$ is the invariant one.

Theorem 5. (Ergodic theorem) Let $(\hat{V},S)$ be a Markov renewal process on a finite state space $\mathcal{V}$ and assume that the Markov chain $\hat{V}$ is irreducible and aperiodic with invariant distribution $\pi$ . Let $f\;:\;E\to \mathbb{R}$ be any $\mathcal{B}(E)$ -measurable function such that

\begin{equation*}\sum_{v \in \mathcal{V}}\int_{0}^{+\infty}|f(v,x)|\pi_v Q_{v\bullet}(\mathrm{d}x) < +\infty.\end{equation*}

Then,

\begin{equation*}\lim_{n\to +\infty}\frac{1}{n}\sum_{k=1}^nf (\hat{V}_{k-1},S_k-S_{k-1} ) = \mathrm{E}_\pi [f(\hat{V}_0,S_1)], \quad \text{a.s.}\end{equation*}

Proof. Let $T\;:\;E^{\infty}\to E^{\infty}$ be the shift operator defined on the measurable space $(E^{\infty},\mathcal{C})$ . That is,

\begin{equation*}\left (Z_k(T(v,x)),Z'_k(T(v,x))\right ) = \left (Z_{k+1}(v),Z'_{k+1}(x)\right)\!,\end{equation*}

for every $k \in \mathbb{N}_0$ . Define $\zeta\;:\;\Omega \to E^{\infty}$ by

\begin{equation*}\left (Z_{k-1}(\zeta(\omega)),Z'_k(\zeta(\omega))\right ) = \left (\hat{V}_{k-1}(\omega),\xi_k(\omega)\right)\!, \quad k \in \mathbb{N}.\end{equation*}

If $\pi$ is chosen as the initial distribution, then the sequence $\{(\hat{V}_{k-1},\xi_k)\}_{k\geq 1}$ is stationary and hence the shift T preserves $\mathrm{P}\circ \zeta^{-1}$ , the distribution of $\{(\hat{V}_{k-1},\xi_k)\}_{k\geq 1}$ . Now let A and B be two cylinder sets of rank k and r, respectively: $A = \{(v,x)\;:\;(Z_0(v),Z'_1(x)) \in \{v_0\}\times I_1,\ldots,(Z_{k-1}(v),Z'_k(x)) \in \{v_{k-1}\}\times I_k\}$ and $B = \{(v,x)\;:\;(Z_0(v),Z'_1(x)) \in \{w_0\}\times J_1,\ldots, (Z_{r-1}(v),Z'_r(x)) \in \{w_{r-1}\}\times J_r\}$ , and set $I = I_1\times\ldots\times I_k$ , $J=J_1\times\ldots\times J_r$ . Then

\begin{eqnarray*}&&\mathrm{P}\circ \zeta^{-1} \{A\cap T^{-(k+n-1)}(B) \}\\[3pt]&&\qquad =\pi_{v_0}\int_I\int_J\prod_{i=1}^{k-1}Q_{v_{i-1}v_{i}}(\mathrm{d}t_i)Q_{v_{k-1}\bullet}(\mathrm{d}t_k)p^{(n)}_{t_k;v_{k-1}w_0}\prod_{j=1}^{r-1}Q_{w_{j-1}w_{j}}(\mathrm{d}s_j)Q_{w_{r-1}\bullet}(\mathrm{d}s_r).\end{eqnarray*}

By the result in Theorem 4, it follows that

\begin{equation*}\lim_{n\to+\infty}\mathrm{P}\circ \zeta^{-1} \{A\cap T^{-(k+n-1)}(B) \} = \mathrm{P}\circ \zeta^{-1}\{A\}\mathrm{P}\circ \zeta^{-1}\{B\},\end{equation*}

that is, the shift operator is mixing and therefore ergodic under $\mathrm{P}\circ\zeta^{-1}$ (see [Reference Walters35, Theorem 1.17]); the claim then follows by applying the Birkhoff ergodic theorem.
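As a quick illustration of Theorem 5, the following Python sketch simulates a hypothetical two-state Markov renewal process (embedded chain on $\{-1,+1\}$ , exponential sojourn times whose rate depends on the current state; all parameters are illustrative, not from the paper) and compares the ergodic average of $f(\hat{V}_{k-1},\xi_k)=\hat{V}_{k-1}\xi_k$ with $\mathrm{E}_\pi[f(\hat{V}_0,S_1)]$ .

```python
import random

# Ergodic average for a hypothetical two-state Markov renewal process:
# embedded chain on {-1, +1}, exponential sojourn rates depending on the
# current state (all parameters are illustrative, not from the paper).
random.seed(1)
P = {-1: 0.3, +1: 0.6}          # probability of jumping to state -1
rate = {-1: 1.0, +1: 2.0}       # sojourn rate while in each state

def step(v):
    """One transition of the embedded chain V-hat."""
    return -1 if random.random() < P[v] else +1

def ergodic_average(f, n, v0=-1):
    """(1/n) * sum_{k=1}^{n} f(V-hat_{k-1}, xi_k)."""
    v, total = v0, 0.0
    for _ in range(n):
        xi = random.expovariate(rate[v])    # sojourn time in state v
        total += f(v, xi)
        v = step(v)
    return total / n

# Invariant distribution of the embedded chain: pi = (6/13, 7/13).
pi = {-1: 6 / 13, +1: 7 / 13}
# With f(v, x) = v * x: E_pi[f(V0, S1)] = sum_v pi_v * v / rate_v = -5/26.
target = sum(pi[v] * v / rate[v] for v in (-1, +1))
print(ergodic_average(lambda v, x: v * x, 200_000), target)
```

The empirical average settles near $-5/26 \approx -0.192$ , in line with the theorem.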

Another asymptotic behavior exhibited by the sequence $\{\hat{V}_{k-1},\xi_k\}_{k\geq 1}$ , which can be deduced from the embedded Markov chain $\hat{V}$ , is the mixing property.

Definition 4. Let $\{\eta_k\}_{k \geq 0}$ be a stationary sequence of random variables and denote $\mathcal{F}_k = \sigma(\eta_0,\ldots,\eta_k)$ and $\mathcal{G}_{k+n} = \sigma(\eta_{k+n},\eta_{k+n+1},\ldots)$ . Define

\begin{equation*}\varphi_n = \sup\{|\mathrm{P}\{B|A\}-\mathrm{P}\{B\}|\,:\, A \in \mathcal{F}_k, \mathrm{P}\{A\}>0, B \in \mathcal{G}_{k+n}, k \in \mathbb{N}_0\}.\end{equation*}

If $\lim_{n\to +\infty}\varphi_n = 0$ , then the sequence $\{\eta_k\}_{k \geq 0}$ is said to be $\varphi$ -mixing.

Theorem 6. Let $(\hat{V},S)$ be a Markov renewal process on a finite state space $\mathcal{V}$ and assume that the Markov chain $\hat{V}$ is irreducible and aperiodic. Let $f\;:\;E \to \mathbb{R}$ be any $\mathcal{B}(E)$ -measurable function. Then the sequence of random variables $\{f(\hat{V}_{k-1},\xi_k)\}_{k \geq 1}$ is $\varphi$ -mixing with $\varphi_k = K\rho^{k-1}$ , where K is a positive constant and $\rho \in [0,1)$ .

Proof. By assumption, there exists a stationary distribution $\pi = \{\pi_v; \ v \in \mathcal{V}\}$ for the Markov chain $\hat{V}$ . If the initial distribution is the stationary one, then clearly the sequence $\{f(\hat{V}_{k-1},\xi_k)\}_{k \geq 1}$ is stationary. Set $A = \{\omega\,:\, (\hat{V}_0,\xi_1,\ldots,\hat{V}_{k-1},\xi_k)(\omega) \in \{v_0\}\times I_1 \times\ldots\times\{v_{k-1}\}\times I_k\}$ and $B = \{\omega\,:\, (\hat{V}_{k+n-1},\xi_{k+n},\ldots,\hat{V}_{k+n+r-1},\xi_{k+n+r})(\omega) \in \{w_0\}\times J_1 \times\ldots\times\{w_{r-1}\}\times J_r\}$ , for $v_i,w_j \in \mathcal{V}$ , and intervals $I_i,J_j \subset [0,+\infty)$ , $i=1,\ldots,k$ , $j = 1,\ldots,r$ . Then we have

\begin{multline*}|\mathrm{P}\{A \cap B\}-\mathrm{P}\{A\}\mathrm{P}\{B\}|\\ \leq\int_{I}\int_{J}\pi_{v_0}\prod_{i=1}^{k-1}Q_{v_{i-1}v_{i}}(\mathrm{d}t_i)Q_{v_{k-1}\bullet}(\mathrm{d}t_{k})\prod_{j=1}^{r-1}Q_{w_{j-1}w_{j}}(\mathrm{d}s_{j})Q_{w_{r-1} \bullet }(\mathrm{d}s_{r}) \big |p^{(n)}_{t_k;v_{k-1}w_0}-\pi_{w_0} \big |,\end{multline*}

where $I = I_1\times\ldots\times I_k$ , $J=J_1\times\ldots\times J_r$ . By applying Theorem 4, it follows that

\begin{align*}&|\mathrm{P}\{A \cap B\} -\mathrm{P}\{A\}\mathrm{P}\{B\}| \\[3pt]&\qquad\leq\int_{I}\int_{J}\pi_{v_0}\prod_{i=1}^{k-1}Q_{v_{i-1}v_{i}}(\mathrm{d}t_i)Q_{v_{k-1}\bullet}(\mathrm{d}t_{k})\prod_{j=1}^{r-1}Q_{w_{j-1}w_{j}}(\mathrm{d}s_{j})Q_{w_{r-1} \bullet }(\mathrm{d}s_{r})c\rho^{n-1}\\[3pt]&\qquad\leq c\#(\mathcal{V})\mathrm{P}\{A\}\rho^{n-1},\end{align*}

where $\#$ denotes the cardinality of a set. Thus, we can conclude that

(3.13) \begin{equation}|\mathrm{P}\{B|A\}-\mathrm{P}\{B\}| \leq K\rho^{n-1},\end{equation}

with $K = c\#(\mathcal{V})$ . By a classical extension argument, the class of sets A and the class of sets B satisfying (3.13) can be extended, respectively, to $\sigma(\hat{V}_0,\xi_1,\ldots,\hat{V}_{k-1},\xi_{k})$ and $\sigma(\hat{V}_{k+n-1},\xi_{k+n},\ldots)$ , from which we have the thesis.

Theorem 3 implies that the sequence $\{\xi_k\}_{k\geq1}$ is delayed regenerative (it suffices to take the projection onto the second component). We use this property to prove a basic renewal-type theorem for N, the counting process associated with the Markov renewal process $(\hat{V},S)$ .

Theorem 7. Let $(\hat{V},S)$ be a Markov renewal process on a finite state space $\mathcal{V}$ and assume that the Markov chain $\hat{V}$ is irreducible and that $\mathrm{E}[S^2_1|\hat{V}_0 = v,\hat{V}_1 = w]<+\infty$ for every $v,w \in \mathcal{V}$ . For $v_0 \in \mathcal{V}$ , set

\begin{equation*}\mu = \frac{\mathrm{E} \left [\sum_{k=1}^{\tau_1}\xi_k \big |\hat{V}_0 = v_0 \right ]}{\mathrm{E} [\tau_1 \big |\hat{V}_0 = v_0 ]},\end{equation*}

where $\tau_1 = \inf\{k\geq 1\,:\, \hat{V}_k = v_0\}$ . Then, for every $T>0$ ,

\begin{equation*}\lim_{n \to +\infty}\mathrm{P}\left \{\sup\nolimits_{t \in [0,\,T]}\left |\frac{N(n t)}{n}-\frac{t}{\mu}\right | \geq \epsilon \right \} = 0, \quad \epsilon >0.\end{equation*}

Proof. By Theorem 3, the sequence of random variables $\{\xi_k\}_{k\geq 0}$ is delayed regenerative with respect to the sequence of stopping times $\{\tau_m\}_{m\geq 0}$ , $\tau_0 = 0$ , defined in (3.6). The irreducibility of the chain and the finiteness of the state space $\mathcal{V}$ imply, through (3.5), that $\mathrm{E}[\tau^2_m]<+\infty$ and $\mathrm{E}[\tau_m-\tau_{m-1}] = \mathrm{E}[\tau_1|\hat{V}_0 = v_0]$ for every $m \in \mathbb{N}$ . Furthermore, by means of (3.3) and (3.11), we can write, for every $m \in \mathbb{N}$ ,

(3.14) \begin{align}&\mathrm{E}\left[\left(\sum_{k=\tau_{m-1}+1}^{\tau_{m}}\xi_k\right)^2\right] = \mathrm{E}\left[\left(\sum_{k=1}^{\tau_{1}}\xi_k\right)^2\Bigg|\hat{V}_0 = v_0\right] \nonumber\\[3pt]&\quad= \sum_{n\geq 1}\sum_{\substack{v_1,\ldots,v_{n-1} \\ \in\mathcal{V}\setminus \{v_0\}}}\mathrm{P} \{\hat{V}_1=v_1,\ldots,\hat{V}_n = v_0 \big|\hat{V}_0 = v_0 \} \nonumber \\[3pt]&\qquad \times\mathrm{E}\left [\sum_{k=1}^{n}\xi_k^2+\sum_{k\neq k'}\xi_k\xi_{k'}\bigg|\hat{V}_0 = v_0,\ldots,\hat{V}_n = v_0\right ] \nonumber\\&\quad =\sum_{n\geq 1}\sum_{\substack{v_1,\ldots,v_{n-1} \\ \in\mathcal{V}\setminus \{v_0\}}}\mathrm{P} \{\hat{V}_1=v_1,\ldots,\hat{V}_n = v_0 |\hat{V}_0 = v_0 \}\Bigg\{\sum_{k=1}^{n}\mathrm{E} \left [\xi_1^2 \big |\hat{V}_1 = v_{k},\hat{V}_0 = v_{k-1} \right ]\nonumber\\[3pt] & \qquad +\sum_{k\neq k'}\mathrm{E} [\xi_1 |\hat{V}_1 = v_{k},\hat{V}_0 = v_{k-1} ]\mathrm{E} [\xi_1 |\hat{V}_1 = v_{k'},\hat{V}_0 = v_{k'-1} ]\Bigg\}\nonumber\\[3pt]&\quad\leq\max_{v,v' \in \mathcal{V}}\mathrm{E} [\xi^2_1 |\hat{V}_1 = v',\hat{V}_0 = v ]\sum_{n\geq1}n\mathrm{P} \{\tau_1=n |\hat{V}_0 = v_0 \}\nonumber\\[3pt] & \qquad +\max_{v,v' \in \mathcal{V}} (\mathrm{E} [\xi_1 |\hat{V}_1 = v',\hat{V}_0 = v ] )^2\sum_{n\geq1}n(n-1)\mathrm{P} \{\tau_1=n |\hat{V}_0 = v_0 \}\nonumber\\[3pt] &\quad\leq \max_{v,v' \in \mathcal{V}}\mathrm{E} [\xi^2_1 |\hat{V}_1 = v',\hat{V}_0 = v ]\mathrm{E} [\tau^2_1 |\hat{V}_0 = v_0 ],\end{align}

where we set $v_n = v_0$ in the notation. Thus, $\mathrm{E}[(\sum_{k=\tau_{m-1}+1}^{\tau_{m}}\xi_k)^2]<+\infty$ for every $m \in \mathbb{N}_0$ . Now we can apply Theorem 1 and it follows that the process $\bar{\xi}_n$ defined by

\begin{equation*}\bar{\xi}_n(t) = \frac{1}{\sqrt{n}\sigma}\sum_{k=1}^{\lfloor n t\rfloor}(\xi_k-\mu), \quad t \in [0,T],\end{equation*}

where $\sigma^2 = (\mathrm{E}[\tau_1|\hat{V}_0 = v_0])^{-1}\mathrm{E}[(\sum_{k=1}^{\tau_1}(\xi_k-\mu))^2|\hat{V}_0 = v_0]$ , satisfies $\bar{\xi}_n \Rightarrow W$ in D[0, T]. Now, by Billingsley [Reference Billingsley3, Theorem 14.6], the normalized counting process $\bar{N}_n$ ,

\begin{align*}\bar{N}_n(t) = \frac{N(n t)-\mu^{-1}n t}{\sigma\mu^{-\frac{3}{2}}\sqrt{n}}, \quad t \in [0,T],\end{align*}

satisfies $\bar{N}_n \Rightarrow W$ on D[0, T]. Finally, the continuous mapping theorem leads to

\begin{equation*}\sup\nolimits_{t \in [0,T]}\biggl|\frac{1}{\sqrt{n}}\frac{N(n t)-\mu^{-1}n t}{\sigma\mu^{-\frac{3}{2}}\sqrt{n}}\biggr| \Rightarrow 0;\end{equation*}

hence the claim follows.
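The renewal-type law of large numbers of Theorem 7 can be checked numerically. The sketch below reuses the hypothetical two-state model of the previous sketch (illustrative parameters, not from the paper); by Theorem 9, $\mu = \mathrm{E}_\pi[S_1] = 19/26$ there, so $N(n)/n$ should approach $1/\mu = 26/19$ .

```python
import random

# Renewal law of large numbers N(nt)/n -> t/mu for a hypothetical
# two-state Markov renewal process (illustrative parameters).
random.seed(2)
P = {-1: 0.3, +1: 0.6}          # probability of jumping to state -1
rate = {-1: 1.0, +1: 2.0}

def N_of(T, v0=-1):
    """Counting process N(T) = max{k : S_k <= T}."""
    v, s, k = v0, 0.0, 0
    while True:
        s += random.expovariate(rate[v])    # next arrival time S_{k+1}
        if s > T:
            return k
        k += 1
        v = -1 if random.random() < P[v] else +1

# mu = E_pi[S_1] = (6/13) * 1 + (7/13) * (1/2) = 19/26, so N(n)/n ~ 26/19.
n = 50_000
print(N_of(n) / n, 26 / 19)
```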

Let us define the residual life of the process N as

\begin{equation*}R(t) = t-S_{N(t)}.\end{equation*}

The next theorem describes the asymptotic behavior of the residual life R.

Theorem 8. Let $(\hat{V},S)$ be a Markov renewal process on a finite state space $\mathcal{V}$ and assume that the Markov chain $\hat{V}$ is irreducible and that $\mathrm{E}[S^2_1|\hat{V}_0 = v,\hat{V}_1 = w]<+\infty$ for every $v,w \in \mathcal{V}$ . Then

\begin{equation*}\lim_{n\to +\infty}\mathrm{P}\left \{\sup\nolimits_{t \in [0,T]}\frac{R(nt)}{\sqrt{n}}\geq \epsilon\right \} = 0, \quad \epsilon>0.\end{equation*}

Proof. By definition of N, we have the following inequality:

\begin{equation*}\sup\nolimits_{t \in [0,T]}\frac{n t-S_{N(n t)}}{\sqrt{n}} \leq \sup\nolimits_{t \in [0,T]}\frac{\xi_{N(n t)+1}}{\sqrt{n}} = \max_{k = 0,\ldots,N(n T)}\frac{\xi_{k+1}}{\sqrt{n}}.\end{equation*}

By assumption, the Markov chain $\hat{V}$ admits a stationary distribution; under it, the random variables of the sequence $\{\xi_k\}_{k\geq 1}$ are identically distributed with finite second moment. Therefore, we can apply Lemma 2 and, by Theorem 7, the claim follows.

In the next theorem, we provide a Wald-type identity for the first moment of the sum $\sum_{k=1}^{\tau_1}f(\hat{V}_{k-1},S_k-S_{k-1})$ , where $\tau_1 = \inf\{k\geq1\,:\, \hat{V}_k = v_0\}$ , for some $v_0 \in \mathcal{V}$ .

Theorem 9. Let $(\hat{V},S)$ be a Markov renewal process on a finite state space $\mathcal{V}$ and assume that the Markov chain $\hat{V}$ is irreducible and aperiodic with invariant distribution $\pi$ . Let $f\;:\;E\to \mathbb{R}$ be any $\mathcal{B}(E)$ -measurable function such that, for every $v,w \in \mathcal{V}$ ,

(3.15) \begin{equation}\int_{0}^{+\infty}|f(v,x)|F_{vw}(\mathrm{d}x) < +\infty.\end{equation}

Then

\begin{equation*}\mathrm{E}\left[\sum_{k=1}^{\tau_1}f(\hat{V}_{k-1},S_k-S_{k-1})\bigg|\hat{V}_0 = v_0\right] = \mathrm{E} [\tau_1 |\hat{V}_0 = v_0 ]\mathrm{E}_\pi [f (\hat{V}_0,S_1)].\end{equation*}

Proof. From Theorem 3, the sequence $\{f(\hat{V}_{k-1},S_k-S_{k-1})\}_{k \geq 1}$ is delayed regenerative with respect to the sequence of stopping times $\{\tau_m\}_{m\geq 0}$ , $\tau_0 = 0$ , defined in (3.6). The irreducibility of the chain and the finiteness of the state space $\mathcal{V}$ imply that $\mathrm{E}[\tau_m]<+\infty$ and $\mathrm{E}[\tau_m-\tau_{m-1}] = \mathrm{E}[\tau_1|\hat{V}_0 = v_0]$ for every $m \in \mathbb{N}$ . From (3.11) we have, for every $m \in \mathbb{N}$ ,

\begin{align*}&\mathrm{E}\left[\sum_{k=\tau_{m-1}+1}^{\tau_m} \big |f \big(\hat{V}_{k-1},S_k-S_{k-1} \big) \big|\right] = \mathrm{E}\left[\sum_{k=1}^{\tau_1} \big|f \big(\hat{V}_{k-1},\xi_k \big ) \big |\big|\hat{V}_0 = v_0\right ] \\[3pt]&\quad=\sum_{n\geq 1}\sum_{\substack{v_1,\ldots,v_{n-1} \\ \in\mathcal{V}\setminus \{v_0\}}}\mathrm{P} \big \{\hat{V}_1=v_1,\ldots,\hat{V}_n = v_0 \big |\hat{V}_0 = v_0 \big \} \\[3pt]&\qquad \times\mathrm{E}\left [\sum_{k=1}^{n} \big |f(\hat{V}_{k-1},\xi_k) \big |\big|\hat{V}_0 = v_0,\ldots,\hat{V}_n = v_0\right ] \\[3pt] &\quad=\sum_{n\geq 1}\sum_{\substack{v_1,\ldots,v_{n-1} \\ \in\mathcal{V}\setminus \{v_0\}}}\mathrm{P} \{\hat{V}_1=v_1,\ldots,\hat{V}_n = v_0 |\hat{V}_0 = v_0 \} \\[3pt]&\qquad \times\sum_{k=1}^n\mathrm{E} [ |f (\hat{V}_{0},\xi_1 ) | |\hat{V}_0 = v_{k-1},\hat{V}_1 = v_k ] \\[4pt] &\quad\leq \mathrm{E} [\tau_1 |\hat{V}_0 = v_0 ]\max_{v,w \in \mathcal{V}} (\mathrm{E} [ |f (\hat{V}_{0},\xi_1 ) | |\hat{V}_1 = w,\hat{V}_0 = v ] ) < +\infty,\end{align*}

where we set $v_n = v_0$ and use (3.3). Hence, by Theorem 2,

\begin{equation*}\lim_{n\to+\infty}\frac{1}{n}\sum_{k=1}^nf (\hat{V}_{k-1},S_k-S_{k-1}) = \frac{\mathrm{E} \left [\sum_{k=1}^{\tau_1}f (\hat{V}_{k-1},S_k-S_{k-1}) \big |\hat{V}_0 = v_0 \right ]}{\mathrm{E} [\tau_1 |\hat{V}_0 = v_0 ]}, \quad \text{a.s.}\end{equation*}

Moreover, the sequence $\{f(\hat{V}_{k-1},S_k-S_{k-1})\}_{k\geq 1}$ is also stationary if $\hat{V}_0$ has a distribution given by $\pi$ . The finiteness of $\mathcal{V}$ and (3.15) imply that

\begin{equation*}\sum_{v \in \mathcal{V}}\int_{0}^{+\infty}|f(v,x)|\pi_v Q_{v\bullet}(\mathrm{d}x) < +\infty.\end{equation*}

Therefore, from Theorem 5, it follows that

\begin{equation*}\lim_{n\to+\infty}\frac{1}{n}\sum_{k=1}^nf (\hat{V}_{k-1},S_k-S_{k-1} ) = \mathrm{E}_\pi [f (\hat{V}_0,S_1 ) ], \quad \text{a.s.}\end{equation*}

from which the claim follows.
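The identity of Theorem 9 can be checked by Monte Carlo over regenerative cycles. In the hypothetical two-state model of the previous sketches (illustrative parameters, not from the paper), with $f(v,x)=vx$ and $v_0=-1$ , the right-hand side equals $\mathrm{E}[\tau_1|\hat{V}_0=v_0]\,\mathrm{E}_\pi[f] = (13/6)\cdot({-}5/26) = -5/12$ .

```python
import random

# Monte Carlo check of the Wald-type identity of Theorem 9 for a
# hypothetical two-state Markov renewal process (illustrative parameters).
random.seed(4)
P = {-1: 0.3, +1: 0.6}          # probability of jumping to state -1
rate = {-1: 1.0, +1: 2.0}

def cycle_sum(f, v0=-1):
    """sum_{k=1}^{tau_1} f(V-hat_{k-1}, xi_k) over one cycle started at v0."""
    v, total = v0, 0.0
    while True:
        total += f(v, random.expovariate(rate[v]))
        v = -1 if random.random() < P[v] else +1
        if v == v0:                 # tau_1 reached: the chain is back at v0
            return total

m = 100_000
lhs = sum(cycle_sum(lambda v, x: v * x) for _ in range(m)) / m
# E[tau_1 | V0 = -1] = 1/pi_{-1} = 13/6 and E_pi[V0 * S1] = -5/26,
# so the identity predicts lhs ~ (13/6) * (-5/26) = -5/12.
print(lhs, -5 / 12)
```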

We now consider the continuous-time process associated with the Markov renewal sequence.

Definition 5. The stochastic process $V = (V(t))_{t \geq 0}$ defined by

\begin{equation*}V(t) =\begin{cases}\hat{V}_k &\text{if}\ S_k \leq t < S_{k+1},\\[2pt]\Delta &\text{if}\ t \geq \sup\nolimits_k S_k,\end{cases}\end{equation*}

where $\Delta$ is a point not in $\mathcal{V}$ , is known as the semi-Markov process associated with the Markov renewal process $(\hat{V},S)$ .

Since the state space $\mathcal{V}$ is finite, it holds that $\sup\nolimits_k S_k = +\infty$ almost surely (see [Reference Çinlar4, p. 327]). Then the semi-Markov process V can be expressed as $V(t) = \hat{V}_{N(t)}$ , that is, more explicitly,

\begin{equation*}V(t) = \sum_{k\geq1}\hat{V}_{k-1}\mathsf{1}_{[S_{k-1},S_k)}(t) = \sum_{k\geq1}\hat{V}_{k-1}\mathsf{1}_{\{k-1\}}(N(t)).\end{equation*}
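A trajectory of the semi-Markov process can be built directly from the representation $V(t)=\hat{V}_{N(t)}$ . The sketch below does so for the hypothetical two-state model used in the earlier sketches (illustrative parameters, not from the paper), locating $N(t)$ by bisection on the arrival times.

```python
import bisect
import random

# Building a trajectory of V(t) = V-hat_{N(t)} for a hypothetical
# two-state semi-Markov process (illustrative parameters).
random.seed(5)
P = {-1: 0.3, +1: 0.6}          # probability of jumping to state -1
rate = {-1: 1.0, +1: 2.0}

def simulate(T, v0=-1):
    """Arrival times S_0, S_1, ... covering [0, T] and the embedded states."""
    S, states, v = [0.0], [v0], v0
    while S[-1] <= T:
        S.append(S[-1] + random.expovariate(rate[v]))   # sojourn in state v
        v = -1 if random.random() < P[v] else +1
        states.append(v)                                # V-hat_k
    return S, states

def V(t, S, states):
    """V(t) = V-hat_{N(t)}, with N(t) = max{k : S_k <= t} found by bisection."""
    return states[bisect.bisect_right(S, t) - 1]

S, states = simulate(10.0)
path = [V(k / 10, S, states) for k in range(101)]   # sample V on [0, 10]
print(path[:10])
```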

In what follows, we will use the following theorem (see [Reference Çinlar4, Theorem 5.22]), which describes the asymptotic behavior of the law of a semi-Markov process V.

Theorem 10. Let V be the semi-Markov process associated with the Markov renewal process $(\hat{V},S)$ . If the embedded Markov chain $\{\hat{V}_k\}_{k\geq 0}$ is irreducible, positive recurrent, and aperiodic, then, for any $v,w \in \mathcal{V}$ ,

\begin{equation*}\lim_{t \to+\infty}\mathrm{P}\{V(t) = v|V(0) = w\} = \frac{\pi_v \mathrm{E} [S_1 |\hat{V}_0 = v ]}{\sum_{w\in \mathcal{V}} \pi_w \mathrm{E} [S_1 |\hat{V}_0 = w ]},\end{equation*}

where $\pi$ is the stationary measure of $\{\hat{V}_k\}_{k\geq 0}$ .

Under the hypotheses of Theorem 10, if we also assume that the state space $\mathcal{V}$ is finite, then it follows that

(3.16) \begin{equation}\lim_{t \to +\infty}\mathrm{E}[V(t)] = \frac{\mathrm{E}_\pi [\hat{V}_0S_1 ]}{\mathrm{E}_\pi[S_1]}.\end{equation}
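Consistently with Theorem 10 and (3.16), the long-run time average of V also converges to $\theta=\mathrm{E}_\pi[\hat{V}_0S_1]/\mathrm{E}_\pi[S_1]$ by the ergodic results above. For the hypothetical two-state model of the previous sketches (illustrative parameters, not from the paper), $\theta=(-5/26)/(19/26)=-5/19$ , which the following Monte Carlo sketch reproduces.

```python
import random

# Long-run time average of a hypothetical two-state semi-Markov process,
# compared with theta = E_pi[V0 * S1] / E_pi[S1] (illustrative parameters).
random.seed(6)
P = {-1: 0.3, +1: 0.6}          # probability of jumping to state -1
rate = {-1: 1.0, +1: 2.0}

def time_average(T, v0=-1):
    """(1/T) * integral_0^T V(s) ds, computed sojourn by sojourn."""
    v, s, integral = v0, 0.0, 0.0
    while s < T:
        xi = random.expovariate(rate[v])
        integral += v * min(xi, T - s)   # clip the last sojourn at T
        s += xi
        v = -1 if random.random() < P[v] else +1
    return integral / T

# theta = (-6/13 + 7/26) / (6/13 + 7/26) = -5/19.
print(time_average(50_000.0), -5 / 19)
```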

4. Weak convergence of the integral of semi-Markov processes

Let $(\Omega,\mathcal{G},\mathrm{P})$ be a probability space on which is defined $(\hat{V},S)\,:\,\Omega\to E^{\infty}$ , a homogeneous Markov renewal process with semi-Markov kernel Q, where $E^{\infty}$ is the product space of $E=\mathcal{V}\times[0,+\infty)$ , with $\mathcal{V}$ the finite state space of $\hat{V}$ . We endow $E^{\infty}$ with the $\sigma$ -field $\mathcal{C}$ generated by the cylinder sets. We assume that the finite Markov chain $\hat{V}$ is irreducible and aperiodic with unique invariant measure $\pi$ . Let $\xi\,:\,\Omega \to [0,+\infty)^{\infty}$ be defined by $\xi_k = S_k -S_{k-1}$ for $k \in \mathbb{N}$ , with $\xi_0 = 0$ . Let $N\,:\,\Omega\to D[0,+\infty)$ be the counting process associated with S, $N(t) = \max\{k \in \mathbb{N}_0\,:\,S_k \leq t\}$ , $t\geq 0$ , where $D[0,+\infty)$ is the space of càdlàg functions defined on $[0,+\infty)$ endowed with the Skorokhod topology, and let $V\,:\,\Omega\to D[0,+\infty)$ be the semi-Markov process associated with $(\hat{V},S)$ , i.e. $V(t) = \hat{V}_{N(t)}$ . Finally, let $X\,:\,\Omega\to C[0,+\infty)$ be the integral of V, where $C[0,+\infty)$ is the space of continuous functions on $[0,+\infty)$ endowed with the topology of the uniform metric,

\begin{equation*}X(t) = \int_0^tV(s)\,\mathrm{d}s, \quad t \geq 0.\end{equation*}

The stochastic process X can be thought of as the position of a particle moving with velocity V. The motion takes velocities given by $\hat{V}$ , each maintained for a random amount of time, determined by $\xi$ , whose distribution depends on both the current velocity and the following one. Hence, each element of the sequence of random variables $\{\hat{V}_{k-1},\xi_k\}_{k\geq 1}$ contains the information about the velocity of the kth displacement and the duration of the time interval in which the motion moves according to this velocity.

By observing that the time interval [0,t] can be partitioned with respect to the sequence of the arrival times S, we obtain the following equivalent representation of X:

\begin{equation*}X(t)=\sum_{k=1}^{N(t)}\hat{V}_{k-1}(S_{k} -S_{k-1})+\hat{V}_{N(t)}( t- S_{N(t)}), \quad t \geq 0.\end{equation*}

The aim of this section is to identify conditions under which X admits a weak limit. It turns out that the suitable reparameterization of V is as follows. By introducing a parameter $\lambda>0$ , we define the scaled counting process $N_\lambda = (N_\lambda (t))_{t\geq 0}$ as

(4.1) \begin{equation}N_\lambda(t) = \max\{k \in \mathbb{N}_0\,:\, \lambda^{-1}S_k \leq t\} = N(\lambda t).\end{equation}

Then, we normalize the semi-Markov process $V(\lambda t)$ associated to $N(\lambda t)$ by defining the new process $V_\lambda = (V_{\lambda}(t))_{t\geq 0}$ as

(4.2) \begin{equation}V_{\lambda}(t) = \sqrt{\lambda}(V(\lambda t)-\theta), \quad t \geq 0,\end{equation}

where, according to (3.16), the parameter $\theta$ is the limit as $\lambda\to+\infty$ of the mean of $V(\lambda t)$ :

\begin{equation*}\theta =\lim_{\lambda \to +\infty}\mathrm{E} [V(\lambda t) ]=\frac{\mathrm{E}_\pi [\hat{V}_0S_{1} ]}{\mathrm{E}_\pi[S_1]}.\end{equation*}

The corresponding integral of the normalized semi-Markov process $V_\lambda$ is denoted $X_\lambda$ and takes the form

\begin{equation*}X_\lambda(t) = \frac{1}{\sqrt{\lambda}} \int_0^{\lambda t} (V(s)-\theta)\,\mathrm{d}s = \lambda^{-1/2}(X(\lambda t)-\theta\lambda t), \quad t \geq 0,\end{equation*}

which can equivalently be expressed as

(4.3) \begin{equation}X_\lambda(t) = \frac{1}{\sqrt{\lambda}}\sum_{k=1}^{N(\lambda t)} (\hat{V}_{k-1}-\theta )(S_{k} -S_{k-1})+\frac{1}{\sqrt{\lambda}} (\hat{V}_{N(\lambda t)}-\theta ) (\lambda t- S_{N(\lambda t)} ), \quad t \geq 0.\end{equation}

The underlying idea of the normalization introduced in (4.2) is to ensure that a sum of random variables with null mean (assuming that $\hat{V}_{0}$ has distribution $\pi$ ) appears in (4.3), with the classical central-limit-type scaling. From a physical point of view, this reparameterization guarantees that the number of changes of direction grows at the order of the square of the velocity. Indeed, as (4.1) shows, the total number of changes of velocity in a unit time interval increases as $\lambda$ , while (4.2) shows that the normalized velocity grows as $\sqrt{\lambda}$ . This is the standard equilibrium condition allowing the convergence of the telegraph equation to the heat equation. Heuristically, it has the same meaning as the condition that arises in the derivation of Brownian motion as the limit of a random walk. Indeed, to obtain Brownian motion, a particle undergoing a random walk must perform displacements whose number grows at the same order as the inverse square of their lengths (see [Reference Zauderer36, Sections 1.1 and 1.2]).

Having introduced the proper normalization, our task is to show the weak convergence of $X_\lambda$ as $\lambda \to +\infty$ to a scaled Brownian motion in the space $C[0,+\infty)$ .

In (4.3), we can see that $X_\lambda$ decomposes into the sum of a random walk with a random number of summands and an interpolating term, which is proportional to the residual life of the counting process $N_\lambda$ . As the number of changes of direction increases, arrival times occur more and more frequently; informally, in the limit every instant becomes an arrival time. This behavior is formally captured by Theorem 8, which shows that

\begin{equation*}\sqrt{\lambda}\bigg(\frac{S_{N(\lambda t)}}{\lambda}-t\bigg) = \frac{S_{N(\lambda t)} -\lambda t}{\sqrt{\lambda}} \Rightarrow 0.\end{equation*}

In words, the difference between the current time t and the normalized last arrival time $\lambda^{-1}S_{N(\lambda t)}$ approaches zero faster than $\lambda^{-1/2}$ . This result allows us to study the asymptotic law of $X_\lambda(t)$ through the law of $X_\lambda(\lambda^{-1}S_{N(\lambda t)})$ , which is a random walk whose random number of steps is given by the counting process $N(\lambda t)$ . Hence, we define the random function on $D[0,+\infty)$ , $Y_\lambda = (Y_\lambda(t))_{t\geq 0}$ :

\begin{equation*}Y_\lambda(t) =X_\lambda \big(\lambda^{-1}S_{N(\lambda t)} \big) =\frac{1}{\sqrt{\lambda}}\sum_{k=1}^{N(\lambda t)}(\hat{V}_{k-1}-\theta)(S_{k} -S_{k-1}), \quad t \geq 0.\end{equation*}

From this consideration, we readily obtain the following theorem.

Theorem 11. Let $(\hat{V},S)$ be a Markov renewal process on a finite state space $\mathcal{V}$ and assume that the Markov chain $\hat{V}$ is irreducible and that $\mathrm{E}[S^2_1|\hat{V}_0 = v,\hat{V}_1 = w]<+\infty$ for every $v,w \in \mathcal{V}$ . Then

\begin{equation*}\lim_{\lambda \to +\infty}\mathrm{P} \left \{\sup\nolimits_{t \in [0,T]} \big |X_\lambda(t)-Y_\lambda(t)| \geq \epsilon \right \} = 0, \quad \epsilon,T >0.\end{equation*}

In light of the interpretation of the random variables $(\hat{V}_{k-1},\xi_k)$ , the product $\eta_k = (\hat{V}_{k-1}-\theta)\xi_k$ , $k \in \mathbb{N}$ , describes the length of the normalized kth displacement. The random variables $\eta_k$ have null mean and variance given by

\begin{align*}&\mathrm{E}_\pi\big[\eta^2_k\big] = \sum_{v\in \mathcal{V}}\int_0^{+\infty}(v-\theta)^2t^2\pi_vQ_{v\bullet}(\mathrm{d}t),\end{align*}

while the autocovariance is

\begin{equation*}\mathrm{E}_\pi[\eta_1\eta_{1+k}] = \int_{0}^{+\infty}\int_{0}^{+\infty}\sum_{v\in \mathcal{V}}\sum_{w\in \mathcal{V}}(v-\theta)(w-\theta)st\pi_{v}Q_{v\bullet}(\mathrm{d}s)p^{(k)}_{s;vw}Q_{w\bullet}(\mathrm{d}t), \quad k \in \mathbb{N}.\end{equation*}

From Theorem 4 and the fact that $\mathrm{E}_{\pi}[\eta_k] = 0$ , we have the following estimate of the covariances of $\eta_k$ :

(4.4) \begin{align}\left |\mathrm{E}_\pi \left [\eta_1\eta_{1+k} \right ] \right | &= \left|\int_{0}^{+\infty}\int_{0}^{+\infty}\sum_{v\in \mathcal{V}}\sum_{w\in \mathcal{V}}(v-\theta)(w-\theta)st\pi_{v}Q_{v\bullet}(\mathrm{d}s)Q_{w\bullet}(\mathrm{d}t) \big(p^{(k)}_{s;vw}-\pi_w \big )\right | \nonumber\\[3pt]&\leq \frac{K\rho^{k-1}}{\min_{v'\in \mathcal{V}}\pi_{v'}}\int_{0}^{+\infty}\int_{0}^{+\infty}\sum_{v\in \mathcal{V}}\sum_{w\in \mathcal{V}} \left |(v-\theta)(w-\theta) \right |st\pi_{v}\pi_wQ_{v\bullet}(\mathrm{d}s)Q_{w\bullet}(\mathrm{d}t)\nonumber\\[3pt]&= \frac{K\rho^{k-1}}{\min_{v'\in \mathcal{V}}\pi_{v'}}\big(\mathrm{E}_\pi \big[ \big|\hat{V}_0-\theta \big|\cdot\xi_1 \big]\big)^2, \quad k \in \mathbb{N},\end{align}

for some positive constant K and $\rho \in (0,1)$ .

In Lemma 1, we have seen that, if the number of summands asymptotically behaves as a deterministic sequence, a random walk with a random number of summands converges weakly to the same limit as the corresponding random walk with a deterministic number of terms. Hence, we define the random function $\hat{Y}_\lambda = (\hat{Y}_\lambda(t), t \in [0,T])$ on D[0, T], for some $T>0$ :

(4.5) \begin{equation}\hat{Y}_\lambda(t) = \frac{1}{\sqrt{\lambda}}\sum_{k=1}^{\lfloor \lambda t \rfloor}\eta_k, \quad t \in [0,T].\end{equation}

The stationarity of $\{\eta_k\}_{k\geq 1}$ implies that the variance of $\hat{Y}_\lambda$ becomes

(4.6) \begin{align}\mathrm{E}_\pi\big[\hat{Y}^2_\lambda(t) \big] &= \frac{\lfloor \lambda t \rfloor}{\lambda}\mathrm{E}_\pi \big[ (\hat{V}_0-\theta )^2\xi_1^2 \big ]\nonumber\\[10pt]&\quad +2\frac{\lfloor \lambda t \rfloor}{\lambda}\sum_{k=1}^{\lfloor \lambda t \rfloor-1}\biggl(1-\frac{k}{\lfloor \lambda t \rfloor}\biggr)\mathrm{E}_\pi[(\hat{V}_0-\theta )\xi_1 (\hat{V}_k-\theta )\xi_{1+k}].\end{align}

As a consequence of (4.4), the Cesàro sum appearing in (4.6) converges, and therefore

\begin{equation*}\lim_{\lambda \to +\infty}\mathrm{E}_\pi \big[\hat{Y}^2_\lambda(t) \big] = t\gamma^2,\end{equation*}

where

(4.7) \begin{align}\gamma^2& = \sum_{v\in \mathcal{V}}\int_0^{+\infty}(v-\theta)^2s^2\pi_vQ_{v\bullet}(\mathrm{d}s)\nonumber\\[5pt]& \quad +2\sum_{k\geq 1}\sum_{v\in \mathcal{V}}\sum_{w\in \mathcal{V}}\int_{0}^{+\infty}\int_{0}^{+\infty}(v-\theta)(w-\theta)st\pi_{v}Q_{v\bullet}(\mathrm{d}s)Q_{w\bullet}(\mathrm{d}t)p^{(k)}_{s;vw}.\end{align}

In the next theorem, we show the weak convergence of $\hat{Y}_\lambda$ to a scaled Brownian motion and we derive an alternative expression of the limiting variance.

Theorem 12. Let $(\hat{V},S)$ be a Markov renewal process on a finite state space $\mathcal{V}$ and assume that the Markov chain $\hat{V}$ is irreducible and aperiodic with invariant distribution $\pi$ and that $\mathrm{E}_\pi[S^2_1]<+\infty$ . Then the process $\hat{Y}_\lambda$ defined in (4.5) satisfies $\hat{Y}_\lambda \Rightarrow \gamma W$ in D[0,T], where $\gamma$ is expressed in (4.7). Moreover, for every $v_0 \in \mathcal{V}$ , it holds that

(4.8) \begin{equation}\gamma^2 =\pi_{v_0}\mathrm{E}\bigg[\bigg (\sum_{k=1}^{\tau_1}\eta_k\bigg )^{2}\bigg|\hat{V}_0 = v_0\bigg],\end{equation}

where $\tau_1 = \inf\{k\geq1\,:\, \hat{V}_k = v_0\}$ .

Proof. By Theorem 6, the stationary sequence of random variables $\{\eta_k\}_{k\geq 1}$ is $\varphi$ -mixing with parameter $\varphi_n = K\rho^{n-1}$ . Therefore, we can apply Theorem 19.2 of [Reference Billingsley3] to conclude that $\hat{Y}_\lambda \Rightarrow \gamma W$ in the space D[0, T], where $\gamma^2 = \mathrm{E}_\pi[\eta^2_1]+2\sum_{k\geq1}\mathrm{E}_\pi[\eta_1\eta_{k+1}]$ , which coincides with (4.7).

To prove the second statement of the theorem, we use the regenerative property of $\{\hat{V}_{k-1},\xi_k\}_{k\geq 1}$ . Fix a velocity $v_0 \in \mathcal{V}$ ; by Theorem 3, $\{\eta_k\}_{k\geq 1}$ forms a delayed regenerative process with respect to the sequence $\{\tau_m\}_{m\geq 0}$ defined in (3.6). The irreducibility of the chain and the finiteness of the state space $\mathcal{V}$ imply, through (3.5), that $\mathrm{E}[\tau^2_m]<+\infty$ and $\mathrm{E}[\tau_m-\tau_{m-1}] = \mathrm{E}[\tau_1|\hat{V}_0 = v_0] = \pi_{v_0}^{-1}$ for every $m \in \mathbb{N}$ . Moreover, from Theorem 9, it holds that

\begin{equation*}\mathrm{E}\bigg[\sum_{k=1}^{\tau_1}\eta_k\bigg|\hat{V}_0 = v_0\bigg] = \pi_{v_0}^{-1}\mathrm{E}_\pi[\eta_1]=0.\end{equation*}

Now, (3.11) and (3.14) entail that, for every $m \in \mathbb{N}$ ,

\begin{align*}\mathrm{E}\bigg[\bigg(\sum_{k=\tau_{m-1}+1}^{\tau_{m}}|\eta_k|\bigg)^2\bigg]& = \mathrm{E}\bigg[\bigg(\sum_{k=1}^{\tau_{1}}|\eta_k|\bigg )^2\bigg|\hat{V}_0 = v_0\bigg]\\[3pt]&\leq \max_{v \in \mathcal{V}}(v-\theta)^2\;\mathrm{E}\bigg[\bigg(\sum_{k=1}^{\tau_{1}}\xi_k\bigg)^2\bigg|\hat{V}_0 = v_0\bigg] \\[3pt]&\leq \max_{v \in \mathcal{V}}(v-\theta)^2\max_{v,w \in \mathcal{V}}\mathrm{E} [\xi^2_1 \big |\hat{V}_1 = w,\hat{V}_0 = v ]\mathrm{E} [\tau^2_1 \big |\hat{V}_0 = v_0 ].\end{align*}

Thus, $\mathrm{E}\big[\big(\sum_{k=\tau_{m-1}+1}^{\tau_{m}}|\eta_k|\big)^2\big]<+\infty$ for every $m \in \mathbb{N}$, so we can apply Theorem 1 and obtain

\begin{equation*}\hat{Y}_{\lambda} \Rightarrow \gamma W,\end{equation*}

in D[0, T], where

\begin{equation*}\gamma^2 = \pi_{v_0}\mathrm{E}\left [\left (\sum_{k=1}^{\tau_1}\eta_k\right )^{2}\bigg|\hat{V}_0 = v_0\right].\end{equation*}

We are now in a position to prove the main theorem of this paper.

Theorem 13. Let $V = (V(t))_{t \geq 0}$ be a semi-Markov process with respect to $(\hat{V},S)$ , a Markov renewal process on a finite state space $\mathcal{V}$ , where the embedded Markov chain $\hat{V}$ is irreducible and aperiodic with invariant distribution $\pi$ . Let us put $\mu=\mathrm{E}_\pi[S_1]$ and assume that $\mathrm{E}_\pi[S^2_1]<+\infty$ . Then the process $X_\lambda = (X_\lambda(t))_{t \geq 0}$ defined by

\begin{equation*}X_\lambda(t) =\frac{1}{\sqrt{\lambda}} \int_0^{\lambda t}(V(s)-\theta)\mathrm{d}s,\end{equation*}

where $\theta = \mu^{-1}\mathrm{E}_\pi[\hat{V}_0S_1]$ , satisfies the following weak limit in $C[0,+\infty)$ :

\begin{equation*}X_\lambda \Rightarrow \mu^{-1/2}\gamma W,\end{equation*}

where $\gamma$ is provided by (4.7) or (4.8).

Proof. From Theorem 9, we have that $\mu = (\mathrm{E}[\tau_1|\hat{V}_0 = v_0])^{-1}\mathrm{E}[\sum_{k=1}^{\tau_1}\xi_k|\hat{V}_0 = v_0]$ . Let us assume, without loss of generality, that $\mu > 1$ . Then, by Theorem 7, the random function of D[0, T],

\begin{equation*}\Phi_{\lambda}(t) =\begin{cases}\frac{N(\lambda t)}{\lambda}, &\frac{N(\lambda T)}{\lambda} \leq T,\\[2pt]\frac{t}{\mu}, &\frac{N(\lambda T)}{\lambda} > T,\end{cases}\end{equation*}

satisfies $\Phi_{\lambda}\Rightarrow \phi$ on D[0, T] as $\lambda \to +\infty$, where $\phi(t) = \mu^{-1} t$ . By Theorems 7 and 12 and the lemma on page 151 of [Reference Billingsley3], we obtain $\hat{Y}_\lambda \circ \Phi_\lambda \Rightarrow \gamma W \circ \phi$ . Moreover, $Y_\lambda = \hat{Y}_\lambda \circ \Phi_\lambda$ on the set $\{\lambda^{-1}N(\lambda T) \leq T\}$ , the probability of which goes to one by, again, Theorem 7 and the fact that $\mu >1$ . In conclusion, $Y_\lambda \Rightarrow \gamma W \circ \phi$ in D[0, T], where $\gamma W \circ \phi$ is a Gaussian process with the same distribution as $\mu^{-1/2}\gamma W$ . Now, let $h\;:\;C[0,T]\to D[0,T]$ be the identity map. By applying Theorem 11, we can state that $h\circ X_\lambda \Rightarrow \mu^{-1/2}\gamma W$ on D[0, T]. Since $\mathrm{P}\{X_\lambda \in C[0,T]\} \equiv \mathrm{P}\{\mu^{-1/2}\gamma W \in C[0,T]\} = 1$ , by Example 2.9 of [Reference Billingsley3], $X_\lambda \Rightarrow \mu^{-1/2}\gamma W$ on C[0, T]. From this, we finally obtain that $X_\lambda \Rightarrow\mu^{-1/2}\gamma W$ in $C[0,+\infty)$ .

As a simple corollary of the previous theorem (easily proved by means of the continuous mapping theorem), we have a weak law of large numbers (in a functional setting) for the process $X(\lambda t)$ .

Corollary 1. Let $V = (V(t))_{t \geq 0}$ be a semi-Markov process with respect to $(\hat{V},S)$ , a Markov renewal process on a finite state space $\mathcal{V}$ , where the embedded Markov chain $\hat{V}$ is irreducible and aperiodic with invariant distribution $\pi$ . Let us put $\mu=\mathrm{E}_\pi[S_1]$ and assume that $\mathrm{E}_\pi[S^2_1]<+\infty$ . Then, $X = (X(t))_{t \geq 0}$ , the integral of V,

\begin{equation*}X(t) = \int_0^tV(s)\,\mathrm{d}s\end{equation*}

satisfies the following limit:

\begin{equation*}\lim_{\lambda \to +\infty}\mathrm{P}\left \{\sup\nolimits_{t \in [0,T]}\left |\frac{X(\lambda t)}{\lambda} - t\frac{\mathrm{E}_\pi \left [\hat{V}_0S_1 \right ]}{\mathrm{E}_\pi[S_1]}\right |>\epsilon\right \} = 0, \quad \epsilon>0.\end{equation*}
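Corollary 1 lends itself to a quick numerical illustration. The following sketch is only illustrative: the three-state chain, the velocities, and the state-dependent exponential holding rates below are arbitrary choices, not taken from the paper. It simulates one long trajectory of a semi-Markov process and compares the time average $X(\lambda t)/\lambda$ at $t=T$ with $\theta T$ .

```python
import random

random.seed(1)

# Hypothetical example: three velocities, embedded chain P, and
# state-dependent exponential holding rates r[i], so E[S_1 | V_0 = v_i] = 1/r[i].
V = [1.0, -2.0, 0.5]
P = [[0.0, 0.5, 0.5],
     [0.3, 0.0, 0.7],
     [0.6, 0.4, 0.0]]
r = [1.0, 2.0, 0.5]

# Invariant distribution of the embedded (irreducible, aperiodic) chain,
# obtained by iterating the transition matrix.
pi = [1 / 3] * 3
for _ in range(500):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

# theta = E_pi[V_0 S_1] / E_pi[S_1], using E[S_1 | V_0 = v_i] = 1/r[i].
theta = sum(pi[i] * V[i] / r[i] for i in range(3)) / sum(pi[i] / r[i] for i in range(3))

# One long path of X(t) = int_0^t V(s) ds up to the horizon T.
T, t, x, state = 50_000.0, 0.0, 0.0, 0
while t < T:
    hold = min(random.expovariate(r[state]), T - t)
    x += V[state] * hold
    t += hold
    u, acc, nxt = random.random(), 0.0, 2
    for j in range(3):
        acc += P[state][j]
        if u <= acc:
            nxt = j
            break
    state = nxt

# x / T should be close to theta, with a deviation of order T^{-1/2}.
```

The residual deviation of $x/T$ from $\theta$ is of the order predicted by the fluctuation analysis of Theorem 13.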

Remark 1. Under the assumptions of Theorem 13, we consider an alternative normalization of the integral of a semi-Markov process, which introduces a drift. Let $\bar{X}_\lambda = (\bar{X}_\lambda(t))_{t\geq 0}$ be the random function of $C[0,+\infty)$ defined as the integral of $\bar{V}_\lambda = (\bar{V}_\lambda(t))_{t\geq 0}$ , $\bar{V}_\lambda(t) = \mathrm{E}[V(\lambda t)] + \sqrt{\lambda}(V(\lambda t)-\theta)$ :

\begin{align*}\bar{X}_\lambda(t) & = \frac{1}{\lambda}\int_0^{\lambda t}\mathrm{E}[V(s)]\,\mathrm{d}s + \frac{1}{\sqrt{\lambda}}\int_0^{\lambda t}(V(s)-\theta)\,\mathrm{d}s \nonumber\\&=\lambda^{-1}\mathrm{E}[X(\lambda t)] + \lambda^{-1/2}(X(\lambda t)-\theta \lambda t).\end{align*}

Corollary 1 ensures that, for every $t \in [0,T]$ ,

\begin{equation*}\lambda^{-1}X(\lambda t) = \frac{1}{\lambda}\int_0^{\lambda t}V(s)\,\mathrm{d}s\Rightarrow \theta t.\end{equation*}

Moreover, the uniform integrability of the sequence $\lambda^{-1}X(\lambda t)$ (which follows from the boundedness of V) entails

(4.9) \begin{equation}\lim_{\lambda \to +\infty}\lambda^{-1}\mathrm{E}[X(\lambda t)] = \lim_{\lambda \to +\infty}\frac{1}{\lambda}\int_0^{\lambda t}\mathrm{E}[V(s)]\,\mathrm{d}s =\theta t,\end{equation}

for every $t \in [0,T]$ . As a result of (4.9), Theorem 13 implies that

\begin{equation*}\bar{X}_\lambda \Rightarrow \text{i}\theta + \mu^{-1/2}\gamma W,\end{equation*}

where i is the identity function, $\mu = \mathrm{E}_\pi[S_1]$ , and $\gamma$ is given in (4.7) or (4.8). We remark that (4.9) can be also proved by means of (3.16) and the dominated convergence theorem (after the change of variable $s' = s/\lambda$ ).

Remark 2. Let V be a semi-Markov process satisfying the conditions of Theorem 13 and let $f\;:\;\mathcal{V}\to \mathbb{R}$ be a measurable function. Let $X = (X(t))_{t\geq 0}$ be the integral of $f\circ V$ , that is

\begin{equation*}X(t) = \int_0^tf(V(s))\,\mathrm{d}s = \sum_{k=1}^{N(t)}f (\hat{V}_{k-1} )(S_k-S_{k-1})+f (\hat{V}_{N(t)} )(t-S_{N(t)}), \quad t\geq 0.\end{equation*}

Then the process X is known as an additive functional of the semi-Markov process V. By Theorem 3, the sequence $\{f(\hat{V}_{k-1})(S_k-S_{k-1})\}_{k\geq 1}$ is delayed regenerative with respect to the stopping times (3.6), as well as $\varphi$ -mixing by Theorem 6. Therefore, by following essentially the same arguments as the proof of Theorem 13, it can be proved that $X_\lambda(t) = \lambda^{-\frac{1}{2}}(X(\lambda t)-\lambda \theta t)$ , where $\theta =\mu^{-1}\mathrm{E}_\pi[f(\hat{V}_0)S_1]$ and $\mu = \mathrm{E}_\pi[S_1]$ , satisfies

\begin{equation*}X_\lambda \Rightarrow \mu^{-\frac{1}{2}}\gamma W,\end{equation*}

with

\begin{align*}\gamma^2 & =\pi_{v_0}\mathrm{E}\bigg[\bigg(\sum_{k=1}^{\tau_1}(f(\hat{V}_{k-1})-\theta)(S_k-S_{k-1})\bigg)^{2}\bigg |\hat{V}_0 = v_0\bigg]\\[3pt]&=\sum_{v\in \mathcal{V}}\int_0^{+\infty}\!\!\!(f(v)-\theta)^2s^2\pi_vQ_{v\bullet}(\mathrm{d}s)\nonumber\\[3pt]& \quad +2\sum_{k\geq 1}\sum_{v\in \mathcal{V}}\sum_{w\in \mathcal{V}}\int_{0}^{+\infty}\!\!\!\int_{0}^{+\infty}\!\!\!(f(v)-\theta)(f(w)-\theta)st\pi_{v}Q_{v\bullet}(\mathrm{d}s)Q_{w\bullet}(\mathrm{d}t)p^{(k)}_{s;vw}.\end{align*}

5. Weak convergence of the integral of an alternating renewal process

Let $\mathcal{V} = \{v_1,\ldots,v_m\}$ and define $v_{rm+i} \,:\!=\, v_i$ for $r \in \mathbb{N}_0$ and $i=1,\ldots,m$ . An alternating renewal process is a particular case of a semi-Markov process in which a particle deterministically moves from state $v_k$ to state $v_{k+1}$ and the cycle restarts after m steps. This means that the Markov renewal kernel is of the form

\begin{equation*}\mathrm{P} \{\hat{V}_{k+1} = v_{j},S_{k+1}-S_{k}\leq t |\hat{V}_{k} = v_i \}=Q_{v_iv_j}(t)=\mathsf{1}_{\{v_{i+1}\}}(v_j)F_{v_i v_{j}}(t),\end{equation*}

for every $i,j=1,\ldots,m$ , $k\in \mathbb{N}_0$ , and $t \geq 0$ . From the definition, it follows that the embedded Markov chain $\hat{V}$ is periodic with a period equal to m and that the transition probabilities are given by $p_{v_iv_j} = \mathsf{1}_{\{v_{i+1}\}}(v_j)$ , while the invariant distribution is given by $\pi_{v_j} = m^{-1}$ , $j=1,\ldots,m$ . Moreover it holds that

\begin{align*}& \mathrm{P} \{S_{k+1}-S_{k}\leq t |\hat{V}_{k} = v_i,\hat{V}_{k+1} = v_{i+1} \} = F_{v_iv_{i+1}}(t) \\[3pt] &\quad= \mathrm{P} \{S_{k+1}-S_{k}\leq t |\hat{V}_{k} = v_i \} = Q_{v_i\bullet}(t).\end{align*}

We note that, if $\mathrm{P}\{\hat{V}_0 = v_1\}=1$ , then $\{\hat{V}_k\}_{k\geq 0}$ becomes a deterministic sequence. Indeed, $\mathrm{P}\{\hat{V}_k = v_i\} = 1$ if there exists $r \in \mathbb{N}_0$ such that $k = rm+i-1$ , $i=1,\ldots,m$ , and, by letting $\xi_k = S_k - S_{k-1}$ , the random variables forming the sequence $\{\xi_k\}_{k \geq 1}$ are independent and satisfy $\xi_{rm+i} =_d \xi_i$ .

The alternating renewal process $V = (V(t))_{t \geq 0}$ based on $\{\hat{V}_k,S_k\}_{k \geq 0}$ can be written as

\begin{equation*}V(t) = \sum_{k=1}^m\hat{V}_{k-1}\sum_{r\geq 0}\mathsf{1}_{\{rm+k-1\}}(N(t)),\end{equation*}

where N is the counting process related to S and defined in (3.2). Let us denote $\mu_i = \mathrm{E}[\xi_1|\hat{V}_0 = v_i]$ and $\sigma^2_i = \mathbb{V}[\xi_1|\hat{V}_0 = v_i]$ . Then

(5.1) \begin{equation}\theta = \frac{\mathrm{E}_\pi [\hat{V}_0S_1 ]}{\mathrm{E}_\pi[S_1]}=\frac{\sum_{i=1}^mv_i\mu_i}{\sum_{i=1}^m\mu_i},\end{equation}

and, in this case, the integral of $V_{\lambda}(t) = \sqrt{\lambda}(V(\lambda t)-\theta)$ can be written as

\begin{equation*}X_{\lambda}(t) = \sum_{k=1}^m(\hat{V}_{k-1}-\theta)\int_0^t\sum_{r\geq 0}\mathsf{1}_{\{rm+k-1\}}(N(\lambda s))\,\mathrm{d}s,\end{equation*}

or also, as in (4.3),

\begin{align*}X_{\lambda}(t) = \frac{1}{\sqrt{\lambda}}\sum_{k = 1}^{N(\lambda t)} (\hat{V}_{k-1}-\theta )\xi_k + \frac{1}{\sqrt{\lambda}} (\hat{V}_{N(\lambda t)}-\theta )\Bigg(\lambda t-\sum_{k = 1}^{N(\lambda t)}\xi_k\Bigg).\end{align*}

Let us denote

(5.2) \begin{equation}\mu = \frac{1}{m}\sum_{i=1}^m\mu_i, \quad\gamma^2 =\frac{1}{m}\sum_{i=1}^m\sigma^2_i\bigg (v_i-\frac{\sum_{i=1}^mv_i\mu_i}{\sum_{i=1}^m\mu_i}\bigg )^2.\end{equation}

Since the alternating renewal process is periodic, we cannot apply Theorem 13 directly. The next statement provides the conditions for the weak convergence in this new framework.

Theorem 14. Let $V = (V(t))_{t \geq 0}$ be an alternating renewal process with state space $\mathcal{V}$ and assume that $\mathrm{E}[S_1^2|\hat{V}_0 = v]<+\infty$ for every $v \in \mathcal{V}$ . Then the process $X_\lambda = (X_\lambda(t))_{t \geq 0}$ defined by

\begin{equation*}X_\lambda(t) =\frac{1}{\sqrt{\lambda}} \int_0^{\lambda t}(V(s)-\theta)\,\mathrm{d}s,\end{equation*}

where $\theta$ is defined in (5.1), satisfies the following weak limit in $C[0,+\infty)$ :

\begin{equation*}X_\lambda \Rightarrow\mu^{-1/2}\gamma W,\end{equation*}

where $\mu$ and $\gamma$ are provided in (5.2).

Proof. From Theorem 3, the sequence $\{(\hat{V}_{k-1},\xi_k)\}_{k\geq 1}$ forms a delayed regenerative process with regeneration epochs $\tau_n = \inf\{k>\tau_{n-1}\,:\, \hat{V}_k = v_1\}$ , which satisfy $\tau_{n+1}-\tau_{n} = m$ for $n \in \mathbb{N}$ . By (3.11), for every $n \in \mathbb{N}$ ,

\begin{align*}\frac{\mathrm{E}\Big[\sum_{k=\tau_{n}+1}^{\tau_{n+1}}\hat{V}_{k-1}\xi_k\Big]}{\mathrm{E}[\tau_{n+1}-\tau_n]} = \frac{1}{m}\mathrm{E}\Bigg[\sum_{k=1}^{\tau_1}\hat{V}_{k-1}\xi_k\bigg|\hat{V}_0 = v_1\Bigg]=\frac{1}{m}\sum_{k=1}^{m}v_k\mathrm{E} [\xi_k |\hat{V}_0 = v_1 ] =\frac{1}{m}\sum_{i=1}^{m}v_i\mu_i,\end{align*}

from which we have

(5.3) \begin{align}\mathrm{E}\bigg[\sum_{k=\tau_{1}+1}^{\tau_{2}} (\hat{V}_{k-1}-\theta )\xi_k\bigg] = \sum_{i=1}^{m}v_i\mu_i-\frac{\sum_{i=1}^mv_i\mu_i}{\sum_{i=1}^m\mu_i}\sum_{i=1}^m\mu_i = 0.\end{align}

Moreover,

\begin{align*}\mathrm{E}\bigg[\bigg(\sum_{k=\tau_{n}+1}^{\tau_{n+1}} |\hat{V}_{k-1}-\theta |\xi_k\bigg)^2\bigg] & = \mathrm{E} \bigg [\bigg (\sum_{k=1}^{\tau_{1}} |\hat{V}_{k-1}-\theta |\xi_k\bigg)^2\bigg|\hat{V}_0 = v_1\bigg ] \\[3pt]&\leq\max_{i=1,\ldots,m}(v_i-\theta)^2 \mathrm{E}\bigg[\bigg(\sum_{k=1}^{m}\xi_k\bigg)^2\bigg|\hat{V}_0 = v_1\bigg] \\[3pt]&= \max_{i=1,\ldots,m}(v_i-\theta)^2\bigg[\sum_{i=1}^{m}\sigma^2_i+\bigg(\sum_{i=1}^{m}\mu_i\bigg)^2\bigg] <+\infty.\end{align*}

Therefore, from Theorem 1, it follows that the process $\hat{Y}_\lambda$ defined as

\begin{equation*}\hat{Y}_\lambda(t) = \frac{1}{\sqrt{\lambda}}\sum_{k=1}^{\lfloor \lambda t\rfloor} (\hat{V}_{k-1}-\theta )\xi_k, \quad t \in [0,T],\end{equation*}

satisfies $\hat{Y}_\lambda \Rightarrow \gamma W$ on D[0, T], where, by (2.2) and keeping (5.3) in mind,

\begin{align*}\gamma^2 &= \left (\mathrm{E}[\tau_2-\tau_1]\right )^{-1}\mathrm{E}\bigg[\bigg(\sum_{k=\tau_{1}+1}^{\tau_{2}} \left (\hat{V}_{k-1}-\theta \right )\xi_k\bigg)^2\bigg] = \frac{1}{m}\mathrm{E}\bigg[\bigg(\sum_{k=1}^{\tau_{1}} \left (\hat{V}_{k-1}-\theta \right )\xi_k\bigg)^2\bigg|\hat{V}_0 = v_1\bigg]\\[2pt]&= \frac{1}{m}\sum_{i=1}^m\sigma^2_i\bigg(v_i-\frac{\sum_{i=1}^mv_i\mu_i}{\sum_{i=1}^m\mu_i}\bigg)^2.\end{align*}

Now, Theorem 7 implies that

\begin{equation*}\sup\nolimits_{t \in [0,T]}\biggl|\frac{N(\lambda t)}{\lambda}-\frac{tm}{\sum_{i=1}^m\mu_i}\biggr| \Rightarrow 0.\end{equation*}

Thus, we can use Lemma 1, together with Theorem 8, to conclude that

\begin{equation*}h\circ X_\lambda \Rightarrow \mu^{-1/2}\gamma W,\end{equation*}

on D[0, T], where $h\;:\;C[0,T]\to D[0,T]$ is the identity map. By using the same arguments as in the proof of Theorem 13, we obtain the result.
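As a sanity check of the variance constant in (5.2), one can simulate the centered partial sums $\hat{Y}_\lambda(1)$ appearing in the proof. The sketch below is purely illustrative (an $m=2$ example with uniform holding times, not taken from the paper): velocities $(1,-1)$ , holding times $U(0,2)$ in the first state and $U(0,1)$ in the second, for which (5.1) gives $\theta = 1/3$ and (5.2) gives $\gamma^2 = 4/27$ .

```python
import random

random.seed(2)

# Hypothetical two-state alternating example: velocities v = (1, -1),
# holding times xi ~ U(0, 2) in state 0 and xi ~ U(0, 1) in state 1.
v = [1.0, -1.0]
mu = [1.0, 0.5]           # means of the uniform holding times
sig2 = [4 / 12, 1 / 12]   # their variances
theta = sum(vi * mi for vi, mi in zip(v, mu)) / sum(mu)               # (5.1)
gamma2 = 0.5 * sum(s2 * (vi - theta) ** 2 for s2, vi in zip(sig2, v))  # (5.2)

# Monte Carlo for the centered partial sums n^{-1/2} sum (v_{k-1} - theta) xi_k.
n, paths = 1000, 2000
samples = []
for _ in range(paths):
    y = 0.0
    for k in range(n):
        state = k % 2                                  # deterministic alternation
        y += (v[state] - theta) * random.uniform(0, 2 - state)
    samples.append(y / n ** 0.5)

mean = sum(samples) / paths
var = sum((s - mean) ** 2 for s in samples) / (paths - 1)
# mean should be close to 0 and var close to gamma2 = 4/27 ≈ 0.148
```

Since the holding times are independent here, the variance of each centered partial sum equals $\gamma^2$ exactly for every even n, so the empirical variance matches (5.2) up to Monte Carlo error.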

Remark 3. Following Remark 1, we consider the alternative normalization:

\begin{align*}\bar{X}_\lambda(t) & = \lambda^{-1}\mathrm{E}[X(\lambda t)] + \lambda^{-1/2}(X(\lambda t)-\theta \lambda t)=\frac{1}{\lambda}\int_0^{\lambda t}\mathrm{E}[V(s)]\,\mathrm{d}s + \frac{1}{\sqrt{\lambda}}\int_0^{\lambda t}(V(s)-\theta)\,\mathrm{d}s\\&=\sum_{k=1}^m\int_0^t\sum_{r\geq 0}\mathrm{E} [\hat{V}_{k-1}\mathsf{1}_{\{rm+k-1\}}(N(\lambda s))]\,\mathrm{d}s+\sum_{k=1}^m (\hat{V}_{k-1}-\theta )\int_0^t\sum_{r\geq 0}\mathsf{1}_{\{rm+k-1\}}(N(\lambda s))\,\mathrm{d}s.\end{align*}

The previous theorem ensures that, for every $t \in [0,T]$ ,

\begin{equation*}\bar{X}_\lambda(t) -\lambda^{-1}\mathrm{E}[X(\lambda t)] = \frac{1}{\sqrt{\lambda}}\int_0^{\lambda t}(V(s)-\theta)\,\mathrm{d}s \Rightarrow \mu^{-1/2}\gamma W(t),\end{equation*}

which implies

\begin{equation*}\frac{X(\lambda t)}{\lambda}=\frac{1}{\lambda}\int_0^{\lambda t}V(s)\,\mathrm{d}s\Rightarrow \theta t.\end{equation*}

Then, by uniform integrability, we obtain

\begin{equation*}\lim_{\lambda \to +\infty}\frac{1}{\lambda}\mathrm{E}[X(\lambda t)] = \theta t;\end{equation*}

hence,

\begin{equation*}\bar{X}_\lambda \Rightarrow \text{i}\theta + \mu^{-1/2}\gamma W,\end{equation*}

where i is the identity function.

5.1. Application to the generalized telegraph process

Here we show an application of the previous results to the asymmetric telegraph process (see [Reference Beghin, Nieddu and Orsingher1, Reference Cinque5, Reference Stadje and Zack34]).

Let $(\hat{V},S)$ be an alternating renewal process where $\mathcal{V} = \{v_1,v_2\}$ , with $v_1,v_2 \in \mathbb{R}$ and $v_{2r+i} \, :\!= \, v_i$ for $r \in \mathbb{N}_0$ , $i=1,2$ . We assume that

(5.4) \begin{equation}Q_{v_iv_{j}}(t) = \mathsf{1}_{\{v_{i+1}\}}(v_j) \left (1- \mathrm{e}^{-\lambda_i t} \right )\!.\end{equation}

Hence, for every $k \in \mathbb{N}$ , $t\geq 0$ ,

\begin{align*}\mathrm{P} \{\hat{V}_k = v_i,S_k-S_{k-1}\leq t \big |\hat{V}_{k-1} = v_{i-1} \} & = \mathrm{P} \{S_k-S_{k-1}\leq t \big |\hat{V}_{k-1} = v_{i-1} \} \\[2pt]&= \begin{cases}1- \mathrm{e}^{-\lambda_1t}, &v_{i-1}=v_1\\[2pt]1- \mathrm{e}^{-\lambda_2t}, &v_{i-1}=v_2.\end{cases}\end{align*}

If we set $N(t) = \max\{k\in \mathbb{N}_0\,:\, S_k \leq t\}$ , then $N = (N(t))_{t\geq0}$ is the alternating Poisson process of parameters $\lambda_1,\lambda_2>0$ . By defining $\lambda_{2r+i} = \lambda_i$ for $r \in \mathbb{N}_0$ , $i=1,2$ , it can be proved that

\begin{align*}\mathrm{P} \{N(t) = n |\hat{V}_0 = v_i \} = \begin{cases}(\lambda_it)^{k}(\lambda_{i+1}t)^{k}\mathrm{e}^{-\lambda_i t}W_{k,k+1}\bigl(t(\lambda_i-\lambda_{i+1})\bigr), &n=2k+2,\\[2pt](\lambda_it)^{k+1}(\lambda_{i+1}t)^{k}\mathrm{e}^{-\lambda_i t}W_{k+1,k+1}\bigl(t(\lambda_i-\lambda_{i+1})\bigr),&n=2k+1,\end{cases}\end{align*}

for $k \in \mathbb{N}_0$ , where $W_{\alpha,\beta}(x) = (\Gamma(\alpha)\Gamma(\beta))^{-1}\int_{0}^1t^{\alpha-1}(1-t)^{\beta-1} \mathrm{e}^{-x t} \,\mathrm{d} t$ .

Let $V(t) = \hat{V}_{N(t)}$ ; from (5.4), it follows that V is a Markov process with generator

\begin{equation*}A=\begin{pmatrix}-\lambda_1 &\quad \lambda_1 \\[2pt]\lambda_2 &\quad -\lambda_2\end{pmatrix}.\end{equation*}

Then Kolmogorov’s forward equation can be used to obtain

\begin{align*}&\mathrm{P}\{V(t) = v|V(0) = v_i\} =\begin{cases}\frac{\lambda_{i+1}}{\lambda_i+\lambda_{i+1}}+\frac{\lambda_i}{\lambda_i+\lambda_{i+1}}\mathrm{e}^{-(\lambda_i+\lambda_{i+1})t}, &v=v_i,\\[3pt]\frac{\lambda_i}{\lambda_i+\lambda_{i+1}}-\frac{\lambda_i}{\lambda_i+\lambda_{i+1}}\mathrm{e}^{-(\lambda_i+\lambda_{i+1})t}, &v=v_{i+1}.\end{cases}\end{align*}

If $(p,1-p)$ is the initial distribution of V(0), then the law of V takes the form

\begin{align*}&\mathrm{P}\{V(t) = v\} =\begin{cases}\frac{\lambda_2}{\lambda_1+\lambda_2}+\frac{p\lambda_1-(1-p)\lambda_2}{\lambda_1+\lambda_2}\mathrm{e}^{-(\lambda_1+\lambda_2)t}, &v=v_1,\\[3pt]\frac{\lambda_1}{\lambda_1+\lambda_2}-\frac{p\lambda_1-(1-p)\lambda_2}{\lambda_1+\lambda_2}\mathrm{e}^{-(\lambda_1+\lambda_2)t}, &v=v_2,\end{cases}\end{align*}

from which we obtain

\begin{equation*}\mathrm{E}[V(t)] = \frac{v_1\lambda_2+v_2\lambda_1}{\lambda_1+\lambda_2}+\frac{(p\lambda_1-(1-p)\lambda_2)(v_1-v_2)}{\lambda_1+\lambda_2}\mathrm{e}^{-(\lambda_1+\lambda_2)t},\end{equation*}

and

\begin{equation*}\theta = \frac{\mathrm{E}_\pi [\xi_1\hat{V}_0 ]}{\mathrm{E}_\pi[\xi_1]} = \frac{v_1\lambda_2+v_2\lambda_1}{\lambda_1+\lambda_2}.\end{equation*}

Let X be the integral of V; then the density of X(t) satisfies the following hyperbolic equation:

(5.5) \begin{align}&\frac{\partial^2u}{\partial t^2} + (\lambda_1+\lambda_2)\frac{\partial u}{\partial t} +(v_1+v_2)\frac{\partial^2u}{\partial x\partial t} +(v_1\lambda_2+v_2\lambda_1)\frac{\partial u}{\partial x}= -v_1v_2\frac{\partial^2u}{\partial x^2}.\end{align}

In this case, X can be interpreted as the trajectory of a particle moving with constant velocities $v_1,v_2$ and subject to reversals of direction at the jump times of the alternating Poisson process of rates $\lambda_1,\lambda_2$ . Therefore, the mean lengths of the displacements are $1/\lambda_1$ and $1/\lambda_2$ , according to the current speed.

Then $\bar{X}_\lambda$ , the integral of V under the reparameterization of Remark 3, becomes

\begin{align*}\bar{X}_\lambda(t) &=\frac{(p\lambda_1-(1-p)\lambda_2)(v_1-v_2)}{\lambda(\lambda_1+\lambda_2)^2} \left (1-\mathrm{e}^{-(\lambda_1+\lambda_2)\lambda t} \right )\nonumber\\[3pt]& \quad +\int_0^{t}\sqrt{\lambda}\left (\!V(\lambda s)-\frac{v_1\lambda_2+v_2\lambda_1}{\lambda_1+\lambda_2}\left (1-\frac{1}{\sqrt{\lambda}}\right )\right ) \, \mathrm{d} s.\end{align*}

If we define the process $T_\lambda = (T_\lambda(t))_{t\geq0}$ by

\begin{equation*}T_\lambda(t) = \int_0^{t}\sqrt{\lambda}\left (V(\lambda s)-\frac{v_1\lambda_2+v_2\lambda_1}{\lambda_1+\lambda_2}\left (1-\frac{1}{\sqrt{\lambda}}\right )\right ) \, \mathrm{d} s,\end{equation*}

then $\bar{X}_\lambda -T_\lambda \Rightarrow 0$ , and so $\bar{X}_\lambda$ and $T_\lambda$ have the same weak limit. Now, $T_\lambda$ is an asymmetric telegraph process with rates $\lambda\lambda_1$ , $\lambda\lambda_2$ and velocities

\begin{align*}\sqrt{\lambda}(v_1-\theta)+\theta & = \frac{\sqrt{\lambda}\lambda_1(v_1-v_2)+v_1\lambda_2+v_2\lambda_1}{\lambda_1+\lambda_2}, \\[3pt]\sqrt{\lambda}(v_2-\theta)+\theta & = \frac{\sqrt{\lambda}\lambda_2(v_2-v_1)+v_1\lambda_2+v_2\lambda_1}{\lambda_1+\lambda_2}.\end{align*}

We note that, in the limit, the two velocities become of opposite signs. Thus, according to Remark 3, we can conclude that an asymmetric telegraph process $T_\lambda$ with those parameters converges weakly to Brownian motion with drift and scaling parameter respectively given by

\begin{equation*}\theta = \frac{v_1\lambda_2+v_2\lambda_1}{\lambda_1+\lambda_2}, \qquad \frac{\gamma^2}{\mu} = 2\frac{\lambda_1\lambda_2(v_1-v_2)^2}{(\lambda_1+\lambda_2)^{3}}.\end{equation*}

Moreover, from (5.5), it follows that the density of $T_\lambda$ is a solution of the following partial differential equation:

\begin{eqnarray*} & \dfrac{1}{\lambda(\lambda_1+\lambda_2)}\dfrac{\partial^2u}{\partial t^2} + \dfrac{\partial u}{\partial t} + \left (\dfrac{2(v_1\lambda_2+v_2\lambda_1)}{\lambda(\lambda_1+\lambda_2)^2} + \dfrac{(v_1-v_2)(\lambda_1-\lambda_2)}{\sqrt{\lambda}(\lambda_1+\lambda_2)^2}\right )\dfrac{\partial^2u}{\partial x\partial t} + \dfrac{v_1\lambda_2+v_2\lambda_1}{\lambda_1+\lambda_2}\dfrac{\partial u}{\partial x} \\[8pt]\nonumber & =\left (\dfrac{\lambda_1\lambda_2(v_1-v_2)^2}{(\lambda_1+\lambda_2)^{3}}-\dfrac{(v_1\lambda_2+v_2\lambda_1)(\lambda_1-\lambda_2)(v_1-v_2)}{\sqrt{\lambda}(\lambda_1+\lambda_2)^3}-\dfrac{(v_1\lambda_2+v_2\lambda_1)^{2}}{\lambda(\lambda_1+\lambda_2)^{3}}\right )\dfrac{\partial^2 u}{\partial x^2},\end{eqnarray*}

and, as $\lambda\to+\infty$ , we obtain

\begin{equation*}\frac{\partial u}{\partial t}+\frac{v_1\lambda_2+v_2\lambda_1}{\lambda_1+\lambda_2}\frac{\partial u}{\partial x}=\frac{\lambda_1\lambda_2(v_1-v_2)^2}{(\lambda_1+\lambda_2)^{3}}\frac{\partial^2u}{\partial x^2},\end{equation*}

which is the equation governing the law of Brownian motion with drift $\theta$ and scaling parameter $\mu^{-1/2}\gamma$ .
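These limiting parameters can be checked by simulation. The sketch below uses illustrative values (not taken from the paper) $v_1 = 2$ , $v_2 = -1$ , $\lambda_1 = 1$ , $\lambda_2 = 2$ , for which $\theta = 1$ and $\gamma^2/\mu = 4/3$ , together with a moderate scaling $\lambda = 400$ , and compares the empirical mean and variance of $T_\lambda(1)$ with the limiting drift and diffusion coefficient.

```python
import random

random.seed(3)

v1, v2, l1, l2 = 2.0, -1.0, 1.0, 2.0   # illustrative parameters
lam = 400.0                             # scaling parameter

theta = (v1 * l2 + v2 * l1) / (l1 + l2)                 # limiting drift
sigma2 = 2 * l1 * l2 * (v1 - v2) ** 2 / (l1 + l2) ** 3  # gamma^2 / mu

# Rescaled velocities and switching rates of the asymmetric telegraph T_lambda.
s = lam ** 0.5
c = [s * (v1 - theta) + theta, s * (v2 - theta) + theta]
rate = [lam * l1, lam * l2]

def simulate(t_end=1.0):
    """One path of T_lambda up to time t_end, started with velocity c[0]."""
    t, x, state = 0.0, 0.0, 0
    while t < t_end:
        hold = min(random.expovariate(rate[state]), t_end - t)
        x += c[state] * hold
        t += hold
        state = 1 - state
    return x

paths = [simulate() for _ in range(1500)]
mean = sum(paths) / len(paths)
var = sum((p - mean) ** 2 for p in paths) / (len(paths) - 1)
# mean should be close to theta = 1 and var close to sigma2 = 4/3 for large lam
```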

If $v_1=-v_2 = v_0$ and $\lambda_1 = \lambda_2 = \lambda_0$ , then N is a Poisson process of rate $\lambda_0$ , and $V(t) = V(0)(-1)^{N(t)}$ , where V(0) has distribution $(1-p,p)$ . It follows that

\begin{align*}&\mathrm{P}\{V(t) = v\} =\begin{cases}\frac{1}{2}+ \left (p-\frac{1}{2} \right ) \mathrm{e} ^{-2\lambda_0t}, &v=v_0,\\[3pt]\frac{1}{2}- \left (p-\frac{1}{2} \right ) \mathrm{e} ^{-2\lambda_0t}, &v=-v_0,\end{cases}\end{align*}

and so $\mathrm{E}[V(t)] = v_0(2p-1) \mathrm{e} ^{-2\lambda_0t}$ and $\theta = 0$ . The integral of $\sqrt{\lambda}V(\lambda t)$ , taking the form

\begin{equation*}T_\lambda(t) = \int_0^{t}\sqrt{\lambda}V(0)(-1)^{N(\lambda s)} \,\mathrm{d} s,\end{equation*}

is a telegraph process with modulus of velocity $\sqrt{\lambda}v_0$ and rate of change of direction $\lambda\lambda_0$ , and satisfies $T_\lambda \Rightarrow v_0\lambda_0^{-1/2}W$ . Furthermore, the density of $T_\lambda(t)$ is a solution of the telegraph equation

\begin{equation*}\frac{1}{2\lambda\lambda_0}\frac{\partial^2 u}{\partial t^2}+\frac{\partial u}{\partial t} = \frac{v_0^2}{2\lambda_0}\frac{\partial^2u}{\partial x^2},\end{equation*}

and, by letting $\lambda \to +\infty$ , we recover the heat equation

\begin{equation*}\frac{\partial u}{\partial t}=\frac{v_0^2}{2\lambda_0}\frac{\partial^2u}{\partial x^2}.\end{equation*}
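The classical convergence of the symmetric telegraph process to Brownian motion can likewise be illustrated numerically. The sketch below uses illustrative values $v_0 = 2$ , $\lambda_0 = 1$ (so the limiting variance per unit time is $v_0^2/\lambda_0 = 4$ ) and estimates the mean and variance of $T_\lambda(1)$ .

```python
import random

random.seed(4)

v0, lam0 = 2.0, 1.0   # illustrative velocity modulus and switching rate
lam = 100.0           # scaling parameter

def simulate(t_end=1.0):
    """One path of T_lambda(t) = int_0^t sqrt(lam) V(0) (-1)^{N(lam s)} ds."""
    t, x = 0.0, 0.0
    d = random.choice([-1.0, 1.0])   # V(0)/v0, symmetric initial distribution
    while t < t_end:
        hold = min(random.expovariate(lam * lam0), t_end - t)
        x += lam ** 0.5 * v0 * d * hold
        t += hold
        d = -d                        # reversal at each Poisson event
    return x

paths = [simulate() for _ in range(2000)]
mean = sum(paths) / len(paths)
var = sum((p - mean) ** 2 for p in paths) / (len(paths) - 1)
# mean should be close to 0 and var close to v0**2 / lam0 = 4 for large lam
```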

6. Concluding remarks

As shown in our work, the semi-Markov process inherits most of its properties from the embedded Markov chain. By examining Theorems 5 and 6, we see that the key condition ensuring the ergodicity and the mixing property of the sequence $\{f(\hat{V}_{k-1},\xi_k)\}_{k\geq 1}$ relies on an estimate of the distance between the transition probabilities and the invariant distribution of $\hat{V}$ . When the state space is finite, the rate of convergence is exponential. This also holds, for example, when the embedded Markov chain satisfies Doeblin’s condition (see [Reference Doob13, pp. 190 ff.] or [Reference Rosenthal32]), which is stated as follows. Let X be a Markov process on a state space $\mathcal{X}$ with transition kernel P. If we assume that there exists some probability measure $\zeta$ over $\mathcal{X}$ and $\beta>0$ , such that

\begin{align*}P(x,A)\geq \beta \zeta(A),\end{align*}

for all $x \in \mathcal{X}$ and for all measurable subsets $A\subseteq \mathcal{X}$ , then, writing $\mu_n$ for the distribution of $X_n$ under any initial distribution $\mu_0$ and $\pi$ for the stationary distribution, we have

\begin{equation*}\lVert \mu_n-\pi \rVert \leq (1-\beta)^n.\end{equation*}

Hence, under this condition, by suitably adapting the theory developed here, most of our results can be extended to semi-Markov processes with countable state space by following the same proof strategies. Moreover, our framework can be naturally generalized to the multidimensional case, that is, when V takes values in a state space of dimension $d>1$ . Indeed, Theorems 3 and 6 can be adapted to the setting where the function f maps $(\hat{V}_{k-1},\xi_k)$ into $\mathbb{R}^d$ . In particular, our theorems and their proofs can be exploited to study weak convergence in $C[0,+\infty)$ for multidimensional extensions of the telegraph process, known as planar random motions or multidimensional random evolutions (see [Reference Cinque and Cintoli7, Reference Kolesnik and Orsingher20, Reference Orsingher and Kolesnik29, Reference Orsingher, Garra and Zeifman30]).
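For a finite state space, Doeblin’s condition can be illustrated concretely: when the transition matrix has strictly positive entries, one may take $\beta$ as the sum of the column minima and $\zeta$ proportional to those minima. The sketch below (the $3\times 3$ matrix is an arbitrary illustration; total variation is measured as half the $\ell_1$ distance, for which the coupling bound $\lVert\mu_n-\pi\rVert\leq(1-\beta)^n$ holds) checks the geometric decay numerically.

```python
# Hypothetical 3-state chain with strictly positive entries,
# so Doeblin's condition holds with the minorization below.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]
m = 3

# Minorization P(x, {j}) >= beta * zeta({j}) with beta the sum of
# column minima and zeta proportional to those minima.
col_min = [min(P[i][j] for i in range(m)) for j in range(m)]
beta = sum(col_min)

def step(mu):
    """One step of the chain acting on a distribution (row vector times P)."""
    return [sum(mu[i] * P[i][j] for i in range(m)) for j in range(m)]

# Stationary distribution by iterating the chain to numerical convergence.
pi = [1.0, 0.0, 0.0]
for _ in range(2000):
    pi = step(pi)

def tv(mu, nu):
    """Total variation distance, as half the l1 distance."""
    return 0.5 * sum(abs(a - b) for a, b in zip(mu, nu))

mu = [0.0, 0.0, 1.0]  # arbitrary initial distribution
tvs = []
for n in range(1, 11):
    mu = step(mu)
    tvs.append(tv(mu, pi))
# each tvs[n-1] is bounded by (1 - beta) ** n, as Doeblin's condition predicts
```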

Funding information

No funding bodies supported the creation of this paper.

Competing interests

There are no competing interests to declare that arose during the preparation or publication of this paper.

References

Beghin, L., Nieddu, L. and Orsingher, E. (2001). Probabilistic analysis of the telegrapher’s process with drift by means of relativistic transformations. J. Appl. Math. Stochastic Anal. 14, 11–25.
Billingsley, P. (1995). Probability and Measure, 3rd edn. John Wiley, New York.
Billingsley, P. (1999). Convergence of Probability Measures, 2nd edn. John Wiley, New York.
Çinlar, E. (1975). Introduction to Stochastic Processes. Prentice-Hall, Englewood Cliffs.
Cinque, F. (2022). A note on the conditional probabilities of the telegraph process. Stat. Probab. Lett. 185, 109431.
Cinque, F. (2023). Reflection principle for finite-velocity random motions. J. Appl. Probab. 60, 479–492.
Cinque, F. and Cintoli, M. (2024). Multidimensional random motions with a natural number of finite velocities. Adv. Appl. Probab. 56, 1033–1063.
Cinque, F. and Orsingher, E. (2021). On the exact distribution of the maximum of the asymmetric telegraph process. Stochastic Processes Appl. 142, 601–633.
De Bruyne, B., Majumdar, S. N. and Schehr, G. (2021). Survival probability of a run-and-tumble particle in the presence of a drift. J. Stat. Mech. 4, 0432113.
De Gregorio, A. (2010). Stochastic velocity motions and processes with random time. Adv. Appl. Probab. 42, 1028–1056.
Di Crescenzo, A. (2001). On random motions with velocities alternating at Erlang-distributed random times. Adv. Appl. Probab. 33, 690–701.
Di Crescenzo, A. and Pellerey, F. (2002). On prices’ evolutions based on geometric telegrapher’s process. Appl. Stochastic Models Bus. Ind. 18, 171–184.
Doob, J. L. (1953). Stochastic Processes. John Wiley, New York.
Ethier, S. N. and Kurtz, T. G. (1986). Markov Processes: Characterization and Convergence. John Wiley, New York.
Ghosh, A. P., Rastegar, R. and Roitershtein, A. (2014). On a directionally reinforced random walk. Proc. Am. Math. Soc. 142, 3269–3283.
Glynn, P. W. and Haas, P. J. (2004). On functional central limit theorems for semi-Markov and related processes. Commun. Stat.—Theory Methods 33, 487–506.
Holmes, E. E., Lewis, M. A., Banks, J. E. and Veit, R. R. (1994). Partial differential equations in ecology: spatial interactions and population dynamics. Ecology 75, 17–29.
Horvát, L. and Shao, Q. (1998). Limit distributions of directionally reinforced random walks. Adv. Math. 134, 367–383.
Khorshidian, K. (2009). Central limit theorem for nonlinear semi-Markov reward processes. Stochastic Anal. Appl. 27, 656–670.
Kolesnik, A. D. and Orsingher, E. (2001). Analysis of finite velocity planar random motion with reflection. Theory Probab. Appl. 46, 132–140.
Kolesnik, A. D. and Ratanov, N. (2022). Telegraph Processes and Option Pricing, 2nd edn. Springer, Heidelberg.
Logachov, A., Mogulskii, A., Prokopenko, E. and Yambartsev, A. (2021). Local theorems for (multidimensional) additive functionals of semi-Markov chains. Stochastic Processes Appl. 137, 149–166.
Lopez, O. and Ratanov, N. (2014). On the asymmetric telegraph processes. J. Appl. Probab. 51, 569–589.
Mauldin, R. D., Monticino, M. and von Weizsäcker, H. (1996). Directionally reinforced random walks. Adv. Math. 117, 239–252.
Meerschaert, M. and Skarta, P. (2014). Semi-Markov approach to continuous time random walk limit processes. Ann. Probab. 42, 1699–1723.
Mertens, K., Angelani, L., Di Leonardo, R. and Bocquet, L. (2012). Probability distributions for the run-and-tumble bacterial dynamics: An analogy to the Lorentz model. Eur. Phys. J. 35, 84.
Oprisan, A. (2026). Large deviation principle for additive functionals of semi-Markov processes. Stochastic Anal. Appl. 41, 257–275.
Orsingher, E. (1990). Probability law, flow function, maximum distribution of wave-governed random motions and their connections with Kirchoff’s laws. Stochastic Processes Appl. 34, 49–66.
Orsingher, E. and Kolesnik, A. D. (1990). Exact distribution for a planar random motion model controlled by a fourth-order hyperbolic equation. Theory Probab. Appl. 41, 379–386.
Orsingher, E., Garra, R. and Zeifman, A. I. (2020). Cyclic random motions with orthogonal directions. Markov Processes Relat. Fields 26, 381–402.
Pedicone, A. and Orsingher, E. (2026). On the distribution of the telegraph meander and its properties. Stochastic Processes Appl. 195, 104887.
Rosenthal, J. S. (1995). Convergence rates for Markov chains. SIAM Rev. 37, 387–405.
Sigman, K. and Wolff, R. W. (1993). A review of regenerative processes. SIAM Rev. 35, 269–288.
Stadje, W. and Zack, S. (2004). Telegraph processes with random velocities. J. Appl. Probab. 41, 665–678.
Walters, P. (1982). An Introduction to Ergodic Theory. Springer, Berlin.
Zauderer, E. (1989). Partial Differential Equations of Applied Mathematics. John Wiley, New York.