
Sandwiched SDEs with unbounded drift driven by Hölder noises

Published online by Cambridge University Press:  08 March 2023

Giulia Di Nunno*
Affiliation:
University of Oslo and NHH Norwegian School of Economics
Yuliya Mishura*
Affiliation:
Taras Shevchenko National University of Kyiv
Anton Yurchenko-Tytarenko*
Affiliation:
University of Oslo
*Postal address: Department of Mathematics, University of Oslo, Moltke Moes vei 35, 0851 Oslo, Norway. Email address: giulian@math.uio.no
**Postal address: Department of Probability, Statistics and Actuarial Mathematics, Taras Shevchenko National University of Kyiv, Volodymyrska St. 64/13, Kyiv 01601, Ukraine. Email address: yuliyamishura@knu.ua
***Postal address: Department of Mathematics, University of Oslo, Moltke Moes vei 35, 0851 Oslo, Norway. Email address: antony@math.uio.no

Abstract

We study a stochastic differential equation with an unbounded drift and general Hölder continuous noise of order $\lambda \in (0,1)$. The corresponding equation turns out to have a unique solution that, depending on the particular shape of the drift, either stays above some continuous function or has continuous upper and lower bounds. Under some mild assumptions on the noise, we prove that the solution has moments of all orders. In addition, we establish its connection to the solution of some Skorokhod reflection problem. As an illustration of our results and motivation for applications, we also suggest two stochastic volatility models which we regard as generalizations of the CIR and CEV processes. We complete the study by providing a numerical scheme for the solution.

© The Author(s), 2023. Published by Cambridge University Press on behalf of Applied Probability Trust

Introduction

Stochastic differential equations (SDEs) whose solutions take values in a given bounded domain are widely applied in several fields. Just as an illustration, we can consider the Tsallis–Stariolo–Borland (TSB) model employed in biophysics, defined as

(0.1) \begin{equation} dY_1(t) = - \frac{\theta Y_1(t)}{1-Y^2_1(t)}dt + \sigma dW(t), \quad \theta>0,\,\sigma>0,\end{equation}

with W being a standard Wiener process. If $\frac{\sigma^2}{\theta} \in (0,1]$, the TSB process is ‘sandwiched’ between $-1$ and 1 (for more details, see e.g. [16, Subsection 2.3] or [17, Chapters 3 and 8]). Another example is the Cox–Ingersoll–Ross (CIR) process [12, 13, 14], defined via an SDE of the form

\begin{equation*} dX(t) = (\theta_1 - \theta_2 X(t))dt + \sigma \sqrt{X(t)}dW(t), \quad \theta_1,\,\theta_2,\,\sigma \gt0.\end{equation*}

Under the so-called Feller condition $2\theta_1 \ge \sigma^2$, the CIR process is bounded below (more precisely, is positive) almost surely (a.s.), which justifies its popularity in the modeling of interest rates and stochastic volatility in finance. Moreover, by [29, Theorem 2.3], the square root $Y_2(t) \,:\!=\, \sqrt{X(t)}$ of the CIR process satisfies an SDE of the form

(0.2) \begin{equation} dY_2(t) = \frac{1}{2}\left(\frac{\theta_1 - \sigma^2/4}{Y_2(t)} - \theta_2Y_2(t)\right)dt + \frac{\sigma}{2}dW(t),\end{equation}

and the SDEs (0.1) and (0.2) both have an additive noise term and an unbounded drift with points of singularity at the bounds ($\pm 1$ for the TSB process and 0 for the CIR process); these singularities have a ‘repelling’ action, so that the corresponding processes never cross or even touch the bounds.
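To see this repelling effect concretely, the following is a minimal explicit Euler sketch of (0.2); the parameter values are our illustrative choices (satisfying the Feller condition), and this is not the scheme analyzed in Appendix A.

```python
import numpy as np

# Explicit Euler sketch for the square-root CIR equation (0.2); a toy
# illustration.  The parameters are ours and satisfy the Feller condition
# 2*theta1 >= sigma**2, so the singular drift term (theta1 - sigma^2/4)/(2y)
# repels the path from the bound 0.
rng = np.random.default_rng(0)
theta1, theta2, sigma, T, n = 1.0, 1.0, 0.5, 1.0, 10_000
dt = T / n
y = np.empty(n + 1)
y[0] = 1.0
for k in range(n):
    drift = 0.5 * ((theta1 - sigma**2 / 4) / y[k] - theta2 * y[k])
    y[k + 1] = y[k] + drift * dt + 0.5 * sigma * np.sqrt(dt) * rng.standard_normal()
    y[k + 1] = max(y[k + 1], 1e-6)  # crude guard: explicit Euler may overshoot 0
print(y.min())  # stays well above 0 for these parameters
```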

The goal of this paper is to study a family of SDEs of a type similar to (0.1) and (0.2), namely

(0.3) \begin{equation} Y(t) = Y(0) + \int_0^t b(s, Y(s)) ds + Z(t), \quad t\in[0,T],\end{equation}

where the drift b is unbounded. We consider separately two cases:

  (A) In the first case, b is a real function defined on the set $\{(t,y)\in[0,T]\times\mathbb R\,|\,y \gt \varphi(t)\}$ such that b(t, y) has an explosive growth of the type $(y - \varphi(t))^{-\gamma}$ as $y \downarrow \varphi(t)$ , where $\varphi$ is a given Hölder continuous function and $\gamma \gt 0$ . We will see that the process Y satisfying (0.3) is bounded below by $\varphi$ , i.e.

    (0.4) \begin{equation} Y(t) \gt \varphi(t), \quad a.s., \quad t\in[0,T], \end{equation}
    which we call a one-sided sandwich.
  (B) In the second case, b is a real function defined on the set $\{(t,y)\in[0,T]\times\mathbb R\,|\,\varphi(t) \lt y \lt \psi(t)\}$ such that b(t, y) has an explosive growth of the type $(y - \varphi(t))^{-\gamma}$ as $y \downarrow \varphi(t)$ and an explosive decrease of the type $-(\psi(t) - y)^{-\gamma}$ as $y \uparrow \psi(t)$ , where $\varphi$ and $\psi$ are given Hölder continuous functions such that $\varphi(t) \lt \psi(t)$ , $t\in[0,T]$ , and $\gamma \gt 0$ . We will see that in this case the solution to (0.3) turns out to be sandwiched, namely

    (0.5) \begin{equation} \varphi(t) \lt Y(t) \lt \psi(t) \quad a.s., \quad t\in[0,T], \end{equation}
    i.e. it is a two-sided sandwich.

The noise term Z in (0.3) is an arbitrary $\lambda$ -Hölder continuous noise, $\lambda\in(0,1)$ . Our main motivation to consider Z from such a general class, instead of the classical Wiener process, lies in the desire to go beyond Markovianity and include memory in the dynamics (0.3) via the noise term. It should be noted that the presence of memory is a commonly observed empirical phenomenon (in this regard, we refer the reader to [6, Chapter 1], where examples of datasets with long memory are collected, and to [32] for more details on stochastic processes with long memory). The particular application which we have in mind throughout this paper comes from finance, where the presence of market memory is well known and has been extensively studied (see e.g. [3, 15, 36] or [35] for a detailed historical overview of the subject). Processes with memory in the noise have been used as stochastic volatilities, allowing for the inclusion of empirically detected features such as volatility smiles and skews in long-term options [10]; see also [8, 9] for more details on long-memory models and [21] for short memory coming from the microstructure of the market. Some studies (see e.g. [1]) indicate that the roughness of the volatility changes over time, which justifies the choice of multifractional Brownian motion [4] or even general Gaussian Volterra processes [25] as drivers. Separately we mention the series of papers [26, 27, 28], which study an SDE of the type (0.2) with memory introduced via a fractional Brownian motion with $H > \frac{1}{2}$:

(0.6) \begin{equation}dY(t) = \left(\frac{\theta_1}{Y(t)} - \theta_2 Y(t)\right) dt + \sigma dB^H(t), \quad \theta_1,\,\theta_2,\,\sigma \gt0,\,t\in[0,T].\end{equation}

Our model (0.3) can thus be regarded as a generalization of (0.6) accommodating a highly flexible choice of noise to deal with the problems of local roughness mentioned above.
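For simulation purposes, one admissible driver Z is fractional Brownian motion sampled via a Cholesky factorization of its covariance; a brief sketch (the grid, Hurst index, and seed are our choices):

```python
import numpy as np

# Sampling an admissible noise: fractional Brownian motion B^H on a grid,
# via Cholesky factorization of its covariance
# R(s,t) = (s^{2H} + t^{2H} - |t-s|^{2H}) / 2.
def fbm_path(n: int, T: float, H: float, rng) -> np.ndarray:
    t = np.linspace(T / n, T, n)                     # grid without t = 0
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # jitter for stability
    return np.concatenate([[0.0], L @ rng.standard_normal(n)])

Z = fbm_path(500, 1.0, H=0.7, rng=np.random.default_rng(1))
```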

In this paper, we first consider the existence and uniqueness of a solution to (0.3), then focus on the moments of both positive and negative orders. It should be stressed that the inverse moments are crucial for e.g. numerical simulation of (0.3), since it is necessary to control the explosive growth of the drift near the bounds. We recognize that similar problems concerning equations of the type (0.3) with lower bound $\varphi \equiv 0$ and the noise Z being a fractional Brownian motion with $H>\frac{1}{2}$ were addressed in [23]. There, the authors used pathwise arguments to prove the existence and uniqueness of the solution, whereas a Malliavin-calculus-based method was applied to obtain finiteness of the inverse moments. Despite its elegance, the latter technique requires the noise to be Gaussian and, moreover, is unable to ensure the finiteness of the inverse moments on the entire time interval [0, T]. These disadvantages of the Malliavin method resulted in restrictive conditions involving all parameters of the model and T in the numerical schemes in e.g. [22, Theorem 4.2] and [38, Theorem 4.1].

The approach we take is to use pathwise calculus together with stopping-time arguments for the inverse moments as well. This allows us, on the one hand, to choose from a much broader family of noises well beyond the Gaussian framework and, on the other hand, to prove the existence of the inverse moments of the solution on the entire interval [0, T]. The corresponding inverse moment bounds are presented in Theorems 2.4 and 4.2.

In addition, we establish a connection of a certain class of sandwiched processes to Skorokhod’s notion of reflected processes (see e.g. [33, 34] for more details). Note that (0.4) contains a strict inequality, i.e. the one-sided sandwich Y does not reflect from the boundary $\varphi$ . However, as $\varepsilon \to 0$ , the process $Y_\varepsilon$ of the form

\begin{equation*} Y_\varepsilon(t) = Y(0) + \int_0^t \frac{\varepsilon}{(Y_\varepsilon(s) - \varphi(s))^\gamma}ds - \int_0^t \alpha(s, Y_\varepsilon(s))ds + Z(t),\end{equation*}

with $\alpha$ : $[0,T]\times \mathbb R \to \mathbb R$ being a Lipschitz function, converges to the solution of a certain Skorokhod reflection problem, with $\int_0^t \frac{\varepsilon}{(Y_\varepsilon(s) - \varphi(s))^\gamma}ds$ converging to the corresponding regulator. This result substantially expands and generalizes [29], where a similar result was obtained specifically for processes of the form (0.6).

This paper is organized as follows. In Section 1, the general framework is described and the main assumptions are listed. Furthermore, some examples of possible noises Z are provided (including continuous martingales and Gaussian Volterra processes). In Section 2, we provide the existence and uniqueness of the solution to (0.3) in the one-sided sandwich case, and we derive upper and lower bounds for the solution in terms of the noise and study the finiteness of $\mathbb E\left[ \sup_{t\in[0,T]} |Y(t)|^r\right]$ and $\mathbb E\left[ \sup_{t\in[0,T]} (Y(t) - \varphi(t))^{-r}\right]$ , $r\ge 1$ . In Section 3, we establish a connection between one-sided sandwiched processes and Skorokhod’s reflected processes. Section 4 is devoted to studying the two-sided sandwich case (0.5): existence, uniqueness, and properties of the solution are provided. Our approach is readily applied to introduce the generalized CIR and CEV processes (see [2, 11]) in Section 5. Finally, to illustrate our results, we provide simulations in Section 6. Details on the simulation algorithm are given in Appendix A.

1. Preliminaries and assumptions

In this section, we present the framework for the noise Z and the drift functional b from Equation (0.3), and we provide some auxiliary results that will be required later.

We start with the noise term Z in (0.3).

Assumption 1.1. $Z = \{Z(t),\,t\in[0,T]\}$ is a stochastic process such that

  (Z1) $Z(0) = 0$ a.s.;

  (Z2) Z has Hölder continuous paths of order $\lambda\in(0,1)$ , i.e. there exists a random variable $\Lambda = \Lambda_{\lambda}(\omega) \gt0$ such that

    (1.1) \begin{equation} |Z(t) - Z(s)| \le \Lambda |t-s|^{\lambda}, \quad t,s \in [0,T]. \end{equation}

Note that we do not require any particular assumptions on the distribution of the noise (e.g. Gaussianity), but for some results we will need the random variable $\Lambda$ from (1.1) to have moments of sufficiently high orders. In what follows, we list several examples of admissible noises and properties of the corresponding random variable $\Lambda$ .

Example 1.1. (Hölder continuous Gaussian processes.) Let $Z = \{Z(t),\,t\ge 0\}$ be a centered Gaussian process with $Z(0) = 0$ , and let $H\in(0,1)$ be a given constant. Then, by [5], Z has a modification with Hölder continuous paths of any order $\lambda\in(0,H)$ if and only if for any $\lambda\in(0,H)$ there exists a constant $C_{\lambda} \gt 0$ such that

(1.2) \begin{equation} \left( \mathbb E |Z(t) - Z(s)|^{2} \right)^{\frac{1}{2}} \le C_{\lambda}|t-s|^{\lambda}, \quad s,t\in[0,T]. \end{equation}

Furthermore, according to [5, Corollary 3], the class of all Gaussian processes on [0, T], $T\in(0,\infty)$ , with Hölder modifications of any order $\lambda\in(0,H)$ consists exclusively of Gaussian Fredholm processes

\begin{equation*} Z(t) = \int_0^T \mathcal K(t,s) dB(s), \quad t\in[0,T], \end{equation*}

with $B = \{B(t),\,t\in[0,T]\}$ being some Brownian motion and $\mathcal K \in L^2([0,T]^2)$ satisfying, for all $\lambda\in(0,H)$ ,

\begin{equation*} \int_0^T |\mathcal K(t,u) - \mathcal K(s,u)|^2 du \le C_\lambda |t-s|^{2\lambda}, \quad s,t\in [0,T], \end{equation*}

where $C_\lambda \gt 0$ is some constant depending on $\lambda$ .

Finally, using Lemma 1.1, one can prove that the corresponding random variable $\Lambda$ can be chosen to have moments of all positive orders. Namely, assume that $\lambda \in (0,H)$ and take $p\ge 1$ such that $\frac{1}{p} \lt H - \lambda$ . If we take

(1.3) \begin{equation} \Lambda = A_{\lambda + \frac{1}{p}, p} \left(\int_0^T \int_0^T \frac{|Z(x) - Z(y)|^p}{|x-y|^{\lambda p + 2}} dx dy\right)^{\frac{1}{p}}, \end{equation}

then for any $r\ge 1$ ,

\begin{equation*} \mathbb E \Lambda^r \lt \infty, \end{equation*}

and for all $s,t\in[0,T]$ ,

\begin{equation*} |Z(t) - Z(s)| \le \Lambda |t-s|^\lambda; \end{equation*}

see e.g. [31, Lemma 7.4] for fractional Brownian motion or [5, Theorem 1] for the general Gaussian case.
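Numerically, the random variable $\Lambda$ from (1.3) can be approximated for a sampled path by discretizing the double integral; a crude proxy (the grid and the diagonal cutoff are our assumptions):

```python
import numpy as np

# Crude numerical proxy for Lambda in (1.3): discretize the double integral
# over a path Z sampled on a uniform grid.  The off-diagonal cutoff replaces
# the (integrable) singularity at x = y.  Recall that p should satisfy
# 1/p < H - lam.
def holder_constant(Z: np.ndarray, T: float, lam: float, p: float) -> float:
    n = len(Z) - 1
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    x, y = np.meshgrid(t, t)
    diff = np.abs(Z[:, None] - Z[None, :])
    dist = np.abs(x - y)
    mask = dist > dt / 2                        # skip the diagonal x = y
    integral = np.sum(diff[mask]**p / dist[mask]**(lam * p + 2)) * dt * dt
    alpha = lam + 1.0 / p
    A = 2**(3 + 2 / p) * (alpha * p + 1) / (alpha * p - 1)  # constant (1.4)
    return A * integral**(1.0 / p)
```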

In particular, the condition (1.2) presented in Example 1.1 is satisfied by the following stochastic process.

Example 1.2. The fractional Brownian motion $B^H = \{B^H(t),\,t\ge0\}$ with $H\in(0,1)$ (see e.g. [30]) satisfies (1.2), since

\begin{equation*} \left(\mathbb E |B^H(t) - B^H(s)|^2 \right)^{\frac{1}{2}} = |t - s|^{H} \le T^{H-\lambda} |t-s|^{\lambda}; \end{equation*}

i.e. $B^H$ has a modification with Hölder continuous paths of any order $\lambda\in(0,H)$ .

In order to proceed to the next example, we first need to introduce a corollary of the well-known Garsia–Rodemich–Rumsey inequality (see [20] for more details).

Lemma 1.1. Let f: $[0,T]\to\mathbb R$ be a continuous function, $p \ge 1$ , and $\alpha \gt \frac{1}{p}$ . Then for all $t,s \in [0,T]$ one has

\begin{equation*} |f(t) - f(s)| \le A_{\alpha,p} |t-s|^{\alpha - \frac{1}{p}} \left(\int_0^T \int_0^T \frac{|f(x) - f(y)|^p}{|x-y|^{\alpha p + 1}} dx dy\right)^{\frac{1}{p}}, \end{equation*}

with the convention $0/0 = 0$ , where

(1.4) \begin{equation} A_{\alpha, p} = 2^{3 + \frac{2}{p}}\left( \frac{\alpha p + 1}{\alpha p - 1} \right). \end{equation}

Proof. The proof follows easily from [20, Lemma 1.1] by taking, in the notation of [20], $\Psi(u)\,:\!=\, |u|^\beta$ and $p(u)\,:\!=\, |u|^{\alpha+\frac{1}{\beta}}$ with $\beta = p \ge 1$ as in our statement.

Example 1.3. (Non-Gaussian continuous martingales.) Denote by $B = \{B(t),\,t\in[0,T]\}$ a standard Brownian motion and $\sigma = \{\sigma(t),\,t\in[0,T]\}$ an Itô integrable process such that, for all $\beta \gt 0$ ,

(1.5) \begin{equation} \sup_{u\in[0,T]} \mathbb E \sigma^{2 + 2\beta}(u) \lt \infty. \end{equation}

Define

\begin{equation*} Z(t) \,:\!=\, \int_0^t \sigma(u) dB(u), \quad t\in [0,T]. \end{equation*}

Then, by the Burkholder–Davis–Gundy inequality, for any $0 \le s\lt t \le T$ and any $\beta \gt 0$ ,

(1.6) \begin{equation} \begin{aligned} \mathbb E |Z(t) - Z(s)|^{2+2\beta} &\le C_\beta \mathbb E\left[\left(\int_s^t \sigma^2(u) du\right)^{1+\beta}\right] \le C_\beta (t-s)^{\beta} \int_s^t \mathbb E \sigma^{2 + 2\beta}(u) du \\ &\le C_\beta \sup_{u\in[0,T]} \mathbb E \sigma^{2 + 2\beta}(u) (t-s)^{1 + \beta}. \end{aligned} \end{equation}

Therefore, by the Kolmogorov continuity theorem and the arbitrariness of $\beta$ , Z has a modification with Hölder continuous paths of any order $\lambda \in \left(0, \frac{1}{2}\right)$ .

Next, for an arbitrary $\lambda \in \left(0, \frac{1}{2}\right)$ , choose $p\ge 1$ such that $\lambda + \frac{1}{p} \lt \frac{1}{2}$ and put

\begin{equation*} \Lambda \,:\!=\, A_{\lambda + \frac{1}{p}, p} \left(\int_0^T \int_0^T \frac{|Z(x) - Z(y)|^p}{|x-y|^{\lambda p + 2}} dx dy\right)^{\frac{1}{p}}, \end{equation*}

where $A_{\lambda + \frac{1}{p}, p}$ is defined by (1.4). By the Burkholder–Davis–Gundy inequality, for any $r>p$ , we obtain

\begin{equation*} \mathbb E |Z(t) - Z(s)|^{r} \le |t-s|^{\frac{r}{2}} C_r \sup_{u\in[0,T]} \mathbb E \sigma^{r}(u) , \quad s,t\in[0,T]. \end{equation*}

Hence, using Lemma 1.1 and the Minkowski integral inequality, we have

\begin{align*} \left( \mathbb E \Lambda^r \right)^{\frac{p}{r}} &= A^p_{\lambda + \frac{1}{p}, p} \left(\mathbb E\left[ \left( \int_0^T \int_0^T \frac{|Z(u) - Z(v)|^p}{|u-v|^{\lambda p + 2}} du dv \right)^{\frac{r}{p}}\right]\right)^{\frac{p}{r}} \\ &\le A^p_{\lambda + \frac{1}{p}, p}\int_0^T \int_0^T \frac{\left(\mathbb E [|Z(u) - Z(v)|^r]\right)^{ \frac{p}{r} }}{|u-v|^{\lambda p + 2}} du dv \\ &\le A^p_{\lambda + \frac{1}{p}, p} C_r^{\frac p r} \left(\sup_{u\in[0,T]} \mathbb E \sigma^{r}(u) \right)^{\frac p r} \int_0^T \int_0^T |u-v|^{\frac{p}{2}-\lambda p - 2} du dv \lt \infty, \end{align*}

since $\frac{p}{2}-\lambda p - 2 \gt -1$ ; i.e. $\mathbb E\Lambda^r \lt \infty$ for all $r>0$ . Note that the condition (1.5) can actually be relaxed (see e.g. [7, Lemma 14.2]).
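Such a martingale noise is straightforward to simulate; a sketch with the bounded adapted volatility $\sigma(u) = 2 + \sin(B(u))$ (our choice, which clearly satisfies (1.5)):

```python
import numpy as np

# Example 1.3 in code: Z(t) = int_0^t sigma(u) dB(u) with the bounded
# adapted volatility sigma(u) = 2 + sin(B(u)).  Left-point Euler
# discretization of the Ito integral.
rng = np.random.default_rng(3)
T, n = 1.0, 10_000
dt = T / n
dB = np.sqrt(dt) * rng.standard_normal(n)
B = np.concatenate([[0.0], np.cumsum(dB)])
Z = np.concatenate([[0.0], np.cumsum((2.0 + np.sin(B[:-1])) * dB)])
```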

Next, let us proceed to the drift b and initial value Y(0). Let $\varphi$ : $[0,T] \to \mathbb R$ be a $\lambda$ -Hölder continuous function, where $\lambda\in(0,1)$ is the same as in Assumption (Z2), i.e. there exists a constant $K = K_{\lambda}$ such that

\begin{equation*} |\varphi(t) - \varphi(s)| \le K |t-s|^{\lambda}, \quad t,s\in[0,T],\end{equation*}

and for an arbitrary $a_1 \in \mathbb R$ , define

(1.7) \begin{equation} \mathcal D_{a_1} \,:\!=\, \{(t,y)\,|\,t\in[0,T], y\in (\varphi(t)+ a_1, \infty)\}.\end{equation}

Assumption 1.2. The initial value $Y(0) \gt \varphi(0)$ is deterministic, and the drift b satisfies the following assumptions:

  (A1) b: $\mathcal D_{0} \to \mathbb R$ is continuous;

  (A2) for any $\varepsilon>0$ there is a constant $c_{\varepsilon} \gt 0$ such that for any $(t,y_1), (t, y_2) \in \mathcal D_{\varepsilon}$ ,

    \begin{equation*} |b(t,y_1) - b(t, y_2)| \le c_{\varepsilon} |y_1 - y_2|; \end{equation*}
  (A3) there are positive constants $y_*$ , c, and $\gamma$ such that for all $(t,y) \in \mathcal D_0 \setminus \mathcal D_{y_*}$ ,

    \begin{equation*} b(t,y) \ge \frac{c}{\left(y- \varphi(t)\right)^\gamma}; \end{equation*}
  (A4) the constant $\gamma$ from Assumption (A3) satisfies the condition

    \begin{equation*} \gamma \gt \frac{1-\lambda}{\lambda}, \end{equation*}
    with $\lambda$ being the order of Hölder continuity of $\varphi$ and paths of Z.

Example 1.4. Let $\alpha_1$ : $[0,T] \to (0,\infty)$ be an arbitrary continuous function, and let $\alpha_2$ : $\mathcal D_0 \to \mathbb R$ be such that

\begin{equation*} |\alpha_2(t,y_1) - \alpha_2(t,y_2)| \le C|y_1-y_2|, \quad (t,y_1),(t,y_2)\in \mathcal D_0, \end{equation*}

for some constant $C>0$ . Then

\begin{equation*} b(t,y) \,:\!=\, \frac{\alpha_1(t)}{(y - \varphi(t))^\gamma} - \alpha_2(t, y), \quad (t,y) \in \mathcal D_0, \end{equation*}

satisfies Assumptions (A1)–(A4) (provided that $\gamma \gt \frac{1 - \lambda}{\lambda}$ ).
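In code, the drift family of Example 1.4 can be expressed as a function factory; a minimal sketch (the callables alpha1, alpha2, phi and the parameter values are illustrative placeholders):

```python
# The drift family of Example 1.4 as a function factory.
def make_drift(alpha1, alpha2, phi, gamma: float):
    def b(t: float, y: float) -> float:
        # defined on D_0 only, i.e. for y > phi(t)
        return alpha1(t) / (y - phi(t))**gamma - alpha2(t, y)
    return b

# e.g. constant alpha1, linear alpha2, zero lower boundary:
b = make_drift(lambda t: 1.0, lambda t, y: 0.5 * y, lambda t: 0.0, gamma=3.0)
```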

We conclude this section with a simple yet useful comparison-type result that will be required in what follows.

Lemma 1.2. Assume that the continuous processes $\{X_1 (t),\,t\ge 0\}$ and $\{X_2(t),\,t\ge 0\}$ satisfy a.s. equations of the form

\begin{equation*} X_i(t) = X(0) + \int_0^t f_i (s, X_i(s)) ds + Z(t), \quad t\ge 0, \quad i = 1,2, \end{equation*}

where X(0) is a constant and $f_1$ , $f_2$ : $[0,\infty)\times \mathbb R \to \mathbb R$ are continuous functions such that for any $(t,x) \in [0,\infty)\times \mathbb R$ ,

\begin{equation*} f_1 (t, x) \lt f_2 (t,x). \end{equation*}

Then $X_1(t) \lt X_2(t)$ a.s. for any $t \gt 0$ .

Proof. The proof is straightforward. Define

\begin{equation*}\Delta(t) \,:\!=\, X_2(t) - X_1(t) = \int_0^t \left(f_2(s, X_2(s)) - f_1(s, X_1(s))\right)ds, \quad t\ge 0,\end{equation*}

and observe that $\Delta(0) = 0$ and that the function $\Delta$ is differentiable with

\begin{equation*}\Delta ^{\prime}_+ (0) = f_2(0, X(0)) - f_1(0, X(0)) \gt 0.\end{equation*}

It is clear that $\Delta(t) = \Delta ^{\prime}_+ (0) t + o(t)$ , $t\to 0+$ , whence there exists a maximal interval $(0, t^*) \subset (0,\infty)$ such that $\Delta(t) \gt 0$ for all $t\in(0,t^*)$ . It is also clear that

\begin{equation*}t^* = \sup\{t>0\,|\,\forall s \in (0,t): \Delta(s) \gt0 \}.\end{equation*}

Assume that $t^* \lt \infty$ . By the definition of $t^*$ and continuity of $\Delta$ , $\Delta(t^*) = 0$ . Hence $X_1(t^*) = X_2(t^*) = X^*$ and

\begin{equation*}\Delta ^{\prime} (t^*) = f_2(t^*, X^*) - f_1(t^*, X^*) \gt 0.\end{equation*}

As $\Delta (t) = \Delta^{\prime}(t^*) (t- t^*) + o(t-t^*)$ , $t \to t^*$ , there exists $\varepsilon \gt0$ such that $\Delta(t)\lt 0$ for all $t \in (t^* - \varepsilon, t^*)$ , which contradicts the definition of $t^*$ . Therefore $t^* = \infty$ , and for all $t>0$ ,

\begin{equation*}X_1(t) \lt X_2(t).\end{equation*}

2. One-sided sandwich SDE

In this section, we discuss the existence, uniqueness, and properties of the solution of (0.3) under Assumptions (A1)–(A4). First, we demonstrate that (A1)–(A3) ensure the existence and uniqueness of the solution to (0.3) up to the first time it hits the lower bound $\{\varphi(t),\,t\in[0,T]\}$ . We then prove that (A4) guarantees that the solution exists on the entire interval [0, T], since it always stays above $\varphi(t)$ . The latter property justifies the name one-sided sandwich in the section title. Finally, we derive additional properties of the solution, still in the form of bounds.

Remark 2.1. Throughout this paper, the pathwise approach will be used; i.e. we fix a Hölder continuous trajectory of Z in most proofs. For simplicity, we omit the $\omega$ in brackets in what follows.

2.1. Existence and uniqueness result

As mentioned before, we start with the existence and uniqueness of a local solution.

Theorem 2.1. Let Assumptions (A1)–(A3) hold. Then the SDE (0.3) has a unique local solution in the following sense: there exists a continuous process $Y = \{Y(t),\,t\in [0,T]\}$ such that

\begin{equation*} Y(t) = Y(0) + \int_0^t b(s, Y(s)) ds + Z(t), \quad \forall t\in[0,\tau_0], \end{equation*}

with

\begin{align*} \tau_0 :&= \sup\{t\in[0,T]\,|\,\forall s \in [0,t): Y(s) \gt \varphi(s)\} \\ &=\inf\{t\in[0,T]\,|\,Y(t) = \varphi(t)\} \wedge T. \end{align*}

Furthermore, if Y’ is another process satisfying Equation (0.3) on any interval $[0,t] \subset [0,\tau^{\prime}_0)$ , where

\begin{equation*} \tau^{\prime}_0 \,:\!=\, \sup\{s\in [0,T]\,|\,\forall u \in [0,s): Y^{\prime}(u) \gt \varphi(u)\}, \end{equation*}

then $\tau_0 = \tau^{\prime}_0$ and $Y(t) = Y^{\prime}(t)$ for all $t\in[0,\tau_0)$ .

Proof. For a fixed $\varepsilon\in (0, Y(0) - \varphi(0))$ , define for $(t,y) \in [0,T] \times \mathbb R$

\begin{equation*} \widetilde b_\varepsilon(t,y) \,:\!=\, \begin{cases} b(t,y), &\quad (t,y)\in\mathcal D_\varepsilon, \\[4pt] b(t, \varphi(t) + \varepsilon), &\quad (t,y) \notin \mathcal D_\varepsilon. \end{cases} \end{equation*}

Note that $\widetilde b_\varepsilon$ is continuous and globally Lipschitz with respect to the second variable, and hence the SDE

\begin{equation*} \widetilde Y_\varepsilon (t) = Y(0) + \int_0^t \widetilde b_\varepsilon\big(s,\widetilde Y_\varepsilon (s)\big) ds + Z(t), \quad t\in [0,T], \end{equation*}

has a unique solution. Define

\begin{equation*} \tau_\varepsilon \,:\!=\, \inf\{t \in [0,T]\,|\,\widetilde Y_\varepsilon (t) = \varphi(t) + \varepsilon\} \wedge T. \end{equation*}

By the definition of $\tau_\varepsilon$ , for all $t\in [0,\tau_\varepsilon)$ we have $(t, \widetilde Y_\varepsilon (t)) \in \mathcal D_\varepsilon$ . This means that for all $t\in [0,\tau_\varepsilon]$ ,

\begin{equation*} \widetilde Y_\varepsilon (t) = Y(0) + \int_0^t b(s, \widetilde Y_\varepsilon(s))ds + Z(t); \end{equation*}

i.e. $\widetilde Y_\varepsilon$ is a solution to (0.3) on $[0,\tau_\varepsilon)$ .

Conversely, let $\widetilde Y_\varepsilon ^{\prime}$ be a solution to (0.3). Define

\begin{equation*} \tau^{\prime}_\varepsilon \,:\!=\, \inf\{t \in [0,T]\,|\,\widetilde Y^{\prime}_\varepsilon (t) = \varphi(t) + \varepsilon\} \wedge T \end{equation*}

and observe that for all $t \in [0,\tau^{\prime}_\varepsilon]$ ,

\begin{equation*} \widetilde Y^{\prime}_\varepsilon (t) = Y(0) + \int_0^t \widetilde b_\varepsilon\big(s,\widetilde Y^{\prime}_\varepsilon (s)\big) ds + Z(t), \end{equation*}

which, by uniqueness of $\widetilde Y_\varepsilon$ , implies that $\tau^{\prime}_\varepsilon = \tau_\varepsilon$ and $\widetilde Y^{\prime}_\varepsilon = \widetilde Y_\varepsilon$ on $[0,\tau_\varepsilon]$ . Since the choice of $\varepsilon$ is arbitrary, we get the required result.
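The localization device of this proof translates directly into code; a minimal sketch, assuming b and phi are given callables (the names are ours):

```python
# The localization device from the proof of Theorem 2.1: freeze b at the
# level phi(t) + eps, producing a drift that is globally Lipschitz in y,
# so that the truncated SDE has a unique solution up to tau_eps.
def truncate_drift(b, phi, eps: float):
    def b_eps(t: float, y: float) -> float:
        return b(t, y) if y > phi(t) + eps else b(t, phi(t) + eps)
    return b_eps
```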

Theorem 2.1 shows that Equation (0.3) has a unique solution for as long as the solution stays above $\{\varphi(t), t\in [0,T]\}$ . However, the additional condition (A4) on the constant $\gamma$ from Assumption (A3) allows us to ensure that the corresponding process Y always stays above $\varphi$ . More precisely, we have the following result.

Theorem 2.2. Let Assumptions (A1)–(A4) hold. Then (0.3) has a unique solution $Y = \{Y(t),\,t\in[0,T]\}$ such that

\begin{equation*} Y(t) \gt \varphi(t), \quad t\in [0,T]. \end{equation*}

Proof. Let Y be the local solution to (0.3) discussed in Theorem 2.1, and assume that $\tau \,:\!=\, \inf\{t \in[0,T]\,|\, Y(t) = \varphi(t)\} \in [0,T]$ . For any $\varepsilon \lt \min\left\{y_*, Y(0) - \varphi(0)\right\}$ , where $y_*$ is from Assumption (A3), consider

\begin{equation*} \tau_\varepsilon \,:\!=\, \sup\{t\in[0,\tau]\,|\,Y(t) = \varphi(t) + \varepsilon\}. \end{equation*}

By the definitions of $\tau$ and $\tau_\varepsilon$ ,

\begin{equation*} \varphi(\tau) - \varphi(\tau_\varepsilon) -\varepsilon = Y(\tau) - Y(\tau_\varepsilon) = \int^{\tau}_{\tau_\varepsilon} b(s, Y(s)) ds + Z(\tau) - Z(\tau_\varepsilon). \end{equation*}

Moreover, for all $t\in[\tau_\varepsilon, \tau)$ , we have $(t,Y(t)) \in \mathcal D_{0} \setminus \mathcal D_{\varepsilon}$ , so, using the fact that $\varepsilon \lt y_*$ and Assumption (A3), we obtain that for $t\in[\tau_\varepsilon, \tau)$ ,

(2.1) \begin{equation} b(t, Y(t)) \ge \frac{c}{(Y(t) - \varphi(t))^\gamma} \ge \frac{c}{\varepsilon^\gamma}. \end{equation}

Finally, by the Hölder continuity of $\varphi$ and Z,

\begin{equation*} -(Z({\tau}) - Z({\tau_\varepsilon})) + (\varphi(\tau) - \varphi(\tau_\varepsilon)) \le (\Lambda + K) (\tau - \tau_\varepsilon)^{\lambda} =: \bar\Lambda (\tau - \tau_\varepsilon)^{\lambda}. \end{equation*}

Therefore, taking into account all of the above, we get

\begin{equation*} \bar\Lambda (\tau - \tau_\varepsilon)^{\lambda } \ge \int^{\tau}_{\tau_\varepsilon} \frac{c}{\varepsilon^\gamma} ds + \varepsilon = \frac{c(\tau - \tau_\varepsilon)}{\varepsilon^\gamma} + \varepsilon, \end{equation*}

i.e.

(2.2) \begin{equation} \frac{c(\tau - \tau_\varepsilon)}{\varepsilon^\gamma} - \bar\Lambda (\tau - \tau_\varepsilon)^{\lambda} + \varepsilon \le 0. \end{equation}

Now consider the function $F_\varepsilon$ : $\mathbb R^+ \to \mathbb R$ such that

\begin{equation*} F_\varepsilon (t) = \frac{c}{\varepsilon^\gamma} t - \bar\Lambda t^{\lambda} + \varepsilon. \end{equation*}

According to (2.2), $F_\varepsilon(\tau - \tau_\varepsilon) \le 0$ for any $0 \lt \varepsilon \lt \min\left\{y_*, Y(0) - \varphi(0)\right\}$ . It is easy to verify that $F_\varepsilon$ attains its minimum at the point

\begin{equation*} t^* = \left(\frac{ \lambda \bar\Lambda}{c}\right)^{\frac{1}{1-\lambda}}\varepsilon^{\frac{\gamma}{1-\lambda}} \end{equation*}

and

\begin{equation*} F_\varepsilon (t^*) =\varepsilon - D \bar \Lambda^{\frac{1}{1 - \lambda}} \varepsilon^{\frac{\gamma\lambda}{1-\lambda}}, \end{equation*}

where

\begin{equation*}D \,:\!=\,\left(\frac{1}{c}\right)^{\frac{\lambda}{1 - \lambda}} \left( \lambda^{\frac{\lambda}{1 - \lambda}} - \lambda^{\frac{1}{1 - \lambda}} \right) \gt 0.\end{equation*}

Note that, by (A4), we have $\frac{\gamma\lambda}{1-\lambda}>1$ . Hence there exists $\varepsilon^* \gt 0$ such that $F_\varepsilon (t^*)>0$ for all $\varepsilon \lt \varepsilon^*$ , which contradicts (2.2). Therefore, $\tau$ cannot belong to [0, T], and Y stays above $\varphi$ .
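As a quick numerical sanity check of the minimization used in the proof, one can compare a grid minimum of $F_\varepsilon$ with the closed-form value $\varepsilon - D \bar\Lambda^{\frac{1}{1-\lambda}}\varepsilon^{\frac{\gamma\lambda}{1-\lambda}}$ ; a sketch with parameter values of our choosing:

```python
import numpy as np

# Sanity check of the minimization of F_eps (parameters ours, with
# gam > (1 - lam) / lam as in (A4)): the grid minimum should match
# eps - D * Lam**(1/(1-lam)) * eps**(gam*lam/(1-lam)).
c, lam, gam, Lam, eps = 1.0, 0.6, 2.0, 2.0, 1e-2
t = np.logspace(-14, 0, 1_000_000)
F = c / eps**gam * t - Lam * t**lam + eps
D = (1 / c)**(lam / (1 - lam)) * (lam**(lam / (1 - lam)) - lam**(1 / (1 - lam)))
print(F.min(), eps - D * Lam**(1 / (1 - lam)) * eps**(gam * lam / (1 - lam)))
```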

Remark 2.2.

  1. The result above can be generalized to the case of an infinite time horizon in a straightforward manner. For this, it is sufficient to assume that $\varphi$ is locally $\lambda$-Hölder continuous and that Z has locally Hölder continuous paths, i.e. for each $T>0$ there exist a constant $K_T \gt0$ and a random variable $\Lambda = \Lambda_T(\omega) \gt 0$ such that

    \begin{equation*}|\varphi(t) - \varphi(s)| \le K_T|t-s|^\lambda, \quad |Z(t) - Z(s)| \le \Lambda_T |t-s|^\lambda, \quad t,s\in[0,T];\end{equation*}
    and that Assumptions (A1)–(A4) hold on [0, T] for any $T>0$ (in this case, the constants $c_\varepsilon$ , $y_*$ , and c from the corresponding assumptions are allowed to depend on T).
  2. Since all the proofs above are based on pathwise calculus, it is possible to extend the results to stochastic $\varphi$ and Y(0) (provided that $Y(0) \gt \varphi(0)$ ).

2.2. Upper and lower bounds for the solution

As we have seen in the previous subsection, each random variable Y(t), $t\in[0,T]$ , is a priori lower-sandwiched by the deterministic value $\varphi(t)$ (under Assumptions (A1)–(A4)). In this subsection, we derive additional bounds from above and below for Y(t) in terms of the random variable $\Lambda$ characterizing the noise from (1.1). Furthermore, such bounds allow us to establish the existence of moments of Y of all orders, including the negative ones.

Theorem 2.3. Let Assumptions (A1)–(A4) hold, and let $\Lambda$ be the random variable such that

\begin{equation*} |Z(t) - Z(s)| \le \Lambda |t-s|^\lambda, \quad t,s\in[0,T]. \end{equation*}

Then, for any $r>0$ , the following hold:

  1. There exist positive deterministic constants $M_1(r, T)$ and $M_2(r, T)$ such that

    \begin{equation*} |Y(t)|^r \le M_1(r, T) + M_2(r, T) \Lambda^r, \quad t\in [0,T]. \end{equation*}
  2. Additionally, if $\Lambda$ can be chosen in such a way that $\mathbb E \Lambda^r \lt \infty$ , then

    \begin{equation*} \mathbb E \left[ \sup_{t\in [0,T]} |Y(t)|^r \right] \lt \infty. \end{equation*}

Proof. It is enough to prove item 1 for $r=1$ , as the rest of the theorem then follows immediately. Define $\eta \,:\!=\, \frac{Y(0) - \varphi(0)}{2}$ and let

\begin{equation*} \tau_1 \,:\!=\, \sup\left\{ s\in [0,T]\,|\, \forall u \in [0,s]: Y(u) \ge \varphi(u) + \eta\right\}. \end{equation*}

Our initial goal is to prove an inequality of the form

(2.3) \begin{equation} \begin{aligned} \left|Y(t)\right| \le |Y(0)| + T A_T + A_T \int_0^t |Y(s)| ds + \Lambda T^\lambda + \max_{u\in[0,T]}|\varphi(u)| + \eta, \end{aligned} \end{equation}

where

\begin{equation*} A_T \,:\!=\, c_\eta\left(1+ \max_{u\in[0,T]} |\varphi(u)| + \eta \right) + \max_{u \in[0,T]}\left|b\left(u, \varphi(u) + \eta\right)\right| \end{equation*}

and $c_\eta$ is from Assumption (A2). We establish (2.3) by considering the cases $t\le \tau_1$ and $t \gt \tau_1$ separately.

Case $\boldsymbol{t\le \tau_1}$ . For any $s\in[0,t]$ , we have $(s, Y(s)) \in \mathcal D_{\eta}$ , and therefore, by Assumption (A2), for all $s\in [0,t]$ ,

\begin{equation*} \left|b(s, Y(s)) - b\left(s, \varphi(s) + \eta\right)\right| \le c_\eta \left|Y(s) - \varphi(s) - \eta\right|. \end{equation*}

Hence

\begin{align*} |b(s,Y(s))| &\le c_\eta |Y(s)| + c_\eta\left(\max_{u\in[0,T]} |\varphi(u)| + \eta \right) + \max_{u \in[0,T]}\left|b\left(u, \varphi(u) + \eta\right)\right| \\ &\le A_T (1 + |Y(s)|). \end{align*}

Therefore, taking into account that $|Z(t)| \le \Lambda T^\lambda$ , we have

\begin{align*} \left|Y(t)\right| & = \left|Y(0) + \int_0^t b(s, Y(s)) ds + Z(t)\right| \\ &\le |Y(0)| + \int_0^t |b(s, Y(s))|ds + |Z(t)| \\ &\le |Y(0)| + TA_T + A_T \int_0^t |Y(s)| ds + \Lambda T^\lambda \\ &\le |Y(0)| + T A_T + A_T \int_0^t |Y(s)| ds + \Lambda T^\lambda + \max_{u\in[0,T]}|\varphi(u)| + \eta. \end{align*}

Case $\boldsymbol{t \gt \tau_1}$ . From the definition of $\tau_1$ and continuity of Y, $Y(\tau_1) = \varphi(\tau_1) + \eta$ . Furthermore, since $Y(s) \gt \varphi(s)$ for all $s \in [0,T]$ , we can consider

\begin{equation*} \tau_2(t) \,:\!=\, \sup\left\{s \in (\tau_1, t]\,|\,Y(s) \lt \varphi(s) + \eta\right\}. \end{equation*}

Note that $\left|Y(\tau_2(t))\right| \le \max_{u\in[0,T]}|\varphi(u)| + \eta$ , so

(2.4) \begin{equation} \begin{aligned} \left| Y(t) \right| &\le \left| Y(t) - Y(\tau_2(t)) \right| + \left|Y(\tau_2(t))\right| \\ & \le \left| Y(t) - Y({\tau_2(t)}) \right| + \max_{u\in[0,T]}|\varphi(u)| + \eta. \end{aligned} \end{equation}

If $\tau_2(t) \lt t$ , we have that $(s,Y(s)) \in \mathcal D_{\eta}$ for all $s \in [\tau_2(t), t]$ ; therefore, just as in the case $t\le\tau_1$ ,

\begin{equation*} |b(s, Y(s))| \le A_T(1 + |Y(s)|), \end{equation*}

so

\begin{equation*} \begin{aligned} \left| Y(t) - Y({\tau_2(t)}) \right| &= \left|\int_{\tau_2(t)}^t b(s,Y(s))ds + (Z(t) - Z({\tau_2(t)}))\right| \\ &\le \int_{\tau_2(t)}^t |b(s,Y(s))|ds + |Z(t) - Z({\tau_2(t)})| \\ &\le T A_T + A_T \int_0^t |Y(s)| ds + \Lambda T^\lambda, \end{aligned} \end{equation*}

whence, taking into account (2.4), we have

(2.5) \begin{align} \left| Y(t) \right| &\le T A_T + A_T \int_0^t |Y(s)| ds + \Lambda T^\lambda + \max_{u\in[0,T]}|\varphi(u)| + \eta \nonumber\\ &\le |Y(0)| + T A_T + A_T \int_0^t |Y(s)| ds + \Lambda T^\lambda + \max_{u\in[0,T]}|\varphi(u)| + \eta. \end{align}

Now that we have seen that (2.3) holds for any $t\in [0,T]$ , we apply Gronwall’s inequality to get

\begin{equation*} \begin{aligned} |Y(t)| &\le \left(|Y(0)| + T A_T + \Lambda T^\lambda + \max_{u\in[0,T]}|\varphi(u)| + \eta\right)e^{TA_T} \\ &=: M_1(1, T) + M_2(1, T) \Lambda, \end{aligned} \end{equation*}

where

\begin{align*} M_1(1, T) &\,:\!=\, \left(|Y(0)| + T A_T + \max_{u\in[0,T]}|\varphi(u)| + \frac{Y(0) - \varphi(0)}{2}\right)e^{TA_T}, \\ M_2(1, T) &\,:\!=\, T^\lambda e^{TA_T}. \end{align*}

Theorem 2.4. Let Assumptions (A1)–(A4) hold, and let $\Lambda$ be the random variable such that

\begin{equation*} |Z(t) - Z(s)| \le \Lambda |t-s|^\lambda, \quad t,s\in[0,T]. \end{equation*}

Then, for any $r>0$ , the following hold:

  1. There exists a constant $M_3(r,T) \gt0$ , depending only on r, T, $\lambda$ , $\gamma$ , and the constant c from Assumption (A3), such that for all $t\in[0,T]$ ,

    (2.6) \begin{equation} (Y(t) - \varphi(t))^{-r} \le M_3(r,T) { \widetilde \Lambda ^{\frac{r }{\gamma \lambda + \lambda -1}} }, \end{equation}
    where
    \begin{equation*} \widetilde\Lambda \,:\!=\, \max\left\{ \Lambda, K, \left(2 \beta\right)^{\lambda - 1} \left(\frac{(Y(0) - \varphi(0)) \wedge y_*}{2}\right)^{1 - \lambda - \gamma\lambda} \right\} \end{equation*}
    with
    \begin{equation*} \beta \,:\!=\, \frac{ \lambda^{\frac{\lambda}{1 - \lambda}} - \lambda^{\frac{1}{1 - \lambda}} }{ c ^{\frac{\lambda}{1 - \lambda}}} \gt 0. \end{equation*}
  2. Additionally, if $\Lambda$ can be chosen in such a way that $\mathbb E \Lambda^{\frac{r}{\gamma\lambda + \lambda - 1}} \lt \infty$ , then

    \begin{equation*} \mathbb E \left[ \sup_{t\in [0,T]} (Y(t) - \varphi(t))^{-r} \right] \lt \infty. \end{equation*}

Proof. Just as in Theorem 2.3, it is enough to prove that there exists a constant $L>0$ that depends only on T, $\lambda$ , $\gamma$ , and the constant c from Assumption (A3) such that for all $t\in[0,T]$ ,

\begin{equation*} Y(t) - \varphi(t) \ge \frac{L}{\widetilde \Lambda^{\frac{1}{\gamma\lambda + \lambda -1}}}; \end{equation*}

then the rest of the theorem will follow.

Put

\begin{equation*} \varepsilon = \varepsilon(\omega) \,:\!=\, \frac{1}{(2 \beta )^{\frac{1 - \lambda}{\gamma \lambda + \lambda -1}} \widetilde \Lambda ^{\frac{1 }{\gamma \lambda + \lambda -1}}}. \end{equation*}

Note that $\widetilde \Lambda$ is chosen in such a way that

\begin{equation*} |\varphi(t) - \varphi(s)| + |Z(t) - Z(s)| \le \widetilde \Lambda |t-s|^{\lambda}, \quad t, s \in[0,T], \end{equation*}

and furthermore, $\varepsilon \lt Y(0) - \varphi(0)$ and $\varepsilon \lt y_*$ . Fix an arbitrary $t\in[0,T]$ . If $Y(t) -\varphi(t) \ge \varepsilon$ , then, by the definition of $\varepsilon$ , an estimate of the type (2.6) holds automatically. If $Y(t) - \varphi(t) \lt \varepsilon$ , then, since $Y(0) - \varphi(0) \gt \varepsilon$ , one can define

\begin{equation*} \tau(t) \,:\!=\, \sup\{s\in[0,t]\,|\,Y(s) - \varphi(s) = \varepsilon\}. \end{equation*}

Since $Y(s) - \varphi(s) \le \varepsilon \lt y_*$ for all $s\in[\tau(t), t]$ , one can apply Assumption (A3) and write

\begin{align*} Y(t) - \varphi(t) &= Y({\tau(t)}) - \varphi(t) + \int_{\tau(t)}^t b(s, Y(s)) ds + Z(t) - Z({\tau(t)}) \\ &= \varepsilon + \varphi(\tau(t)) - \varphi(t) + \int_{\tau(t)}^t b(s, Y(s)) ds + Z(t) - Z({\tau(t)}) \\ &\ge \varepsilon + \frac{ c}{\varepsilon^{\gamma}}(t - \tau(t)) - \widetilde \Lambda (t - \tau(t))^{\lambda}. \end{align*}

Consider the function $F_{\varepsilon} : \mathbb R_+ \to \mathbb R$ such that

\begin{equation*} F_{\varepsilon} (x) = \varepsilon + \frac{ c}{\varepsilon^{\gamma}} x - \widetilde \Lambda x^{\lambda}. \end{equation*}

It is straightforward to verify that $F_\varepsilon$ attains its minimum at

\begin{equation*} x_* \,:\!=\, \left(\frac{\lambda}{ c}\right)^{\frac{1}{1 - \lambda}} \varepsilon^{\frac{\gamma}{1 - \lambda}} \widetilde \Lambda^{\frac{1}{1 - \lambda}}, \end{equation*}

and, taking into account the explicit form of $\varepsilon$ ,

\begin{align*} F_\varepsilon(x_*) &= \varepsilon+ \frac{\lambda^{\frac{1}{1 - \lambda}}}{ c^{\frac{\lambda}{1 - \lambda}}} \varepsilon^{\frac{\gamma \lambda}{1 - \lambda}} \widetilde \Lambda^{\frac{1}{1 - \lambda}} - \frac{\lambda^{\frac{\lambda}{1 - \lambda}}}{c^{\frac{\lambda}{1 - \lambda}}} \varepsilon^{\frac{\gamma \lambda}{1 - \lambda}} \widetilde \Lambda^{\frac{1}{1 - \lambda}} \\ &= \varepsilon - \beta \varepsilon^{\frac{\gamma \lambda}{1 - \lambda}} \widetilde \Lambda^{\frac{1}{1 - \lambda}} \\ &= \frac{1}{2^{\frac{\gamma\lambda }{\gamma \lambda + \lambda -1}} \beta^{\frac{1 - \lambda}{\gamma \lambda + \lambda -1}} \widetilde \Lambda ^{\frac{1 }{\gamma \lambda + \lambda -1}}} \\ & = \frac{\varepsilon}{2}; \end{align*}

i.e., if $Y(t) \lt \varphi(t) + \varepsilon$ , we have that

\begin{equation*} Y(t) - \varphi(t) \ge F_{\varepsilon} ( t - \tau(t) ) \ge F_\varepsilon(x_*) = \frac{\varepsilon}{2}, \end{equation*}

and thus for any $t\in[0,T]$

\begin{equation*} Y(t) \ge \varphi(t) + \frac{\varepsilon}{2} = \varphi(t) + \frac{1}{2^{\frac{\gamma\lambda }{\gamma \lambda + \lambda -1}} \beta^{\frac{1 - \lambda}{\gamma \lambda + \lambda -1}} \widetilde \Lambda ^{\frac{1 }{\gamma \lambda + \lambda -1}}} =: \frac{L}{\widetilde \Lambda^{\frac{1 }{\gamma \lambda + \lambda -1}}}, \end{equation*}

where

\begin{equation*}L \,:\!=\, \frac{1}{2^{\frac{\gamma\lambda }{\gamma \lambda + \lambda -1}} \beta^{\frac{1 - \lambda}{\gamma \lambda + \lambda -1}} }.\end{equation*}

This completes the proof.

Remark 2.3. As one can see, the existence of moments for Y comes down to the existence of moments for $\Lambda$ . Note that the noises given in Examples 1.1 and 1.3 fit into this framework.

Remark 2.4. The constant $M_3(r,T)$ from Theorem 2.4 can be explicitly written as

\begin{equation*} M_3(r,T) = 2^{\frac{r\gamma\lambda }{\gamma \lambda + \lambda -1}} \beta^{\frac{r(1 - \lambda)}{\gamma \lambda + \lambda -1}}. \end{equation*}

3. Connection to Skorokhod reflections

We have seen that, under Assumptions (Z1)–(Z2) and (A1)–(A4), the solution Y to (0.3) stays above $\varphi$ . Note that since the inequality in (0.4) is strict, Y is not a reflected process in the sense of Skorokhod (see e.g. the seminal paper [33] for more details). However, it is still possible to establish a connection between reflected processes and a certain class of sandwiched processes.

For any $\varepsilon \gt 0$ , consider an SDE of the form

(3.1) \begin{equation} Y_\varepsilon(t) = Y(0) + \int_0^t \frac{\varepsilon}{(Y_\varepsilon(s) - \varphi(s))^\gamma}ds - \int_0^t \alpha(s, Y_\varepsilon(s))ds + Z(t),\end{equation}

where $Z = \{Z(t),\,t\in [0,T]\}$ is a stochastic process satisfying Assumptions (Z1)–(Z2); $\gamma \gt \frac{1-\lambda}{\lambda}$ , where $\lambda$ is the order of Hölder continuity of $\varphi$ and the paths of Z; $Y(0) \gt \varphi(0)$ ; and $\alpha$ : $[0,T]\times \mathbb R \to \mathbb R$ is a continuous function such that

(3.2) \begin{equation} |\alpha(t,y_1) - \alpha(t,y_2)| \le c|y_1 - y_2|, \quad y_1,y_2\in\mathbb R,\end{equation}

for some constant $c>0$ . It is clear that the drift $b_\varepsilon(t,y) \,:\!=\, \frac{\varepsilon}{(y - \varphi(t))^\gamma} - \alpha(t, y)$ satisfies Assumptions (A1)–(A4), and hence there exists a unique solution $Y_\varepsilon$ to (3.1) and $Y_\varepsilon(t) \gt \varphi(t)$ , $t\in[0,T]$ .

Next, consider the Skorokhod reflection problem of the form

(3.3) \begin{equation} Y_0(t) = Y(0) - \int_0^t \alpha(s,Y_0(s))ds + Z(t) + L_0(t),\end{equation}

where the process $L_0 = \{L_0(t),\,t\in[0,T]\}$ is called the $\varphi$ -reflection function (or $\varphi$ -regulator) for $Y_0$ and is defined as follows:

  (i) $L_0(0) = 0$ a.s.,

  (ii) $L_0$ is non-decreasing a.s.,

  (iii) $L_0$ is continuous a.s.,

  (iv) the points of growth for $L_0$ occur a.s. only at the points where $Y_0(t) - \varphi(t) = 0$ , and

  (v) $Y_0(t) \ge \varphi(t)$ , $t\in[0,T]$ , a.s.
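In the uncoupled case $\alpha \equiv 0$ (a simplification of ours), the pair $(Y_0, L_0)$ satisfying (i)–(v) is given explicitly by the classical Skorokhod map applied to $X = Y(0) + Z$ ; a discrete sketch:

```python
import numpy as np

# Discrete Skorokhod map for a lower boundary phi in the uncoupled case
# alpha = 0: with X = Y(0) + Z sampled on a grid and X[0] >= phi[0],
# L0(t) = max_{s <= t} (phi(s) - X(s))^+ is the minimal regulator and
# Y0 = X + L0 satisfies properties (i)-(v).
def skorokhod_reflect(X: np.ndarray, phi: np.ndarray):
    L0 = np.maximum.accumulate(np.maximum(phi - X, 0.0))
    return X + L0, L0
```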

Note that the solution to the Skorokhod reflection problem (3.3) is not just a stochastic process $Y_0$ but the pair $(Y_0, L_0)$ with $L_0$ being a $\varphi$ -reflection function for $Y_0$ . Regarding the problem (3.3), we have the following result.

Theorem 3.1. If the Skorokhod reflection problem (3.3) has a solution $(Y_0, L_0)$ , then it is unique.

Proof. First note that, without loss of generality, we can put $\varphi \equiv 0$ . Indeed, let $(Y_0, L_0)$ be a solution to the Skorokhod reflection problem (3.3) with the lower boundary $\varphi$ . Then the process $Y^{\varphi}_0 \,:\!=\, Y_0 - \varphi$ satisfies

(3.4) \begin{equation} Y^{\varphi}_0(t) = Y^\varphi(0) - \int_0^t \alpha^{\varphi}(s, Y^{\varphi}_0(s))ds + Z^\varphi(t) + L_0(t), \end{equation}

where $Y^\varphi(0) \,:\!=\, Y(0) - \varphi(0)$ , $\alpha^\varphi(t, y) \,:\!=\, \alpha(t, y + \varphi(t))$ , $Z^\varphi(t) \,:\!=\, Z(t) - (\varphi(t)-\varphi(0))$ . It is easy to check that $L_0$ is a 0-reflection function for $ Y^{\varphi}_0$ , i.e. $(Y^{\varphi}_0, L_0)$ is a solution to the Skorokhod reflection problem (3.4) with the lower boundary 0. Similar reasoning allows us to establish that the opposite is also true: if $(Y^{\varphi}_0, L_0)$ is a solution to (3.4), then $(Y_0 = Y^{\varphi}_0 + \varphi, L_0)$ is a solution to (3.3), and hence (3.3) has a solution if and only if (3.4) does; the uniqueness of the solution of one Skorokhod problem implies the uniqueness of the solution of the other. Therefore, in this proof we assume that $\varphi\equiv 0$ .

The rest of the proof essentially follows [33, 34]. The only difference is that we have a general Hölder continuous noise Z instead of a classical Brownian motion, but the additive form of Z in (3.3) makes the arguments shorter.

Let $(Y_0, L_0)$ and $(Y^{\prime}_0, L^{\prime}_0)$ be two solutions to (3.3). Define

\begin{align*} \Delta^+(t) &\,:\!=\, \begin{cases} Y_0(t) - Y^{\prime}_0(t) &\quad \text{if }Y_0(t) - Y^{\prime}_0(t) \gt 0, \\ 0 &\quad \text{otherwise}, \end{cases} \\ \Delta^-(t) &\,:\!=\, \begin{cases} Y^{\prime}_0(t) - Y_0(t) &\quad \text{if }Y^{\prime}_0(t) - Y_0(t) \gt 0, \\ 0 &\quad \text{otherwise}. \end{cases} \end{align*}

By the definition of a solution to the Skorokhod reflection problem, both $\Delta^+$ and $\Delta^-$ are continuous with probability 1. Let

\begin{equation*} \tau(t) \,:\!=\, \sup\{s\in[0,t]\,|\,\Delta^+(s) = 0\}. \end{equation*}

If $\tau(t) \lt t$ , we have that for all $s\in(\tau(t), t]$

\begin{equation*} \Delta^+(s) \gt 0 \end{equation*}

and therefore $Y_0(s) \gt Y^{\prime}_0(s) \ge 0$ . This means that $Y_0$ does not hit zero on $(\tau(t), t]$ , so $L_0(t) = L_0(\tau(t))$ by definition of the reflection function. Moreover, since $Y_0 - Y^{\prime}_0$ is continuous, $Y_0(\tau(t)) - Y^{\prime}_0(\tau(t)) = 0$ , and hence

\begin{align*} Y_0(t) - Y^{\prime}_0 (t) &= -\int_{\tau(t)}^t \left( \alpha(s, Y_0(s)) - \alpha(s, Y^{\prime}_0 (s)) \right)ds + L^{\prime}_0(\tau(t)) - L^{\prime}_0 (t). \end{align*}

However, $Y_0(t) - Y^{\prime}_0 (t) \gt 0$ and $L^{\prime}_0(\tau(t)) - L^{\prime}_0 (t) \le 0$ ; therefore

\begin{align*} \Delta^+(t) &\le \left| \int_{\tau(t)}^t \left( \alpha(s, Y_0(s)) - \alpha(s, Y^{\prime}_0 (s)) \right)ds \right| \\ &\le \int_{0}^t \left| \alpha(s, Y_0(s)) - \alpha(s, Y^{\prime}_0 (s)) \right|ds \\ &\le c \int_{0}^t \left| Y_0(s) - Y^{\prime}_0 (s) \right| ds, \end{align*}

which also holds true if $\tau(t) = t$ (i.e. if $\Delta^+(t) = 0$ ). Similarly,

\begin{align*} \Delta^-(t) & \le c \int_{0}^t \left| Y_0(s) - Y^{\prime}_0 (s) \right| ds, \end{align*}

and hence, for all $t\in[0,T]$ ,

(3.5) \begin{equation} |Y_0(t) - Y^{\prime}_0(t)| \le c \int_{0}^t \left| Y_0(s) - Y^{\prime}_0 (s) \right| ds. \end{equation}

The equality of $Y_0(t)$ and $Y^{\prime}_0(t)$ with probability 1 now follows immediately from Gronwall’s lemma and (3.5), which in turn immediately implies that $L_0(t) = L^{\prime}_0(t)$ a.s.

Note that Theorem 3.1 does not clarify whether the solution to (3.3) exists. Moreover, the existence arguments from [33, 34] cannot be straightforwardly translated to the problem (3.3), since e.g. [34, Lemma 4] exploits the independence of increments of the driver, which is not available to us because of the generality of Z. However, the next result not only proves the existence of the solution to (3.3) but also establishes the connection between (3.1) and (3.3).

Theorem 3.2. Let $Y_\varepsilon$ be the solution to (3.1). Then, with probability 1,

(3.6) \begin{equation} \sup_{t\in[0,T]}|Y_\varepsilon(t) - Y_0(t)| \to 0, \quad \sup_{t\in[0,T]}\left|\int_0^t \frac{\varepsilon}{(Y_\varepsilon(s) - \varphi(s))^\gamma}ds - L_0(t)\right| \to 0 \quad \text{as }\varepsilon \downarrow 0, \end{equation}

where $(Y_0, L_0)$ is the solution to the Skorokhod reflection problem (3.3).

Proof. Fix an arbitrary path $\omega \in \Omega$ such that $Z(\omega, t)$ is $\lambda$ -Hölder continuous with respect to t (in what follows, the argument $\omega$ will be omitted). For any fixed t, $Y_\varepsilon(t)$ is non-increasing with respect to $\varepsilon$ by Lemma 1.2, and hence the limit

\begin{equation*} Y_0(t) \,:\!=\, \lim_{\varepsilon \downarrow 0} Y_\varepsilon(t) \end{equation*}

is well defined. Since $\alpha$ is continuous,

\begin{equation*} \alpha(s, Y_\varepsilon(s)) \to \alpha(s,Y_0(s)), \quad \varepsilon \downarrow 0. \end{equation*}

Moreover, (3.2) implies that there exists a constant $C>0$ such that

\begin{equation*} |\alpha(t,y)| \le C(1 + |y|), \quad t\in [0,T],\,y\in\mathbb R; \end{equation*}

hence, by Lemma 1.2 and Theorem 2.3, for any $\varepsilon \in (0,1]$ and $s\in[0,T]$ ,

\begin{equation*} |\alpha(s,Y_{\varepsilon}(s))| \le C(1+ |Y_{\varepsilon}(s)|) \le C\Big(1+ |Y_{1}(s)| + \max_{u\in[0,T]}|\varphi(u)|\Big) \le C\Big(1 + M_1(1,T) + M_2(1,T)\Lambda + \max_{u\in[0,T]}|\varphi(u)|\Big), \end{equation*}

Therefore, by the dominated convergence theorem, for any $t\in[0,T]$

\begin{equation*} \int_0^t \alpha(s, Y_\varepsilon(s))ds \to \int_0^t \alpha(s, Y_0(s))ds, \quad \varepsilon \downarrow 0. \end{equation*}

In particular, this means that the left-hand side of

\begin{equation*} Y_\varepsilon(t) - Y(0) + \int_0^t \alpha(s, Y_\varepsilon(s))ds - Z(t) = \int_0^t \frac{\varepsilon}{(Y_\varepsilon(s) - \varphi(s))^\gamma}ds \end{equation*}

converges for any $t\in[0,T]$ , and hence there exists the limit

\begin{equation*} L_0(t) \,:\!=\, \lim_{\varepsilon \downarrow 0} \int_0^t \frac{\varepsilon}{(Y_\varepsilon(s) - \varphi(s))^\gamma}ds. \end{equation*}

It remains to prove that $L_0$ is the $\varphi$ -reflection function for $Y_0$ . For the reader’s convenience, the rest of the proof will be split into four steps.

Step 1. It is easy to see by definition that $L_0(0) = 0$ , $L_0(\cdot)$ is non-decreasing, and $Y_0(t) \ge \varphi(t)$ , $t\in[0,T]$ .

Step 2. Let us prove the continuity of $L_0$ on (0, T). Take $t\in(0,T)$ and assume that $L_0(t{+}) - L_0(t{-}) = \ell \gt 0$ (one-sided limits of $L_0$ —and hence of $Y_0$ —exist by monotonicity of $L_0$ ). Since

\begin{equation*} Y_0(t) = Y(0) - \int_0^t \alpha(s,Y_0(s))ds + Z(t) + L_0(t), \end{equation*}

this implies that $Y_0(t{+}) - Y_0(t{-}) = \ell$ . Moreover, since $L_0$ is non-decreasing, $L_0(t{-}) \le L_0(t) \le L_0(t{+})$ , which in turn implies that $Y_0(t{-}) \le Y_0(t) \le Y_0(t{+})$ .

Consider now the only two possible cases.

Case 1: $Y_0(t{-}) - \varphi(t{-}) = Y_0(t{-}) - \varphi(t) = y \gt 0$ . Since the left-sided limit $Y_0(t{-})$ exists and $\varphi$ is continuous, there exists $\delta \gt 0$ such that for all $s \in [t-\delta, t]$ , $Y_0(s) - \varphi(s) \gt \frac{y}{2} \gt 0$ . Moreover, since $Y_0$ is assumed to have a positive jump at t and $Y_0(t{+})$ exists, one can choose $\delta \gt 0$ such that $Y_0(s) - \varphi(s) \gt \frac{y}{2} \gt 0$ for all $s \in [t-\delta, t+\delta]$ . Thus, for any $t_1, t_2 \in [t-\delta, t+\delta]$ ,

\begin{equation*} L_0(t_2) - L_0(t_1) = \lim_{\varepsilon \downarrow 0} \int_{t_1}^{t_2} \frac{\varepsilon}{(Y_\varepsilon(s) - \varphi(s))^\gamma}ds \le \lim_{\varepsilon \downarrow 0} \int_{t_1}^{t_2} \frac{\varepsilon}{(Y_0(s) - \varphi(s))^\gamma}ds = 0, \end{equation*}

and hence in this case $L_0(t{-}) = L_0(t{+}) = L_0(t)$ , which contradicts the assumption $L_0(t{+}) - L_0(t{-}) = \ell \gt 0$ .

Case 2: $Y_0(t{-}) - \varphi(t) = 0$ , $Y_0(t{+}) - \varphi(t) = \ell \gt 0$ . Choose $\varepsilon_1$ , $\delta_1 \gt 0$ such that $\varepsilon_1 \lt 1$ , $t+\delta_1 \lt T$ , and

(3.7) \begin{equation} \varepsilon_1 + 2^{\lambda}(K+\Lambda)\delta_1^{\lambda} + 2 \delta_1 + 2C\left(1+ M_1(1,T) + M_2(1,T)\Lambda + 2\max_{s\in[0,T]}|\varphi(s)|\right)\delta_1 \lt \frac{\ell}{2}, \end{equation}

where K is such that $|\varphi(s_1) - \varphi(s_2)| \le K|s_1 - s_2|^\lambda$ , $s_1,s_2 \in [0,T]$ , $\Lambda$ is from (Z2), C is such that $|\alpha(s,y)| \le C(1+|y|)$ , and $M_1(1,T)$ , $M_2(1,T)$ are such that

\begin{equation*} \sup_{s\in[0,T]}|Y_1(s)| \le M_1(1,T) + M_2(1,T)\Lambda. \end{equation*}

Next, note that there exists $\delta_2 \lt \delta_1$ such that $Y_0(t-\delta_2) - \varphi(t-\delta_2) \lt \varepsilon_1$ and $Y_0(t+\delta_2) - \varphi(t+\delta_2) \gt \frac{\ell}{2}$ . Moreover, there exists $\varepsilon_2 \lt \varepsilon_1^\gamma \wedge \varepsilon_1$ such that $Y_{\varepsilon_2}(t-\delta_2) - \varphi(t-\delta_2) \lt \varepsilon_1$ , and since

(3.8) \begin{equation} Y_{\varepsilon_2}(t+\delta_2) - \varphi(t+\delta_2) \ge Y_0(t+\delta_2) - \varphi(t+\delta_2) \gt \frac{\ell}{2}, \end{equation}

one can define

\begin{equation*} \tau \,:\!=\, \sup\{s\in(t-\delta_2, t+\delta_2)\,|\,Y_{\varepsilon_2}(s) - \varphi(s) = \varepsilon_1\}. \end{equation*}

By continuity, $Y_{\varepsilon_2}(\tau) - \varphi(\tau) = \varepsilon_1$ , and by the definition of $\tau$ , $Y_{\varepsilon_2}(s) - \varphi(s) \ge \varepsilon_1$ for all $s\in[\tau,t+\delta_2)$ . Hence

\begin{align*} Y_{\varepsilon_2}(t+\delta_2) &= Y_{\varepsilon_2}(\tau) + \int_{\tau}^{t+\delta_2} \frac{\varepsilon_2}{(Y_{\varepsilon_2}(s) - \varphi(s))^\gamma}ds \\ &\qquad - \int_{\tau}^{t+\delta_2}\alpha(s, Y_{\varepsilon_2}(s))ds + Z(t+\delta_2) - Z(\tau) \\ & = \varphi(t+\delta_2) + \left( Y_{\varepsilon_2}(\tau) - \varphi(\tau) \right) + \left(\varphi(\tau) - \varphi(t+\delta_2)\right) \\ &\qquad + \int_{\tau}^{t+\delta_2} \frac{\varepsilon_2}{(Y_{\varepsilon_2}(s) - \varphi(s))^\gamma}ds \\ &\qquad - \int_{\tau}^{t+\delta_2}\alpha(s, Y_{\varepsilon_2}(s))ds + Z(t+\delta_2) - Z(\tau) \\ &\le \varphi(t+\delta_2) + \varepsilon_1 + K(t+\delta_2 - \tau)^\lambda + \frac{\varepsilon_2}{\varepsilon_1^\gamma}(t+\delta_2 - \tau) \\ &\qquad + C\left( 1+ \sup_{s\in[0,T]}|Y_{\varepsilon_2}(s)| \right) (t+\delta_2 - \tau) + \Lambda (t+\delta_2 - \tau)^\lambda. \end{align*}

Note that

\begin{align*} \sup_{s\in[0,T]}|Y_{\varepsilon_2}(s)| & \le \sup_{s\in[0,T]}|Y_{1}(s)| + 2 \max_{s\in[0,T]}|\varphi(s)| \\ & \le M_1(1,T) + M_2(1,T)\Lambda + 2\max_{s\in[0,T]}|\varphi(s)|, \end{align*}

whence, by (3.7) and the fact that $\frac{\varepsilon_2}{\varepsilon_1^\gamma} \le 1$ ,

\begin{align*} Y_{\varepsilon_2}(t+\delta_2) - \varphi(t+\delta_2)& \le \varepsilon_1 + 2^\lambda (K+\Lambda)\delta_1^\lambda + 2\delta_1 \\ &\qquad + 2C\left( 1+ M_1(1,T) + M_2(1,T)\Lambda + 2\max_{s\in[0,T]}|\varphi(s)| \right) \delta_1 \\ & <\frac{\ell}{2}, \end{align*}

which contradicts (3.8).

The contradictions in both cases above prove that $L_0$ is continuous at any point $t\in(0,T)$ .

Step 3. Let us show that $L_0$ is continuous at 0 and at T.

Left-continuity at T. Let $\widetilde T \gt T$ . Define

\begin{align*} \widetilde \varphi(t) = \begin{cases} \varphi(t), \quad t\in [0,T], \\ \varphi(T), \quad t\in[T, \widetilde T], \end{cases} \quad \widetilde Z(t) = \begin{cases} Z(t), \quad t\in [0,T], \\ Z(T), \quad t\in[T, \widetilde T], \end{cases} \\ \widetilde \alpha(t,y) = \begin{cases} \alpha(t,y), \quad t\in [0,T], \\ \alpha(T,y), \quad t\in[T, \widetilde T], \end{cases} \end{align*}

and consider

\begin{equation*} \widetilde Y_\varepsilon(t) = Y(0) + \int_0^t \frac{\varepsilon}{\left(\widetilde Y_\varepsilon(s) - \widetilde \varphi(s)\right)^\gamma}ds - \int_0^t \widetilde \alpha(s, \widetilde Y_\varepsilon(s))ds + \widetilde Z(t). \end{equation*}

Arguments similar to those above prove that $\widetilde Y_0(t) \,:\!=\, \lim_{\varepsilon \downarrow 0} \widetilde Y_\varepsilon(t)$ and

\begin{equation*} \widetilde L_0(t) \,:\!=\, \lim_{\varepsilon \downarrow 0} \int_0^t \frac{\varepsilon}{\left(\widetilde Y_\varepsilon(s) - \widetilde \varphi(s)\right)^\gamma}ds \end{equation*}

are well defined and continuous at any point $t\in(0,\widetilde T)$ . Moreover, $\widetilde Y_\varepsilon$ , $\widetilde L_0$ , and $ \widetilde Y_0(t)$ coincide with $Y_\varepsilon$ , $L_0$ , and $Y_0$ respectively on [0, T]; hence $L_0$ and $Y_0$ are left-continuous at $T \in (0, \widetilde T)$ .

Right-continuity at 0. By Lemma 1.2, each $Y_\varepsilon$ exceeds the process U defined by

\begin{equation*} U(t) = Y(0) - \int_0^t \alpha(s, U(s))ds + Z(t). \end{equation*}

Define $\tau \,:\!=\, \inf\{t\in[0,T]\,|\,U(t) - \varphi(t) = (Y(0) - \varphi(0))/2\} \wedge T$ . Then, for any $t\in[0,\tau]$ , we have $Y_\varepsilon(t) - \varphi(t)\ge U(t) - \varphi(t) \ge \frac{Y(0) - \varphi(0)}{2}$ , and hence

\begin{equation*} L_0(t) = \lim_{\varepsilon \downarrow 0} \int_0^t \frac{\varepsilon}{(Y_\varepsilon(s) - \varphi(s))^\gamma}ds \le \lim_{\varepsilon \downarrow 0} \frac{2^\gamma \tau}{(Y(0) - \varphi(0))^\gamma} \varepsilon = 0, \end{equation*}

i.e. $L_0(0{+}) = 0 = L_0(0)$ .

Step 4. It remains to prove that $L_0$ has points of growth only in those $t\in [0,T]$ such that $Y_0(t) = \varphi(t)$ . Let t be such that $Y_0(t) - \varphi(t) = y \gt 0$ . Then, by continuity of $Y_0$ , there exists an interval $[t-\delta, t+\delta]$ such that $Y_0(s) - \varphi(s) \gt \frac{y}{2}$ for all $s\in [t-\delta, t+\delta]$ , and hence, for any $\varepsilon>0$ , $Y_\varepsilon(s) - \varphi(s) \gt \frac{y}{2}$ . Therefore,

\begin{equation*} L_0(t+\delta) - L_0(t-\delta) = \lim_{\varepsilon \downarrow 0} \int_{t-\delta}^{t+\delta} \frac{\varepsilon}{(Y_\varepsilon(s) - \varphi(s))^\gamma}ds \le \lim_{\varepsilon \downarrow 0} \frac{2^{1+\gamma} \delta}{y^\gamma} \varepsilon = 0, \end{equation*}

i.e. t is not a point of growth for $L_0$ .

Therefore, $L_0$ is a $\varphi$ -reflection function for $Y_0$ , and the pair $(Y_0, L_0)$ is the unique solution to the Skorokhod reflection problem (3.3) as required. Note that the uniform convergence from (3.6) immediately follows from the continuity of $L_0$ (and hence $Y_0$ ) and the pointwise convergence established above.

Remark 3.1. Theorem 3.2 can be regarded as a generalization of Theorem 3.1 from [Reference Mishura and Yurchenko-Tytarenko29], which considered the sandwiched process of the type

\begin{equation*} Y_\varepsilon (t) = Y(0) + \int_0^t\left( \frac{\varepsilon}{Y_\varepsilon(s)} - b Y_\varepsilon(s) \right)ds + \sigma B^H(t), \end{equation*}

where $B^H$ is a fractional Brownian motion with a Hurst index $H>\frac{1}{2}$ . When $\varepsilon \downarrow 0$ , $Y_\varepsilon$ converges to a reflected fractional Ornstein–Uhlenbeck (RFOU) process, and the reflection function of the latter can be represented as

\begin{equation*} L_0(t) = \lim_{\varepsilon \downarrow 0} \int_0^t \frac{\varepsilon}{Y_\varepsilon(s)}ds, \quad t\in [0,T]. \end{equation*}

Theorem 3.2 shows that the reflection function of the RFOU process can also be represented as

\begin{equation*} L_0(t) = \lim_{\varepsilon \downarrow 0} \int_0^t \frac{\varepsilon}{Y^\gamma_{\varepsilon, \gamma}(s)}ds, \quad t\in [0,T], \end{equation*}

where

\begin{equation*} Y_{\varepsilon, \gamma} (t) = Y(0) + \int_0^t\left( \frac{\varepsilon}{Y^\gamma_{\varepsilon, \gamma}(s)} - b Y_{\varepsilon, \gamma}(s) \right)ds + \sigma B^H(t), \end{equation*}

and the value of the limit does not depend on $\gamma$ .

Remark 3.2. Note that the arguments in this subsection are pathwise, and hence they hold without any changes if the lower boundary $\varphi$ is itself a stochastic process.

4. Two-sided sandwich SDE

The fact that, under Assumptions (A1)–(A4), the solution Y of (0.3) stays above the function $\varphi$ is essentially based on the rapid growth to infinity of b(t, Y(t)) whenever Y(t) approaches $\varphi(t)$, $t\ge 0$. The same effect is exploited to get an equation whose solution has both upper and lower boundaries.

Specifically, let $\varphi$ , $\psi$ : $[0,T] \to \mathbb R$ be $\lambda$ -Hölder continuous functions, $\lambda\in(0,1)$ , such that $\varphi(t) \lt \psi(t)$ , $t \in [0,T]$ . For an arbitrary pair $a_1, a_2 \in \mathbb R$ define

(4.1) \begin{equation} \mathcal D_{a_1, a_2} \,:\!=\, \{(t,y)\,|\,t\in[0,T], y\in (\varphi(t)+a_1, \psi(t) - a_2)\}\end{equation}

and consider an SDE of the form (0.3), with Z being, as before, a stochastic process with $\lambda$ -Hölder continuous trajectories, and with the initial value Y(0) and the drift b satisfying the following assumption.

Assumption 4.1. The initial value $\varphi(0) \lt Y(0) \lt \psi(0)$ is deterministic, and the drift b is such that the following hold:

  (B1) The function b: $\mathcal D_{0,0} \to \mathbb R$ is continuous.

  (B2) For any pair $\varepsilon_1$, $\varepsilon_2 \gt0$ such that $\varepsilon_1+\varepsilon_2 \lt \lVert \varphi - \psi\rVert_\infty$, there is a constant $c_{\varepsilon_1, \varepsilon_2} \gt 0$ such that for any $(t,y_1), (t, y_2) \in \mathcal D_{\varepsilon_1, \varepsilon_2}$,

    \begin{equation*} |b(t,y_1) - b(t, y_2)| \le c_{\varepsilon_1, \varepsilon_2} |y_1 - y_2|. \end{equation*}
  (B3) There are constants $\gamma \gt 0$, $c \gt 0$, and $y_{*} \in \left(0, \frac{1}{2}\lVert \varphi - \psi\rVert_\infty\right)$ such that for all $(t,y) \in {\mathcal D}_{0, 0} \setminus {\mathcal D}_{y_*, 0}$,

    \begin{equation*} b(t,y) \ge \frac{c}{\left(y- \varphi(t)\right)^\gamma}, \end{equation*}
    and for all $(t,y) \in {\mathcal D}_{0, 0} \setminus {\mathcal D}_{0, y_*}$ ,
    \begin{equation*} b(t,y) \le - \frac{c}{\left(\psi(t) - y\right)^\gamma}. \end{equation*}
  (B4) The constant $\gamma$ from Assumption (B3) satisfies the condition

    \begin{equation*} \gamma \gt \frac{1-\lambda}{\lambda}, \end{equation*}
    with $\lambda$ being the order of Hölder continuity of $\varphi$ , $\psi$ , and paths of Z.

Example 4.1. Let $\alpha_1$ : $[0,T] \to (0,\infty)$ , $\alpha_2$ : $[0,T] \to (0,\infty)$ and $\alpha_3$ : $\mathcal D_{0,0} \to \mathbb R$ be continuous, with

\begin{equation*} |\alpha_3(t,y_1) - \alpha_3(t,y_2)| \le C|y_1-y_2|, \quad (t,y_1),(t,y_2)\in \mathcal D_{0,0}, \end{equation*}

for some constant $C>0$ . Then

\begin{equation*} b(t,y) \,:\!=\, \frac{\alpha_1(t)}{(y - \varphi(t))^\gamma}- \frac{\alpha_2(t)}{(\psi(t) - y)^\gamma} - \alpha_3(t,y), \quad (t,y)\in\mathcal D_{0,0}, \end{equation*}

satisfies Assumptions (B1)–(B4) provided that $\gamma \gt \frac{1 - \lambda}{\lambda}$.
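For concreteness, such a drift is easy to implement. The following minimal R sketch (R being the language used for the simulations in Section 6) encodes the drift of Example 4.1 for user-supplied ingredients; the function and argument names (`make_drift`, `phi`, `psi`, etc.) are ours and purely illustrative.

```r
# Drift of Example 4.1: two repelling singular terms plus a Lipschitz perturbation.
# phi, psi: the bounds (functions of t); alpha1, alpha2: positive functions of t;
# alpha3: function of (t, y), Lipschitz in y; gamma: exponent, gamma > (1 - lambda)/lambda.
make_drift <- function(phi, psi, alpha1, alpha2, alpha3, gamma) {
  function(t, y) {
    stopifnot(y > phi(t), y < psi(t))  # b(t, .) is defined strictly between the bounds
    alpha1(t) / (y - phi(t))^gamma - alpha2(t) / (psi(t) - y)^gamma - alpha3(t, y)
  }
}

# The drift used later in Simulation 2 (equation (6.2)): gamma = 4,
# phi(t) = cos(5t), psi(t) = 3 + cos(5t), alpha1 = alpha2 = 1, alpha3 = 0.
b_62 <- make_drift(function(t) cos(5 * t), function(t) 3 + cos(5 * t),
                   function(t) 1, function(t) 1, function(t, y) 0, gamma = 4)
```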

Following the arguments of Subsection 2.1, it is straightforward to verify that the following result holds.

Theorem 4.1. Let Assumptions (B1)–(B4) hold. Then the equation (0.3) has a unique solution $Y = \{Y(t),\,t\in [0,T]\}$ such that

(4.2) \begin{equation} \varphi(t) \lt Y(t) <\psi(t), \quad t\in[0,T]. \end{equation}

Moreover, using the arguments in the proof of Theorem 2.4, one can check that the bounds (4.2) can be refined as follows.

Theorem 4.2. Let $r>0$ be fixed.

  1. Under Assumptions (B1)–(B4), there exists a constant $L \gt0$ depending only on $\lambda$, $\gamma$, and the constant c from Assumption (B3) such that the solution Y to the equation (0.3) has the property

    \begin{equation*} \varphi(t) + L \widetilde \Lambda^{- \frac{1}{\gamma\lambda + \lambda - 1}} \,\le\, Y(t) \,\le\, \psi(t) - L \widetilde \Lambda^{- \frac{1}{\gamma\lambda + \lambda - 1}}, \quad t\in[0,T], \end{equation*}
    where
    \begin{equation*} \widetilde \Lambda \,:\!=\, \max\left\{ \Lambda, K, \left(4 \beta\right)^{\lambda - 1} \left(\frac{(Y(0) - \varphi(0)) \wedge y_* \wedge (\psi(0) - Y(0))}{2}\right)^{1 - \lambda - \gamma\lambda} \right\}, \end{equation*}
    with
    \begin{equation*} \beta \,:\!=\, \frac{ \lambda^{\frac{\lambda}{1 - \lambda}} - \lambda^{\frac{1}{1 - \lambda}} }{ \left(2^{\gamma} c \right)^{\frac{\lambda}{1 - \lambda}}} \gt 0 \end{equation*}
    and K being such that
    \begin{equation*} |\varphi(t) - \varphi(s)| + |\psi(t) - \psi(s)| \le K |t-s|^{\lambda}, \quad t,s\in[0,T]. \end{equation*}
  2. If $\Lambda$ can be chosen in such a way that $\mathbb E \Lambda^{\frac{r}{\gamma\lambda + \lambda - 1}} \lt \infty$, then

    \begin{equation*} \mathbb E \left[ \sup_{t\in [0,T]} (Y(t) - \varphi(t))^{-r} \right] \lt \infty \quad\text{and}\quad \mathbb E \left[ \sup_{t\in [0,T]} (\psi(t) - Y(t))^{-r} \right] \lt \infty. \end{equation*}

5. Stochastic volatility: generalized CIR and CEV

In this section, we show how two classical processes used in stochastic volatility modeling can be generalized under our framework.

5.1. CIR and CEV processes driven by a Hölder continuous noise

Let $\varphi \equiv 0$ . Consider

\begin{equation*} b(y) = \frac{\kappa}{y^{\frac{\alpha}{1-\alpha}}} - \theta y,\end{equation*}

where $\kappa$ and $\theta$ are positive constants, $\alpha \in \left(0, 1\right)$, and Z is a process with $\lambda$-Hölder continuous paths such that $\alpha + \lambda \gt1$. It is easy to verify that Assumptions (A1)–(A4) hold with $\gamma = \frac{\alpha}{1-\alpha}$; in particular, condition (A4) becomes $\frac{\alpha}{1-\alpha} \gt \frac{1-\lambda}{\lambda}$, which is exactly $\alpha + \lambda \gt 1$. Hence the process Y satisfying the SDE

(5.1) \begin{equation} dY(t) = \left(\frac{\kappa}{Y^{\frac{\alpha}{1-\alpha}}(t)} - \theta Y(t)\right)dt + dZ(t)\end{equation}

exists and is unique and positive. Furthermore, as noted in Theorems 2.3 and 2.4, if the corresponding Hölder continuity constant $\Lambda$ can be chosen to have moments of all positive orders, then Y has moments of all real orders, including the negative ones.

The process $X = \{X(t),\,t\in[0,T]\}$ such that

\begin{equation*} X(t) = Y^{\frac{1}{1 - \alpha}}(t), \quad t\in [0,T],\end{equation*}

can be interpreted as a generalization of a CIR (if $\alpha = \frac{1}{2}$ ) or CEV (for general $\alpha$ ) process in the following sense. Assume that $\lambda \gt \frac{1}{2}$ . Fix the partition $0 = t_0 \lt t_1 \lt t_2 \lt ... \lt t_n = t$ , where $t\in[0,T]$ , $\lvert \Delta t \rvert \,:\!=\, \max_{k=1,...,n}(t_{k}-t_{k-1})$ . It is clear that

\begin{equation*} X(t) = X(0) + \sum_{k=1}^n (X({t_k}) - X({t_{k-1}})) = X(0) + \sum_{k=1}^n (Y^\frac{1}{1-\alpha}({t_k}) - Y^\frac{1}{1-\alpha}({t_{k-1}})),\end{equation*}

so using the Taylor expansion we obtain that

\begin{align*} X(t) = X(0) + \sum_{k=1}^n &\bigg( \frac{1}{1-\alpha} Y^\frac{\alpha}{1-\alpha}({t_{k-1}}) (Y({t_k}) - Y({t_{k-1}})) \\ &\qquad + \frac{\alpha \Theta_{n,k}^{\frac{2\alpha-1}{1-\alpha}}}{2(1-\alpha)^2}(Y({t_k}) - Y({t_{k-1}}))^2\bigg),\end{align*}

with $\Theta_{n,k}$ being a real value between $Y({t_k})$ and $Y({t_{k-1}})$ .

Note that, by Theorem 2.3 (for $\alpha \in \left[ \frac 1 2, 1\right)$ ) or Theorem 2.4 (for $\alpha \in \left( 0,\frac{1}{2} \right)$ ),

\begin{equation*} \sup_{\substack{n\ge 1, \\ k = 1,...,n}} \Theta_{n,k}^{\frac{2\alpha-1}{1-\alpha}} \lt \infty.\end{equation*}

Moreover, using Equation (5.1) and Theorem 2.4, it is easy to prove that Y has trajectories which are $\lambda$ -Hölder continuous. Therefore, since $\lambda>\frac{1}{2}$ ,

(5.2) \begin{equation} \sum_{k=1}^n \frac{\alpha \Theta_{n,k}^{\frac{2\alpha-1}{1-\alpha}}}{2(1-\alpha)^2}(Y({t_k}) - Y({t_{k-1}}))^2 \to 0, \quad \lvert \Delta t\rvert \to 0,\end{equation}

and

(5.3) \begin{equation}\begin{aligned} \sum_{k=1}^n &\frac{1}{1-\alpha} Y^\frac{\alpha}{1-\alpha}({t_{k-1}}) (Y({t_k}) - Y({t_{k-1}})) = \frac{1}{1-\alpha} \sum_{k=1}^n X^\alpha({t_{k-1}}) (Y({t_k}) - Y({t_{k-1}})) \\ &= \frac{1}{1-\alpha} \sum_{k=1}^n X^\alpha({t_{k-1}})\left(\int_{t_{k-1}}^{t_k} \left(\frac{\kappa}{Y(s)^{\frac{\alpha}{1-\alpha}}} - \theta Y(s)\right)ds + (Z({t_{k}}) - Z({t_{k-1}}))\right) \\ &= \frac{1}{1-\alpha} \sum_{k=1}^n X^\alpha({t_{k-1}})\int_{t_{k-1}}^{t_k} \left(\frac{\kappa}{X^\alpha(s)} - \theta X^{1-\alpha}(s)\right)ds \\ & \quad + \frac{1 }{1-\alpha} \sum_{k=1}^n X^\alpha({t_{k-1}})(Z({t_{k}}) - Z({t_{k-1}})) \\ &\to \frac{1}{1-\alpha} \int_0^t (\kappa - \theta X(s))ds + \frac{1 }{1-\alpha} \int_0^t X^\alpha(s) dZ(s), \quad \lvert \Delta t \rvert \to 0.\end{aligned}\end{equation}

Note that the integral with respect to Z in (5.3) exists as a pathwise limit of Riemann–Stieltjes integral sums, owing to sufficient Hölder continuity of both the integrator and integrand; see e.g. [Reference Zähle37].

Taking into account all of the above, X satisfies (pathwise) the SDE of CIR (or CEV) type, namely

(5.4) \begin{equation}\begin{aligned} dX(t) &= \left(\frac{\kappa}{1-\alpha} - \frac{\theta}{1-\alpha} X(t)\right)dt + \frac{1}{1-\alpha} X^\alpha(t) dZ(t) \\ & = (\widetilde \kappa - \widetilde \theta X(t))dt + \widetilde \nu X^\alpha(t) dZ(t),\end{aligned}\end{equation}

where $\widetilde \kappa \,:\!=\, \frac{\kappa}{1-\alpha}$, $\widetilde \theta \,:\!=\, \frac{\theta}{1-\alpha}$, $\widetilde \nu \,:\!=\, \frac{1}{1-\alpha}$, and the integral with respect to Z is the pathwise Riemann–Stieltjes integral.

Remark 5.1. The integral $\int_0^t X^\alpha(s) dZ(s)$ arising above is a pathwise Young integral; see e.g. [Reference Friz and Hairer18, Section 4.1] and references therein.

Remark 5.2. Note that the reasoning described above also implies that, for $\alpha\in\left[\frac{1}{2}, 1\right)$ and $\lambda>\frac{1}{2}$ , the SDE (5.4), where the integral with respect to Z is understood pathwise, has a unique strong solution in the class of non-negative stochastic processes with $\lambda$ -Hölder continuous trajectories. Indeed, $\{Y^{\frac{1}{1-\alpha}}(t),\,t\in[0,T]\}$ with Y defined by (5.1) is a solution to (5.4). Moreover, if X is another solution to (5.4), then by the chain rule [Reference Zähle37, Theorem 4.3.1], the process $\{X^{1-\alpha}(t),\,t\in[0,T]\}$ must satisfy the SDE (5.1) until the first moment of hitting zero. However, the SDE (5.1) has a unique solution that never hits zero, and thus $X^{1-\alpha}$ coincides with Y.

Remark 5.3. Some of the properties of the process Y given by (5.1) in the case of $\alpha = \frac{1}{2}$ and Z being a fractional Brownian motion with $H>\frac{1}{2}$ were discussed in [Reference Mishura and Yurchenko-Tytarenko26].

5.2. Mixed-fractional CEV process

Assume that $\kappa$ , $\theta$ , $\nu_1$ , $\nu_2$ are positive constants, $B = \{B(t),\,t\in[0,T]\}$ is a standard Wiener process, $B^H = \{B^H(t),\,t\in[0,T]\}$ is a fractional Brownian motion independent of B with $H\in\left(0,1\right)$ , $Z = \nu_1 B + \nu_2 B^H$ , $\alpha \in\left(\frac{1}{2}, 1\right)$ is such that $H \wedge \frac{1}{2} + \alpha \gt 1$ , and the function b has the form

\begin{equation*} b(y) = \frac{\kappa}{y^{\frac{\alpha}{1-\alpha}}} - \frac{\alpha \nu_1^2}{2(1-\alpha)y} - \theta y.\end{equation*}

Then the process Y defined by the equation

(5.5) \begin{equation} dY(t) = \left(\frac{\kappa}{Y(t)^{\frac{\alpha}{1-\alpha}}} - \frac{\alpha \nu_1^2}{2(1-\alpha)Y(t)} - \theta Y(t)\right)dt + \nu_1 dB(t) + \nu_2 dB^H(t)\end{equation}

exists, is unique and positive, and has all the moments of real orders.

If $H>\frac{1}{2}$ , just as in Subsection 5.1, the process $X(t) \,:\!=\, Y^{\frac{1}{1 - \alpha}}(t)$ , $t\in [0,T]$ , can be interpreted as a generalization of the CEV process.
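Before stating this precisely, here is a quick heuristic consistency check (ours, complementing the rigorous proof below) of the Itô correction term in (5.5): applying the Itô formula formally to $Y = X^{1-\alpha}$, with X given by (5.6) below, and noting that only the Wiener component of the noise produces a second-order term (the fractional Brownian motion with $H>\frac{1}{2}$ has zero quadratic variation), we get

\begin{align*} dY(t) &= (1-\alpha)X^{-\alpha}(t)\,dX(t) - \frac{\alpha(1-\alpha)}{2}\, X^{-\alpha - 1}(t)\, \frac{\nu_1^2}{(1-\alpha)^2}\, X^{2\alpha}(t)\, dt \\ &= \left(\frac{\kappa}{Y^{\frac{\alpha}{1-\alpha}}(t)} - \frac{\alpha \nu_1^2}{2(1-\alpha)Y(t)} - \theta Y(t)\right)dt + \nu_1 dB(t) + \nu_2 dB^H(t), \end{align*}

since $X^{\alpha - 1}(t) = Y^{-1}(t)$; this is exactly (5.5).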

Proposition 5.1. Let $H>\frac{1}{2}$ . Then the process $X(t) \,:\!=\, Y(t)^{\frac{1}{1 - \alpha}}$ , $t\in [0,T]$ , satisfies the SDE of the form

(5.6) \begin{equation} dX(t) = \left(\frac{\kappa}{1-\alpha} - \frac{\theta}{1-\alpha} X(t)\right)dt + \frac{\nu_1}{1-\alpha} X^\alpha(t) dB(t) + \frac{\nu_2}{1-\alpha} X^\alpha(t) dB^H(t), \end{equation}

where the integral with respect to B is the regular Itô integral (with respect to the filtration generated jointly by $(B, B^H)$ ), and the integral with respect to $B^H$ is understood as the $L^2$ -limit of Riemann–Stieltjes integral sums.

Remark 5.4. Note that B is a martingale with respect to the filtration generated jointly by $(B,B^H)$ , $X^\alpha$ is adapted to this filtration, and

\begin{equation*} \int_0^t \mathbb E[X^{2\alpha} (s)]ds \lt \infty, \end{equation*}

i.e. the Itô integral $\int_0^t X^{\alpha}(s) dB(s)$ is well defined.

Proof. We will use an argument that is similar to the one presented in Subsection 5.1, with one main difference: since we are going to treat the integral with respect to the Brownian motion B as a regular Itô integral, all the convergences (including convergence of integral sums with respect to $B^H$ ) must be considered in the $L^2$ sense. For the reader’s convenience, we split the proof into several steps.

Step 1. First we will prove that the integral $\int_0^t X^\alpha(s) dB^H(s)$ is well defined as the $L^2$ -limit of Riemann–Stieltjes integral sums. Let $0 = t_0 \lt t_1 \lt t_2 \lt ... \lt t_n = t$ be a partition of [0, t] with the mesh $|\Delta t| \,:\!=\, \max_{k=0,...,n-1}(t_{k+1} - t_k)$ .

Choose $\lambda \in \left(\frac{1}{2}, H\right)$ , $\lambda^{\prime} \in \left(0,\frac{1}{2}\right)$ , and $\varepsilon \gt0$ such that $\lambda + \lambda^{\prime} \gt 1$ and $\lambda + \varepsilon \lt H$ , $\lambda^{\prime} + \varepsilon \lt \frac{1}{2}$ . Using Theorem 2.4 and the fact that for any $\lambda^{\prime} \in \left(0,\frac{1}{2}\right)$ the random variable $\Lambda_{Z,\lambda^{\prime}+\varepsilon}$ which corresponds to the noise Z and Hölder order $\lambda^{\prime}+\varepsilon$ can be chosen to have moments of all orders, it is easy to prove that there exists a random variable $\Upsilon_X$ having moments of all orders such that

\begin{equation*} |X^\alpha(t) - X^\alpha(s)| \le \Upsilon_X |t-s|^{\lambda^{\prime} + \varepsilon}, \quad s,t\in[0,T], \quad a.s. \end{equation*}

By the Young–Loève inequality (see e.g. [Reference Friz and Victoir19, Theorem 6.8]), it holds a.s. that

\begin{align*} \bigg|\int_0^t X^\alpha(s) dB^H(s) &- \sum_{k=0}^{n-1} X^\alpha({t_k})(B^H({t_{k+1}}) - B^H({t_k}))\bigg| \\ &\le \sum_{k=0}^{n-1} \left|\int_{t_k}^{t_{k+1}} X^\alpha(s) dB^H(s) - X^\alpha({t_k})(B^H({t_{k+1}}) - B^H({t_k}))\right| \\ &\le \frac{1}{1-2^{1-(\lambda + \lambda^{\prime})}}\sum_{k=0}^{n-1} [X^\alpha]_{\lambda^{\prime}; [t_k, t_{k+1}]} [B^H]_{\lambda; [t_k, t_{k+1}]}, \end{align*}

where

\begin{equation*} [f]_{\lambda; [t,t^{\prime}]} \,:\!=\, \left(\sup_{\Pi[t, t^{\prime}]} \sum_{l=0}^{m-1} |f(s_{l+1}) - f(s_l)|^{\frac{1}{\lambda}}\right)^\lambda, \end{equation*}

with the supremum taken over all partitions $\Pi[t,t^{\prime}] = \{t=s_0 \lt ... \lt s_m = t^{\prime}\}$ of $[t, t^{\prime}]$.

It is clear that, a.s.,

\begin{align*} [X^\alpha]_{\lambda^{\prime}; [t_k, t_{k+1}]} &= \left(\sup_{\Pi[t_k, t_{k+1}]} \sum_{l=0}^{m-1} |X^\alpha(s_{l+1}) - X^\alpha(s_l)|^{\frac{1}{\lambda^{\prime}}}\right)^{\lambda^{\prime}} \\ &\le \Upsilon_X \left(\sup_{\Pi[t_k, t_{k+1}]} \sum_{l=0}^{m-1} (s_{l+1} - s_l)^{1+ \frac{\varepsilon}{\lambda^{\prime}}}\right)^{\lambda^{\prime}} \\ &\le \Upsilon_X |\Delta t|^{\lambda^{\prime}+\varepsilon} \end{align*}

and similarly

\begin{align*} [B^H]_{\lambda; [t_k, t_{k+1}]} \le \Lambda_{B^H} |\Delta t|^{\lambda+\varepsilon}, \end{align*}

where $\Lambda_{B^H}$ has moments of all orders and

\begin{equation*} |B^H(t) - B^H(s)| \le \Lambda_{B^H} |t-s|^{\lambda +\varepsilon}, \end{equation*}

whence

\begin{align*} \mathbb E\bigg|\int_0^t X^\alpha(s) dB^H(s) - &\sum_{k=0}^{n-1} X^\alpha({t_k})(B^H({t_{k+1}}) - B^H({t_k}))\bigg|^2 \\ &\le \mathbb E\left[\left(\frac{1}{1-2^{1-(\lambda + \lambda^{\prime})}}\sum_{k=0}^{n-1} [X^\alpha]_{\lambda^{\prime}; [t_k, t_{k+1}]} [B^H]_{\lambda; [t_k, t_{k+1}]}\right)^2\right] \\ &\le \mathbb E\left[\Lambda^2_{B^H} \Upsilon_X^2 \frac{1}{\left(1-2^{1-(\lambda + \lambda^{\prime})}\right)^2}\left(\sum_{k=0}^{n-1} |\Delta t|^{\lambda + \lambda^{\prime} + 2\varepsilon}\right)^2\right] \to 0 \end{align*}

as $ |\Delta t|\to 0$ . It is now enough to note that each Riemann–Stieltjes sum is in $L^2$ (thanks to the fact that $\mathbb E[\sup_{t\in[0,T]} X^r(t)] \lt \infty$ for all $r>0$ ), so the integral $\int_0^t X^\alpha(s) dB^H(s)$ is indeed well defined as the $L^2$ -limit of Riemann–Stieltjes integral sums.

Step 2. Now we would like to get the representation (5.6). In order to do that, one should follow the proof of the Itô formula in a similar manner as in Subsection 5.1. Namely, for a partition $0 = t_0 \lt t_1 \lt t_2 \lt ... \lt t_n = t$ one can write

\begin{align*} X(t) &= X(0) + \sum_{k=1}^{n} \left( Y^{\frac{1}{1-\alpha}}({t_k}) - Y^{\frac{1}{1-\alpha}}({t_{k-1}})\right) \\ &= X(0) + \frac{1}{1-\alpha} \sum_{k=1}^{n} \left( Y^{\frac{\alpha}{1-\alpha}}({t_{k-1}}) (Y({t_k}) - Y({t_{k-1}}))\right) \\ &\quad + \frac{1}{2} \frac{\alpha}{(1-\alpha)^2} \sum_{k=1}^{n} \left(Y^{\frac{2\alpha -1}{1 - \alpha}}({t_{k-1}}) (Y({t_k}) - Y({t_{k-1}}))^2\right) \\ &\quad+ \frac{1}{6} \frac{\alpha(2\alpha -1)}{(1-\alpha)^3} \sum_{k=1}^n \left( \Theta_{n,k}^{\frac{3\alpha - 2}{1-\alpha}}(Y({t_k}) - Y({t_{k-1}}))^3\right), \end{align*}

where $\Theta_{n,k}$ is a value between $Y({t_{k-1}})$ and $Y({t_{k}})$ .

Note that, using Theorems 2.3 and 2.4, it is easy to check that for any $\lambda^{\prime}\in\left(\frac{1}{3}, \frac{1}{2}\right)$ there exists a random variable $\Upsilon_Y$ having moments of all orders such that

\begin{equation*} |Y(t) - Y(s)| \le \Upsilon_Y |t-s|^{\lambda^{\prime}}. \end{equation*}

Furthermore, by Theorem 2.3 (for $\alpha \in \left[\frac{2}{3}, 1\right)$, when the exponent $\frac{3\alpha - 2}{1-\alpha}$ is non-negative) and Theorem 2.4 (for $\alpha \in \left(\frac{1}{2},\frac{2}{3}\right)$, when it is negative), it is clear that there exists a random variable $\Theta \gt0$ that does not depend on the partition and has moments of all orders such that $\Theta_{n,k}^{\frac{3\alpha - 2}{1-\alpha}} \le \Theta$, whence

\begin{equation*} \left|\sum_{k=1}^n \Theta_{n,k}^{\frac{3\alpha - 2}{1-\alpha}}(Y({t_k}) - Y({t_{k-1}}))^3\right| \le \Theta \Upsilon_Y^3\sum_{k=1}^n (t_k - t_{k-1})^{3\lambda^{\prime}} \xrightarrow{L^2} 0, \quad |\Delta t| \to 0. \end{equation*}

Using Step 1, it is also straightforward to verify that

\begin{align*} \frac{1}{1-\alpha} &\sum_{k=1}^{n} \left( Y^{\frac{\alpha}{1-\alpha}}({t_{k-1}}) (Y({t_k}) - Y({t_{k-1}}))\right) \\ & \xrightarrow{L^2} \frac{1}{1-\alpha}\int_0^t \left( \kappa - \theta X(s) \right)ds + \frac{\nu_1}{1-\alpha} \int_0^t X^\alpha(s) dB(s) \\ & \quad + \frac{\nu_2}{1-\alpha} \int_0^t X^\alpha(s) dB^H(s) \\ & \quad - \frac{\alpha\nu_1^2}{2(1-\alpha)^2} \int_0^t Y^{\frac{2\alpha -1}{1 - \alpha}}(s) ds, \quad |\Delta t| \to 0 \end{align*}

and

\begin{align*} \frac{1}{2} \frac{\alpha}{(1-\alpha)^2} &\sum_{k=1}^{n} \left(Y^{\frac{2\alpha -1}{1 - \alpha}}({t_{k-1}}) (Y({t_k}) - Y({t_{k-1}}))^2\right) \\ &\xrightarrow{L^2} \frac{\alpha\nu_1^2}{2(1-\alpha)^2} \int_0^t Y^{\frac{2\alpha -1}{1 - \alpha}}(s) ds, \quad |\Delta t| \to 0, \end{align*}

which concludes the proof.

6. Simulations

To conclude the work, we illustrate the results presented in this paper with simulations. Details on the approximation scheme used in this section can be found in Appendix A. All the simulations were performed in the R programming language on a system with an Intel Core i9-9900K CPU and 64 GB RAM. In order to simulate values of fractional Brownian motion on a discrete grid, we used the R package somebm, which utilizes the circulant embedding approach from [Reference Kroese and Botev24, Section 12.4.2].
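For reproducibility, fBm on a grid can also be generated without external packages. The following minimal R sketch (ours) uses a plain Cholesky factorization of the fBm covariance matrix; it is adequate for the small grids considered here, although the circulant embedding method of [Reference Kroese and Botev24] scales far better.

```r
# Sample fractional Brownian motion B^H at t_1 < ... < t_n (equidistant, t_n = tmax),
# using the covariance E[B^H(s)B^H(t)] = (s^(2H) + t^(2H) - |t - s|^(2H)) / 2.
fbm_cholesky <- function(n, H, tmax = 1) {
  tt <- seq(tmax / n, tmax, length.out = n)
  G  <- outer(tt, tt, function(s, u) 0.5 * (s^(2 * H) + u^(2 * H) - abs(s - u)^(2 * H)))
  R  <- chol(G)                  # upper triangular factor, G = t(R) %*% R
  as.vector(t(R) %*% rnorm(n))   # one sample path evaluated at t_1, ..., t_n
}
```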

6.1. Simulation 1: square root of fractional CIR process

As the first example, consider a particular example of the process described in Subsection 5.1, namely the square root of the fractional CIR process:

(6.1) \begin{equation} Y(t) = Y(0) + \frac{1}{2} \int_0^t \left(\frac{\kappa}{Y(s)} - \theta Y(s)\right)ds + \frac{\sigma}{2} B^H(t),\quad t\in[0,T],\end{equation}

where Y(0), $\kappa$ , $\theta$ , and $\sigma$ are positive constants and $B^H$ is a fractional Brownian motion with Hurst index $H>\frac{1}{2}$ . In our simulations, we take $T=1$ , $Y(0) = 1$ , $\kappa = 3$ , $\theta =1$ , $\sigma = 1$ , $H = 0.7$ . Ten sample paths of (6.1) are given in Figure 1.
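A minimal R implementation of one path of (6.1) with the drift-capped Euler scheme (A.5) of Appendix A might look as follows, using the `fbm_cholesky` sketch above; the cap level `n_cap = 20` matches the value $n=20$ reported in the caption of Figure 1, while the step count `N` is our choice. Since the drift of (6.1) is strictly decreasing in y on $(0,\infty)$, the cutoff level $y_n$ of (A.1) is available in closed form here.

```r
# One path of (6.1): dY = (kappa/Y - theta*Y)/2 dt + (sigma/2) dB^H, capped Euler scheme.
# The cutoff y_n solves (kappa/y - theta*y)/2 = n, i.e. y_n = (sqrt(n^2 + theta*kappa) - n)/theta.
simulate_61 <- function(N = 200, tmax = 1, Y0 = 1, kappa = 3, theta = 1,
                        sigma = 1, H = 0.7, n_cap = 20) {
  dt  <- tmax / N
  y_n <- (sqrt(n_cap^2 + theta * kappa) - n_cap) / theta
  BH  <- c(0, fbm_cholesky(N, H, tmax))   # B^H(t_0), ..., B^H(t_N)
  Y   <- numeric(N + 1)
  Y[1] <- Y0
  for (k in 1:N) {
    b <- if (Y[k] < y_n) n_cap else 0.5 * (kappa / Y[k] - theta * Y[k])
    Y[k + 1] <- Y[k] + b * dt + 0.5 * sigma * (BH[k + 1] - BH[k])
  }
  Y  # the fractional CIR path itself is then recovered as X = Y^2
}
```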

Figure 1. Ten sample paths of (6.1); $T=1$ , $Y(0) = 1$ , $\kappa = 3$ , $\theta =1$ , $\sigma = 1$ , $H = 0.7$ , $n=20$ .

6.2. Simulation 2: two-sided sandwiched process with equidistant bounds

As the second example, we take

(6.2) \begin{equation} Y(t) = 2.5 + \int_0^t \left(\frac{1}{(Y(s) - \cos(5s))^4} - \frac{1}{(3+ \cos(5s) - Y(s))^4}\right)ds + 3 B^H(t),\quad t\in[0,1],\end{equation}

with

\begin{equation*} \psi(t) - \varphi(t) = 3+ \cos(5t) - \cos(5t) = 3, \quad t\in[0,1].\end{equation*}

Ten sample paths of (6.2) are presented in Figure 2.
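A two-sided analogue (ours) for (6.2) clips the drift to $[-n, n]$; since each singular term of this drift dominates only near its own boundary, simple clipping is a practical stand-in for the exact cutoff (A.8) of Appendix A. The Hurst index `H` and cap `n_cap` below are our illustrative choices, not values prescribed by the paper.

```r
# One path of (6.2), sandwiched between phi(t) = cos(5t) and psi(t) = 3 + cos(5t).
simulate_62 <- function(N = 1000, tmax = 1, Y0 = 2.5, H = 0.7, n_cap = 50) {
  dt <- tmax / N
  tt <- seq(0, tmax, length.out = N + 1)
  BH <- c(0, fbm_cholesky(N, H, tmax))
  Y  <- numeric(N + 1)
  Y[1] <- Y0
  for (k in 1:N) {
    b <- 1 / (Y[k] - cos(5 * tt[k]))^4 - 1 / (3 + cos(5 * tt[k]) - Y[k])^4
    Y[k + 1] <- Y[k] + max(min(b, n_cap), -n_cap) * dt + 3 * (BH[k + 1] - BH[k])
  }
  Y
}
```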

Figure 2. Ten sample paths of (6.2).

6.3. Simulation 3: two-sided sandwiched process with shrinking bounds

As our final illustration, we consider

(6.3) \begin{equation} Y(t) = \int_0^t \left(\frac{1}{(Y(s) + e^{-s})^4} - \frac{1}{(e^{-s} - Y(s))^4}\right)ds + B^H(t),\quad t\in[0,1],\end{equation}

with

\begin{equation*} \psi(t) - \varphi(t) = 2e^{-t} \to 0, \quad t\to\infty.\end{equation*}

Ten sample paths of (6.3) are presented in Figure 3.

Figure 3. Ten sample paths of (6.3).

Appendix A. The numerical scheme

In this section, we present the scheme used in Section 6 to simulate the paths of sandwiched processes. One must note that this scheme does not have the virtue of preserving ‘sandwiched-ness’, and it has a worse convergence rate than some alternative schemes (see e.g. [Reference Hong, Huang, Kamrani and Wang22, Reference Zhang and Yuan38] for the case of fractional Brownian motion). On the other hand, it allows for much weaker assumptions on both the drift and the noise and is much simpler from the point of view of implementation.

We first consider the one-sided sandwich case. In addition to (A1)–(A4), we will require local Hölder continuity of the drift b with respect to t in the following sense:

  (A5) for any $\varepsilon \gt 0$ there exists $c_{\varepsilon} \gt 0$ such that for any (t, y), $(s,y) \in \mathcal D_{\varepsilon}$,

    \begin{equation*}|b(t,y) - b(s, y)| \le c_\varepsilon |t-s|^\lambda.\end{equation*}

Obviously, without loss of generality one can assume that the constant $c_\varepsilon$ is the same for Assumptions (A2) and (A5).

We stress that the drift b is not globally Lipschitz, and furthermore, for any $t\in[0,T]$, the value b(t, y) is not defined for $y \lt \varphi(t)$. Hence the classical Euler scheme applied directly to the equation (0.3) fails, since it does not guarantee that the discretized version of the process stays above $\varphi$. A straightforward way to overcome this issue is to discretize not the process Y itself, but its approximation $\widetilde Y^{(n)}$ obtained by ‘leveling’ the singularity in the drift. Namely, fix

\begin{equation*} n_0 \gt \max_{t\in[0,T]}|b(t, \varphi(t) + y_*)|,\end{equation*}

where $y_*$ is from Assumption (A3). For an arbitrary $n \ge n_0$ , define the function $y_n$ : $[0,T] \to \mathcal D_0$ by

\begin{equation*} y_n(t) \,:\!=\, \min\{ y> \varphi(t):\,b(t,y) \le n \},\end{equation*}

and consider

(A.1) \begin{equation} \widetilde b_n(t, y) \,:\!=\, \begin{cases} b(t, y), &\quad y \ge y_n(t), \\ n, &\quad y \lt y_n(t). \end{cases}\end{equation}

By (A3), $b(t, y) \ge n$ for all $y\in \left(\varphi(t), \varphi(t) + \left(\frac{c}{n}\right)^{\frac{1}{\gamma}}\right)$ ; therefore $y_n(t) \ge \varphi(t) + \left(\frac{c}{n}\right)^{\frac{1}{\gamma}}$ and thus, by (A2),

(A.2) \begin{equation}\begin{gathered} |\widetilde b_n(t,y_1) - \widetilde b_n(t,y_2)| \le c_n|y_1-y_2|, \quad t\in[0,T], \quad y_1,y_2\in \mathbb R, \\ |\widetilde b_n(t_1,y) - \widetilde b_n(t_2,y)| \le c_n|t_1-t_2|^\lambda, \quad t_1,t_2\in[0,T], \quad y\in \mathbb R,\end{gathered}\end{equation}

where $c_n$ denotes the constant from (A2) and (A5) which corresponds to $\varepsilon = \left(\frac{c}{n}\right)^{\frac{1}{\gamma}}$ . In particular, this implies that the SDE

(A.3) \begin{equation} d\widetilde Y^{(n)}(t) = \widetilde b_n(t,\widetilde Y^{(n)}(t))dt + d Z(t), \quad \widetilde Y^{(n)}(0) = Y(0) \gt \varphi(0),\end{equation}

has a unique pathwise solution which can be approximated by the Euler scheme.
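In code, the scheme is a plain Euler loop with the capped drift (A.1). Below is a generic R sketch (interface and names are ours) that takes the cutoff level $y_n(\cdot)$ as an input: for simple drifts it is available in closed form, and otherwise it can be located numerically (e.g. with `uniroot`), since by (A3) the drift $b(t,\cdot)$ exceeds the level n near the lower boundary.

```r
# Euler scheme (A.5) for the one-sided sandwich SDE with capped drift (A.1).
# b(t, y): the drift; y_cut(t): the level y_n(t) below which the drift is frozen at n_cap;
# Z: noise values on the uniform grid t_0, ..., t_N of [0, tmax].
euler_sandwich <- function(b, y_cut, Z, Y0, tmax, n_cap) {
  N  <- length(Z) - 1
  dt <- tmax / N
  tt <- seq(0, tmax, length.out = N + 1)
  Y  <- numeric(N + 1)
  Y[1] <- Y0
  for (k in 1:N) {
    drift <- if (Y[k] < y_cut(tt[k])) n_cap else b(tt[k], Y[k])
    Y[k + 1] <- Y[k] + drift * dt + (Z[k + 1] - Z[k])
  }
  Y
}
```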

Remark A.1. In this section, by C we will denote any positive constant that does not depend on the order of approximation n or the partition, and whose exact value is not important. Note that C may change from line to line (or even within one line).

Regarding the process $\widetilde Y^{(n)}$ , we have the following result.

Proposition A.1. Let Assumptions (A1)–(A4) hold. Then, for any $r>0$, there exists a constant $C>0$ that does not depend on n such that

\begin{equation*} \max_{t\in[0,T]}|\widetilde Y^{(n)}(t)|^r \le C\left(1+\Lambda^r\right). \end{equation*}

Proof. Fix $n \ge n_0$ , take $\varepsilon \gt 0$ , and consider the processes

\begin{equation*} \widetilde Y_\varepsilon (t) = Y(0) + \int_0^t \left(b(s, \widetilde Y_\varepsilon (s)) + \varepsilon \right)ds + Z(t) \end{equation*}

and

\begin{equation*} \widetilde Y^{(n_0)}_\varepsilon (t) = Y(0) + \int_0^t \left(\widetilde b_{n_0}(s, \widetilde Y^{(n_0)}_\varepsilon (s)) - \varepsilon \right)ds + Z(t). \end{equation*}

It is easy to see that there exists $C>0$ that does not depend on n such that

\begin{equation*} |\widetilde b_{n_0}(t,y)| \le C(1+ |y|); \end{equation*}

therefore there exists $C>0$ such that

\begin{align*} |\widetilde Y^{(n_0)}_\varepsilon(t)| & \le |Y(0)| + \varepsilon T + \int_0^t |\widetilde b_{n_0}(s, \widetilde Y^{(n_0)}_\varepsilon(s) )|ds + |Z(t)| \\ &\le C + C \int_0^t |\widetilde Y^{(n_0)}_\varepsilon(s)|ds + \Lambda T^\lambda. \end{align*}

Hence, by Gronwall’s inequality,

\begin{equation*} \max_{t\in[0,T]}|\widetilde Y^{(n_0)}_\varepsilon(t)| \le C\left(1+\Lambda\right) \end{equation*}

for some constant $C>0$ . Moreover, by Theorem 2.3, there exists $C>0$ such that

\begin{equation*} \max_{t\in[0,T]}|\widetilde Y_\varepsilon(t)| \le C\left(1+\Lambda\right). \end{equation*}

The result now follows from the fact that, by Lemma 1.2,

\begin{equation*} \widetilde Y^{(n_0)}_\varepsilon(t) \le \widetilde Y^{(n)}(t) \le \widetilde Y_\varepsilon(t), \quad t\in [0,T]. \end{equation*}

Before proceeding to the main theorem of the section, let us provide another simple auxiliary proposition.

Proposition A.2. Let Assumptions (A1)–(A4) hold. Assume also that the noise Z satisfying Assumptions (Z1)–(Z2) is such that

\begin{equation*} \mathbb E \left[ |Z(t) - Z(s)|^p \right] \le C_{ \lambda, p}|t-s|^{\lambda p}, \quad s,t\in[0,T], \end{equation*}

for some positive constant $C_{\lambda, p} \gt 0$ and $p\ge 1$ such that $\lambda_p \,:\!=\, \lambda - \frac{2}{p} \gt \frac{1}{1+\gamma}$ with $\gamma$ from Assumption (A3). Then

\begin{equation*} \mathbb P\left( \min_{t\in[0,T]}(Y(t) - \varphi(t)) \le \varepsilon \right) = O(\varepsilon^{\gamma \lambda_p + \lambda_p - 1}), \quad \varepsilon \to 0. \end{equation*}

Proof. By Lemma 1.1,

\begin{equation*} |Z(t) - Z(s)| \le A_{\lambda,p} |t-s|^{\lambda - \frac{2}{p}} \left(\int_0^T \int_0^T \frac{|Z(x) - Z(y)|^p}{|x-y|^{\lambda p }} dx dy\right)^{\frac{1}{p}}, \end{equation*}

where

\begin{equation*} A_{\lambda, p} = 2^{3 + \frac{2}{p}}\left( \frac{\lambda p }{\lambda p - 2} \right). \end{equation*}

Note that the random variable

\begin{equation*} \Lambda_p \,:\!=\, A_{\lambda,p} \left(\int_0^T \int_0^T \frac{|Z(x) - Z(y)|^p}{|x-y|^{\lambda p }} dx dy\right)^{\frac{1}{p}} \end{equation*}

is finite a.s., since

\begin{align*} \mathbb E \Lambda^p_p &= A^p_{\lambda,p} \int_0^T \int_0^T \frac{\mathbb E |Z(x) - Z(y)|^p}{|x-y|^{\lambda p }} dx dy \\ &\le T^2 A^p_{\lambda,p} C_{\lambda, p} \\ &<\infty. \end{align*}

Now, by applying Theorem 2.4 and Remark 2.4 with respect to the Hölder order $\lambda_p = \lambda-\frac{2}{p}$ , one can deduce that for all $t\in[0,T]$

\begin{equation*} Y(t) - \varphi(t) \ge \frac{1}{M_{3,p}(1,T) \widetilde \Lambda_p^{\frac{1}{\gamma \lambda_p + \lambda_p -1}} }, \end{equation*}

where

\begin{equation*} M_{3,p}(1,T) \,:\!=\, 2^{\frac{\gamma\lambda_p }{\gamma \lambda_p + \lambda_p -1}} \beta_p^{\frac{1 - \lambda_p}{\gamma \lambda_p + \lambda_p -1}} \gt0, \end{equation*}
\begin{equation*} \beta_p \,:\!=\, \frac{ \lambda_p^{\frac{\lambda_p}{1 - \lambda_p}} - \lambda_p^{\frac{1}{1 - \lambda_p}} }{ c ^{\frac{\lambda_p}{1 - \lambda_p}}} \gt 0, \end{equation*}

and

\begin{equation*} \widetilde \Lambda_p \,:\!=\, \max\left\{ \Lambda_p, K_p, \left(2 \beta_p\right)^{\lambda_p - 1} \left(\frac{(Y(0) - \varphi(0)) \wedge y_*}{2}\right)^{1 - \lambda_p - \gamma\lambda_p} \right\}, \end{equation*}

with $y_*$ , c, and $\gamma$ being from Assumption (A3), and with $K_p$ being such that

\begin{equation*} |\varphi(t) - \varphi(s)| \le K_p|t-s|^{\lambda_p}, \quad s,t\in[0,T]. \end{equation*}

Therefore

\begin{align*} \mathbb P\left( \min_{t\in[0,T]}(Y(t) - \varphi(t)) \le \varepsilon \right) &\le \mathbb P\left( \frac{1}{M_{3,p}(1,T) \widetilde \Lambda_p^{\frac{1}{\gamma \lambda_p + \lambda_p -1}} } \le \varepsilon \right) \\ &= \mathbb P\left( \widetilde \Lambda_p \ge \left(\frac{1}{M_{3,p}(1,T)\varepsilon}\right)^{\gamma \lambda_p + \lambda_p -1} \right) \\ &\le (M_{3,p}(1,T))^{\gamma \lambda_p + \lambda_p -1} \mathbb E[\widetilde \Lambda_p] \varepsilon^{\gamma \lambda_p + \lambda_p -1} \\ &=O(\varepsilon^{\gamma \lambda_p + \lambda_p - 1}), \quad \varepsilon \to 0. \end{align*}

Finally, let $\Delta = \{0=t_0 \lt t_1\lt ...\lt t_N=T\}$ be a uniform partition of [0, T], $t_k = \frac{Tk}{N}$ , $k=0,1,..., N$ , $|\Delta|\,:\!=\,\frac{T}{N}$ . For the given partition, we introduce

(A.4) \begin{equation}\begin{gathered} \tau_-(t) \,:\!=\, \max\{t_k,\,t_k\le t\}, \\ \kappa_-(t) \,:\!=\, \max\{k,\,t_k\le t\}, \\ \tau_+(t) \,:\!=\, \min\{t_k,\,t_k\ge t\}, \\ \kappa_+(t) \,:\!=\, \min\{k,\,t_k \ge t\}.\end{gathered}\end{equation}

For any $n\ge n_0$ , define

(A.5) \begin{equation} \widehat Y^{N, n}(t) \,:\!=\, Y(0) + \int_0^t \widetilde b_n \left(\tau_-(s), \widehat Y^{N, n}_{\tau_-(s)}\right) ds + Z({\tau_-(t)});\end{equation}

i.e.

\begin{equation*} \widehat{Y}^{N,n}(t_{i+1}) = \widehat{Y}^{N,n}(t_{i}) + \widetilde b_n(t_i, \widehat{Y}^{N,n}(t_{i}))(t_{i+1} - t_i) + Z(t_{i+1}) - Z(t_i)\end{equation*}

with linear interpolation between the points of the partition. Recall that for each $n\ge n_0$ the function $y_n$ : $[0,T] \to \mathcal D_0$ is defined as

\begin{equation*} y_n(t) \,:\!=\, \min\{ y> \varphi(t):\,b(t,y) \le n \},\end{equation*}

and consider

(A.6) \begin{equation} \delta_n \,:\!=\, \sup_{t\in[0,T]} (y_n(t) - \varphi(t)).\end{equation}

Remark A.2. By (A3), it is easy to see that $\varepsilon_n \,:\!=\, \left(\frac{c}{n}\right)^{\frac{1}{\gamma}} \le \delta_n$ . Moreover, $\delta_n \downarrow 0$ as $n\to\infty$ . Indeed, by the definition of $y_n$ , for any fixed $t\in [0,T]$ and $n>n_0$ ,

\begin{equation*} y_n(t) \ge y_{n+1}(t) \end{equation*}

and hence $\delta_n \ge \delta_{n+1}$ . Now, consider an arbitrary $\varepsilon \in(0,y_*)$ and take

\begin{equation*} n_\varepsilon \,:\!=\, [\max_{t\in[0,T]} b(t, \varphi(t) + \varepsilon)], \end{equation*}

with $[\cdot]$ denoting the integer part. Then

\begin{equation*} b(t, \varphi(t) + \varepsilon) \lt n_\varepsilon + 1 \end{equation*}

for all $t\in[0,T]$ . On the other hand, by Assumption (A3),

\begin{equation*} b(t, \varphi(t) + \varepsilon^{\prime}) \ge n_\varepsilon+1 \end{equation*}

for all $\varepsilon^{\prime} \lt \left( \frac{c}{n_\varepsilon+1} \right)^{\frac{1}{\gamma}}$ , which implies that for each $t\in[0,T]$ ,

\begin{equation*} y_{n_\varepsilon + 1}(t) - \varphi(t) \le \varepsilon, \end{equation*}

i.e. $\delta_{n_\varepsilon + 1} \le \varepsilon$. This, together with $\delta_n$ being decreasing, yields that $\delta_n \downarrow 0$ as $n\to\infty$.

Theorem A.1. Let Assumptions (Z1)–(Z2) and (A1)–(A5) hold. Assume also that the noise Z is such that

\begin{equation*} \mathbb E \left[ |Z(t) - Z(s)|^p \right] \le C_{ \lambda, p}|t-s|^{\lambda p}, \quad s,t\in[0,T], \end{equation*}

where $p\ge 2$ is such that $\lambda_p \,:\!=\, \lambda - \frac{2}{p} \gt \frac{1}{1+\gamma}$ , $\gamma$ is from (A3), and $C_{\lambda, p}$ is a positive constant. Then

\begin{equation*} \mathbb E\left[ \sup_{t\in[0,T]}\left|Y(t) - \widehat Y^{N, n}(t)\right| \right] \le C \left(\delta_n^{\frac{\gamma\lambda_p + \lambda_p - 1}{2}} + \frac{(1+c_n)e^{ c_n}}{N^{\lambda_p}}\right), \end{equation*}

where C is some positive constant that does not depend on n or the mesh of the partition $|\Delta| = \frac{T}{N}$ , $\delta_n$ is defined by (A.6), $\delta_n\to 0$ , $n\to\infty$ , and $c_n$ is from (A.2).

Proof. Just as in the proof of Proposition A.2, observe that

\begin{equation*} |Z(t)-Z(s)| \le \Lambda_p|t-s|^{\lambda_p}, \end{equation*}

where

\begin{equation*} \Lambda_p \,:\!=\, A_{\lambda,p} \left(\int_0^T \int_0^T \frac{|Z(x) - Z(y)|^p}{|x-y|^{\lambda p }} dx dy\right)^{\frac{1}{p}}, \end{equation*}

and note that the condition $p\ge 2$ implies that

\begin{equation*} \mathbb E\Lambda_p^2 \le \left(\mathbb E \Lambda^p_p\right)^{\frac{2}{p}} \lt \infty. \end{equation*}

It is clear that

\begin{align*} \mathbb E&\left[ \sup_{t\in[0,T]}\left|Y(t) - \widehat Y^{N, n}(t)\right| \right] \\ & \le \mathbb E\left[ \sup_{t\in[0,T]}\left|Y(t) - \widetilde Y^{(n)}(t)\right| \right] + \mathbb E\left[ \sup_{t\in[0,T]}\left|\widetilde Y^{(n)}(t) - \widehat Y^{N, n}(t)\right| \right]. \end{align*}

Let us estimate the two terms on the right-hand side of the inequality above separately. Observe that

\begin{equation*} b(t,y) = \widetilde b_n(t,y), \quad (t,y)\in \mathcal D_{\delta_n}, \end{equation*}

with $\delta_n$ defined by (A.6). Consider the set

\begin{equation*} \mathcal A_n \,:\!=\, \{\omega\in\Omega\,|\,\min_{t\in[0,T]}(Y(\omega, t)-\varphi(t)) \gt \delta_n\} \end{equation*}

and note that

\begin{equation*} b(t,Y(t))\unicode{x1D7D9}_{\mathcal A_n} = \widetilde b_n(t,Y(t))\unicode{x1D7D9}_{\mathcal A_n}, \end{equation*}

i.e., for all $\omega\in \mathcal A_n$ the path $Y(\omega, t)$ satisfies the equation (A.3) and thus coincides with $\widetilde Y^{(n)}(\omega, t)$ . Hence

\begin{align*} \mathbb E& \left[ \sup_{t\in[0,T]}\left|Y(t) - \widetilde Y^{(n)}(t)\right| \right] \\ & = \mathbb E\left[ \sup_{t\in[0,T]}\left|Y(t) - \widetilde Y^{(n)}(t)\right| \unicode{x1D7D9}_{\mathcal A_n}\right] + \mathbb E\left[ \sup_{t\in[0,T]}\left|Y(t) - \widetilde Y^{(n)}(t)\right| \unicode{x1D7D9}_{\Omega \setminus \mathcal A_n}\right] \\ &= \mathbb E\left[ \sup_{t\in[0,T]}\left|Y(t) - \widetilde Y^{(n)}(t)\right| \unicode{x1D7D9}_{\Omega \setminus \mathcal A_n}\right] \\ &\le \left(\mathbb E \left[ \left(\sup_{t\in[0,T]}\left|Y(t) - \widetilde Y^{(n)}(t)\right|\right)^2 \right]\right)^{\frac{1}{2}} \sqrt{\mathbb P\left( \min_{t\in[0,T]}(Y(t) - \varphi(t)) \le \delta_n \right)}. \end{align*}

By Theorem 2.3 and Proposition A.1 applied with respect to $\lambda_p = \lambda - \frac{2}{p}$ ,

\begin{align*} \mathbb E &\left[ \left(\sup_{t\in[0,T]}\left|Y(t) - \widetilde Y^{(n)}(t)\right|\right)^2 \right] \\ & \le C \left(\mathbb E \left[ \left(\sup_{t\in[0,T]}\left|Y(t)\right|\right)^2 \right] + \mathbb E \left[ \left(\sup_{t\in[0,T]}\left|\widetilde Y^{(n)}(t)\right|\right)^2 \right]\right) \\ &\le C\left( 1+ \mathbb E\Lambda_p^2 \right) \lt \infty, \end{align*}

and by Proposition A.2 there exists a constant $C>0$ such that

\begin{equation*} \sqrt{\mathbb P\left( \min_{t\in[0,T]}(Y(t)-\varphi(t)) \le \delta_n \right)} \le C\delta_n^{\frac{\gamma\lambda_p + \lambda_p - 1}{2}}. \end{equation*}

Therefore, there exists a constant $C>0$ that does not depend on n or N such that

(A.7) \begin{equation} \mathbb E\left[ \sup_{t\in[0,T]}\left|Y(t) - \widetilde Y^{(n)}(t)\right| \right] \le C\delta_n^{\frac{\gamma\lambda_p + \lambda_p - 1}{2}}. \end{equation}

Next, taking into account (A.2), for any $t\in[0,T]$ we can write

\begin{align*} \left|\widetilde Y^{(n)}(t) - \widehat Y^{N, n}(t)\right| & \le \int_0^t \left| \widetilde b_n (s, \widetilde Y^{(n)}(s)) - \widetilde b_n \left(\tau_-(s), \widetilde Y^{(n)}(s) \right)\right| ds \\ &\qquad+ \int_0^t \left| \widetilde b_n (\tau_-(s), \widetilde Y^{(n)}(s)) - \widetilde b_n \left(\tau_-(s), \widehat Y^{N, n}_{\tau_-(s)}\right)\right| ds \\ &\qquad + \Lambda_p |\Delta|^{\lambda_p} \\ &\le c_n T |\Delta|^{\lambda_p} + c_n \int_0^t \left|\widetilde Y^{(n)}(s) - \widehat Y^{N, n}(s)\right| ds + \Lambda_p |\Delta|^{\lambda_p}, \end{align*}

whence, since $\mathbb E\Lambda_p \lt \infty$ ,

\begin{align*} \mathbb E&\left[ \sup_{s\in[0,t]} \left|\widetilde Y^{(n)}(s) - \widehat Y^{N, n}(s)\right| \right] \\ &\le c_n T |\Delta|^{\lambda_p} + c_n \int_0^t \mathbb E\left[\sup_{u\in[0,s]}\left|\widetilde Y^{(n)}(u) - \widehat Y^{N, n}(u)\right|\right] ds + C |\Delta|^{\lambda_p}, \end{align*}

and, by Gronwall’s inequality, there exists a constant $C>0$ such that

\begin{equation*} \begin{aligned} \mathbb E\left[ \sup_{t\in[0,T]}\left|\widetilde Y^{(n)}(t) - \widehat Y^{N, n}(t)\right| \right] \le \frac{C(1+c_n)e^{c_n}}{N^{\lambda_p}}. \end{aligned} \end{equation*}

This, together with (A.7), completes the proof.

Remark A.3. The processes from Examples 1.1, 1.2, and 1.3 satisfy the conditions of Theorem A.1.

The two-sided sandwich case presented in Section 4 can be treated in the same manner. Instead of Assumption (A5), one should use the following:

  (B5) for any $\varepsilon_1, \varepsilon_2 \gt 0$, $\varepsilon_1 + \varepsilon_2 \le \lVert \varphi - \psi\rVert_\infty$, there is a constant $c_{\varepsilon_1, \varepsilon_2} \gt 0$ such that for any (t, y), $(s,y) \in \mathcal D_{\varepsilon_1, \varepsilon_2}$,

    \begin{equation*} |b(t,y) - b(s, y)| \le c_{\varepsilon_1, \varepsilon_2} |t-s|^\lambda, \end{equation*}

where $\mathcal D_{\varepsilon_1, \varepsilon_2}$ is defined by (4.1). Namely, let

\begin{equation*} n_0 \gt \max\left\{\max_{t\in[0,T]}|b(t, \varphi(t) + y_*)|, \max_{t\in[0,T]}|b(t, \psi(t) - y_*)|\right\},\end{equation*}

where $y_*$ is from Assumption (B3). For an arbitrary $n \ge n_0$ define

\begin{align*} y^\varphi_n(t) &\,:\!=\, \min\{ y\in (\varphi(t), \psi(t)):\,b(t,y) \le n \}, \\ y^\psi_n(t) &\,:\!=\, \max\{ y \in (\varphi(t), \psi(t)):\,b(t,y) \ge -n \},\end{align*}

and consider the functions $\widetilde b_n$ : $[0,T]\times\mathbb R \to \mathbb R$ of the form

(A.8) \begin{equation}\widetilde b_n(t, y) \,:\!=\, \begin{cases} b(t, y), & \quad y^\varphi_n(t) \le y \le y^\psi_n(t), \\ n, & \quad y \lt y^\varphi_n(t), \\ -n, &\quad y \gt y^\psi_n(t).\end{cases}\end{equation}

Observe that, just as in the one-sided case,

(A.9) \begin{equation}\begin{gathered} |\widetilde b_n(t,y_1) - \widetilde b_n(t,y_2)| \le c_n|y_1-y_2|, \quad t\in[0,T], \quad y_1,y_2\in \mathbb R, \\ |\widetilde b_n(t_1,y) - \widetilde b_n(t_2,y)| \le c_n|t_1-t_2|^\lambda, \quad t_1,t_2\in[0,T], \quad y\in \mathbb R,\end{gathered}\end{equation}

where $c_n$ denotes the constant from Assumptions (B2) and (B5) which corresponds to $\varepsilon_1 = \varepsilon_2 = \left(\frac{c}{n}\right)^{\frac{1}{\gamma}}$. In particular, this implies that the SDE

(A.10) \begin{equation} d\widetilde Y^{(n)}(t) = \widetilde b_n(t,\widetilde Y^{(n)}(t))dt + d Z(t), \quad \widetilde Y^{(n)}(0) = Y(0) \in (\varphi(0), \psi(0)),\end{equation}

has a unique pathwise solution and, just as in the one-sided case, can be simulated via the standard Euler scheme:

(A.11) \begin{equation} \widehat Y^{N, n}(t) \,:\!=\, Y(0) + \int_0^t \widetilde b_n \left(\tau_-(s), \widehat Y^{N, n}_{\tau_-(s)}\right) ds + Z({\tau_-(t)}).\end{equation}
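A minimal R sketch (ours) of the two-sided construction (A.8)–(A.11), assuming the cutoff levels $y^\varphi_n$ and $y^\psi_n$ are supplied as functions of t (in practice computed in closed form or located numerically, as in the one-sided case):

```r
# Two-sided capped drift (A.8): freeze at +n below y_n^phi and at -n above y_n^psi.
b_tilde <- function(b, y_lo, y_hi, n_cap) {
  function(t, y) {
    if (y < y_lo(t)) return(n_cap)
    if (y > y_hi(t)) return(-n_cap)
    b(t, y)
  }
}

# Euler scheme (A.11) on the uniform grid t_0, ..., t_N of [0, tmax].
euler_two_sided <- function(b, y_lo, y_hi, Z, Y0, tmax, n_cap) {
  bt <- b_tilde(b, y_lo, y_hi, n_cap)
  N  <- length(Z) - 1
  dt <- tmax / N
  tt <- seq(0, tmax, length.out = N + 1)
  Y  <- numeric(N + 1)
  Y[1] <- Y0
  for (k in 1:N) Y[k + 1] <- Y[k] + bt(tt[k], Y[k]) * dt + (Z[k + 1] - Z[k])
  Y
}
```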

Now, define

(A.12) \begin{equation} \delta_n \,:\!=\, \max\left\{\sup_{t\in[0,T]} (y^\varphi_n(t) - \varphi(t)), \sup_{t\in[0,T]} (\psi(t) - y^\psi_n(t))\right\}\end{equation}

and note that $\delta_n \to 0$ , $n\to \infty$ , just as in the one-sided case.

Now we are ready to formulate the two-sided counterpart of Theorem A.1.

Theorem A.2. Let Assumptions (Z1)–(Z2) and (B1)–(B5) hold. Assume also that the noise Z is such that

\begin{equation*} \mathbb E \left[ |Z(t) - Z(s)|^p \right] \le C_{ \lambda, p}|t-s|^{\lambda p}, \quad s,t\in[0,T], \end{equation*}

where $p\ge 2$ is such that $\lambda_p \,:\!=\, \lambda - \frac{2}{p} \gt \frac{1}{1+\gamma}$, $\gamma$ is from (B3), and $C_{\lambda, p}$ is a positive constant. Then

\begin{equation*} \mathbb E\left[ \sup_{t\in[0,T]}\left|Y(t) - \widehat Y^{N, n}(t)\right| \right] \le C \left(\delta_n^{\frac{\gamma\lambda_p + \lambda_p - 1}{2}} + \frac{(1+c_n)e^{ c_n}}{N^{\lambda_p}}\right), \end{equation*}

where C is some positive constant that does not depend on n or the mesh of the partition $|\Delta| = \frac{T}{N}$ , $\delta_n$ is defined by (A.12), $\delta_n\to 0$ , $n\to\infty$ , and $c_n$ is from (A.9).

Remark A.4. Theorems A.1 and A.2 guarantee convergence for all $\lambda \in (0,1)$ , but in practice the scheme performs much better for $\lambda$ close to 1. The reason is as follows: in order to make $\delta_n^{\frac{\gamma\lambda_p + \lambda_p - 1}{2}}$ small, one has to consider large values of n; this results in larger values of $(1+c_n)e^{ c_n}$ that, in turn, have to be ‘compensated’ by the denominator $N^{\lambda_p}$ . The bigger $\lambda_p$ is, the smaller the values of n (and hence of N) can be.

Funding information

The present research was carried out within the framework and with the support of the Research Council of Norway’s ToppForsk project no. 274410, entitled ‘STORM: Stochastics for Time-Space Risk Models’. The second author is supported by the Ukrainian research project ‘Exact formulae, estimates, asymptotic properties and statistical analysis of complex evolutionary systems with many degrees of freedom’ (state registration number 0119U100317), as well as by the Japan Science and Technology Agency’s CREST JPMJCR14D7, CREST Project reference number JPMJCR2115, and by the Swedish Foundation for Strategic Research, grant no. UKR22-0017.

Competing interests

No competing interests arose during the preparation or publication process of this article.

References

Alfi, V., Coccetti, F., Petri, A. and Pietronero, L. (2007). Roughness and finite size effect in the NYSE stock-price fluctuations. Europ. Phys. J. B 55, 135–142.
Andersen, L. B. G. and Piterbarg, V. V. (2006). Moment explosions in stochastic volatility models. Finance Stoch. 11, 29–50.
Anh, V. and Inoue, A. (2005). Financial markets with memory I: dynamic models. Stoch. Anal. Appl. 23, 275–300.
Ayache, A. and Peng, Q. (2012). Stochastic volatility and multifractional Brownian motion. In Stochastic Differential Equations and Processes, Springer, Berlin, Heidelberg, pp. 211–237.
Azmoodeh, E., Sottinen, T., Viitasaari, L. and Yazigi, A. (2014). Necessary and sufficient conditions for Hölder continuity of Gaussian processes. Statist. Prob. Lett. 94, 230–235.
Beran, J. (1994). Statistics for Long-Memory Processes. Chapman and Hall/CRC, Philadelphia, PA.
Boguslavskaya, E., Mishura, Y. and Shevchenko, G. (2018). Replication of Wiener-transformable stochastic processes with application to financial markets with memory. In Stochastic Processes and Applications, eds Silvestrov, S., Malyarenko, A. and Rančić, M., Springer, Cham, pp. 335–361.
Bollerslev, T. and Mikkelsen, H. O. (1996). Modeling and pricing long memory in stock market volatility. J. Econometrics 73, 151–184.
Chronopoulou, A. and Viens, F. G. (2010). Estimation and pricing under long-memory stochastic volatility. Ann. Finance 8, 379–403.
Comte, F., Coutin, L. and Renault, E. (2010). Affine fractional stochastic volatility models. Ann. Finance 8, 337–378.
Cox, J. C. (1996). The constant elasticity of variance option pricing model. J. Portfolio Manag. 23, 15–17.
Cox, J. C., Ingersoll, J. E. and Ross, S. A. (1981). A re-examination of traditional hypotheses about the term structure of interest rates. J. Finance 36, 769–799.
Cox, J. C., Ingersoll, J. E. and Ross, S. A. (1985). An intertemporal general equilibrium model of asset prices. Econometrica 53, 363–384.
Cox, J. C., Ingersoll, J. E. and Ross, S. A. (1985). A theory of the term structure of interest rates. Econometrica 53, 385–407.
Ding, Z., Granger, C. W. and Engle, R. F. (1993). A long memory property of stock market returns and a new model. J. Empirical Finance 1, 83–106.
Domingo, D., d'Onofrio, A. and Flandoli, F. (2019). Properties of bounded stochastic processes employed in biophysics. Stoch. Anal. Appl. 38, 277–306.
D'Onofrio, A. (ed.) (2013). Bounded Noises in Physics, Biology, and Engineering. Springer, New York.
Friz, P. K. and Hairer, M. (2014). A Course on Rough Paths. Springer, Cham.
Friz, P. K. and Victoir, N. B. (2010). Multidimensional Stochastic Processes as Rough Paths: Theory and Applications. Cambridge University Press.
Garsia, A., Rodemich, E. and Rumsey, H. (1970). A real variable lemma and the continuity of paths of some Gaussian processes. Indiana Univ. Math. J. 20, 565–578.
Gatheral, J., Jaisson, T. and Rosenbaum, M. (2018). Volatility is rough. Quant. Finance 18, 933–949.
Hong, J., Huang, C., Kamrani, M. and Wang, X. (2020). Optimal strong convergence rate of a backward Euler type scheme for the Cox–Ingersoll–Ross model driven by fractional Brownian motion. Stoch. Process. Appl. 130, 2675–2692.
Hu, Y., Nualart, D. and Song, X. (2008). A singular stochastic differential equation driven by fractional Brownian motion. Statist. Prob. Lett. 78, 2075–2085.
Kroese, D. P. and Botev, Z. I. (2015). Spatial process simulation. In Stochastic Geometry, Spatial Statistics and Random Fields, Springer, Cham, pp. 369–404.
Merino, R. et al. (2021). Decomposition formula for rough Volterra stochastic volatility models. Internat. J. Theoret. Appl. Finance 24, article no. 2150008.
Mishura, Y. and Yurchenko-Tytarenko, A. (2018). Fractional Cox–Ingersoll–Ross process with non-zero ‘mean’. Modern Stoch. Theory Appl. 5, 99–111.
Mishura, Y. and Yurchenko-Tytarenko, A. (2018). Fractional Cox–Ingersoll–Ross process with small Hurst indices. Modern Stoch. Theory Appl. 6, 13–39.
Mishura, Y. and Yurchenko-Tytarenko, A. (2020). Approximating expected value of an option with non-Lipschitz payoff in fractional Heston-type model. Internat. J. Theoret. Appl. Finance 23, article no. 2050031.
Mishura, Y. and Yurchenko-Tytarenko, A. (2022). Standard and fractional reflected Ornstein–Uhlenbeck processes as the limits of square roots of Cox–Ingersoll–Ross processes. Stochastics.
Nourdin, I. (2012). Selected Aspects of Fractional Brownian Motion. Springer, Milan.
Nualart, D. and Rascanu, A. (2002). Differential equations driven by fractional Brownian motion. Collectanea Math. 53, 55–81.
Samorodnitsky, G. (2016). Stochastic Processes and Long Range Dependence. Springer, Basel.
Skorokhod, A. V. (1961). Stochastic equations for diffusion processes in a bounded region. Theory Prob. Appl. 6, 264–274.
Skorokhod, A. V. (1962). Stochastic equations for diffusion processes in a bounded region. II. Theory Prob. Appl. 7, 3–23.
Tarasov, V. (2019). On history of mathematical economics: application of fractional calculus. Mathematics 7, article no. 509.
Yamasaki, K. et al. (2005). Scaling and memory in volatility return intervals in financial markets. Proc. Nat. Acad. Sci. USA 102, 9424–9428.
Zähle, M. (1998). Integration with respect to fractal functions and stochastic calculus. I. Prob. Theory Relat. Fields 111, 333–374.
Zhang, S.-Q. and Yuan, C. (2020). Stochastic differential equations driven by fractional Brownian motion with locally Lipschitz drift and their implicit Euler approximation. Proc. R. Soc. Edinburgh A 151, 1278–1304.