In this paper we derive identities for the upward and downward exit problems and resolvents of a process whose motion switches between two Lévy processes according to whether it is above or below a barrier $b$, with the switches occurring at Poissonian arrival times. This dynamic can be expressed in the form of a (hybrid) stochastic differential equation, for which the existence of a solution is also discussed. All identities are given in terms of new generalisations of scale functions (counterparts of the scale functions from the theory of Lévy processes). To illustrate the applicability of our results, the probability of ruin is obtained for a risk process with delays in the dividend payments.
We study the probability that an AR(1) Markov chain $X_{n+1}=aX_n+\xi _{n+1}$, where $a\in (0,1)$ is a constant, stays non-negative for a long time. We find the exact asymptotics of this probability and the weak limit of $X_n$ conditioned to stay non-negative, assuming that the independent and identically distributed innovations $\xi _n$ take only two values $\pm 1$ and $a \le \tfrac 23$. This limiting distribution is quasi-stationary. It has no atoms and is singular with respect to the Lebesgue measure when $\tfrac 12< a \le \tfrac 23$, except for the case when $a=\tfrac 23$ and $\mathbb P(\xi _n=1)=\tfrac 12$, where this distribution is uniform on the interval $[0,3]$. This is similar to the properties of Bernoulli convolutions. For $0 < a \le \tfrac 12$, the situation is much simpler and the limiting distribution is a $\delta $-measure. To prove these results, we uncover a close connection between $X_n$ killed at exiting $[0, \infty )$ and the classical dynamical system defined by the piecewise linear mapping $x \mapsto x/a + 1/2\ \pmod 1$. Namely, the trajectory of this system started at $X_n$ deterministically recovers the values of the killed chain in reversed time. We use this fact to construct a suitable Banach space, where the transition operator of the killed chain has the compactness properties that allow us to apply a conventional argument of the Perron–Frobenius type.
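The chain in this abstract is simple enough to simulate directly. A minimal Monte Carlo sketch of the survival probability follows; the starting point $X_0 = 0$, the parameter values, and the path counts are our illustrative choices, not taken from the paper.

```python
import random

def survival_prob(a, p, n_steps, n_paths, seed=0):
    """Monte Carlo estimate of P(X_k >= 0 for all k <= n_steps), where
    X_{k+1} = a*X_k + xi_{k+1} with xi = +1 w.p. p and -1 w.p. 1-p,
    started from X_0 = 0 (an arbitrary illustrative choice)."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_paths):
        x = 0.0
        ok = True
        for _ in range(n_steps):
            xi = 1.0 if rng.random() < p else -1.0
            x = a * x + xi
            if x < 0:
                ok = False
                break
        if ok:
            survived += 1
    return survived / n_paths

# Example: the boundary case a = 2/3 with symmetric innovations.
est = survival_prob(a=2/3, p=0.5, n_steps=50, n_paths=20000)
```

For $a=\tfrac 23$, $p=\tfrac 12$ the estimate decays geometrically in the horizon, consistent with the existence of a quasi-stationary limit.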
In this paper we propose a new efficient algorithm to compute the value function for zero-sum stopping games featuring two players with opposing interests. This can be seen as a game version of the ‘forward algorithm’ for (one-player) optimal stopping problems, first introduced by Irle (2006) for discrete-time Markov chains and later revisited by Miclo and Villeneuve (2021) for continuous-time Markov processes on general state spaces. This paper focuses on a game driven by a homogeneous continuous-time Markov chain taking values in a finite state space and also discusses the number of iterations needed. Illustrated computational implementations for a few particular examples are also provided.
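For intuition, the discrete-time analogue of such a zero-sum stopping (Dynkin) game on a finite chain can be solved by a plain fixed-point iteration on $V = \min(g, \max(f, \beta P V))$. The sketch below uses this textbook iteration with invented payoffs and a discount factor of our choosing; it is not the forward algorithm of the paper.

```python
import numpy as np

def dynkin_value(P, f, g, beta=0.9, tol=1e-10, max_iter=10000):
    """Value iteration for a discrete-time zero-sum stopping game on a
    finite chain with transition matrix P: the maximizer stops for payoff
    f, the minimizer stops for cost g (f <= g assumed), future values are
    discounted by beta. The map V -> min(g, max(f, beta*P@V)) is a
    beta-contraction, so iteration converges to the game value."""
    V = np.zeros(len(f))
    for _ in range(max_iter):
        V_new = np.minimum(g, np.maximum(f, beta * (P @ V)))
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V

# Two-state toy example (all numbers illustrative).
P = np.array([[0.5, 0.5], [0.5, 0.5]])
V = dynkin_value(P, f=np.array([0.0, 1.0]), g=np.array([2.0, 3.0]))
```

In this toy example the fixed point can be checked by hand: the second coordinate equals 1 (the maximizer stops), and the first solves $v = 0.9\,(v+1)/2$, giving $v = 9/11$.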
We study a stochastic control problem where the underlying process follows a spectrally negative Lévy process. A controller can continuously increase the process but only decrease it at independent Poisson arrival times. We show the optimality of the periodic–classical barrier strategy, which increases the process whenever it would fall below some lower barrier and decreases it whenever it is observed above a higher barrier. An optimal strategy and the value function are written semi-explicitly using scale functions. Numerical results are also given.
Fractional Brownian motion, with its long-time correlated increments, has been applied in many fields in recent years. Since volatility was shown to be rough by Gatheral, Jaisson, and Rosenbaum, fractional Brownian motion has gained popularity as a financial model. In this work, we revisit the definitions and properties of the univariate and multivariate fractional Brownian motions, and consider four simulation methods. We demonstrate the issues associated with applying the standard Euler scheme for simulating stochastic processes driven by fractional Brownian motion with $H < \frac{1}{2}$ (which we call the rough models). We then introduce a novel approximate method for simulating such rough models based on the fast algorithm by Ma and Wu, which yields roughly a tenfold speedup. Finally, we consider applications of these methods to option pricing.
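For reference, fractional Brownian motion on a fixed grid can be simulated exactly (if slowly) by Cholesky factorization of its covariance matrix. This $O(n^3)$ baseline is our illustrative sketch, not one of the four methods of the paper; fast algorithms such as the one by Ma and Wu are measured against this kind of exact-but-expensive approach.

```python
import numpy as np

def fbm_cholesky(n, H, T=1.0, seed=0):
    """Exact simulation of fractional Brownian motion B_H on the grid
    t_k = k*T/n, k = 1..n, via Cholesky factorization of the covariance
    Cov(B_H(s), B_H(t)) = 0.5*(s^{2H} + t^{2H} - |t-s|^{2H}).
    O(n^3) cost -- a correctness baseline, not a fast method."""
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov)          # cov is positive definite
    z = np.random.default_rng(seed).standard_normal(n)
    return t, L @ z                      # one sample path on the grid

# A rough path, H < 1/2 (values illustrative).
t, path = fbm_cholesky(n=64, H=0.3, seed=1)
```

Setting $H=\tfrac 12$ recovers standard Brownian motion, since the covariance then reduces to $\min(s,t)$.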
We consider a Lévy process reflected at the origin with additional independent and identically distributed collapses that occur at Poisson epochs, where a collapse is a jump downward to a state which is a random fraction of the state just before the jump. We first study the general case, then specialize to the case where the Lévy process is spectrally positive, and, finally, we specialize further to the two cases where the Lévy process is a Brownian motion and a compound Poisson process with exponential jumps minus a linear slope.
In this paper we propose a refracted skew Brownian motion as a risk model with endogenous regime switching, which generalizes the refracted diffusion risk process introduced by Gerber and Shiu. We consider an optimal dividend problem for the refracted skew Brownian risk model and identify sufficient conditions under which the barrier strategy, the band strategy, and their respective variants are optimal.
Consider a random walk in a time-inhomogeneous random environment. When the environment is stationary and ergodic, we identify a quenched harmonic function for almost every realization of the environment. This function allows us to define a random walk in a random environment conditioned to stay positive, using Doob’s h-transform.
We consider d-dimensional stochastic differential equations (SDEs) of the form $\textrm{d}U_t = b(U_t)\,\textrm{d}t + \sigma\,\textrm{d}Z_t$. Let $X_t$ denote the solution if the driving noise $Z_t$ is a d-dimensional rotationally symmetric $\alpha$-stable process ($1 < \alpha < 2$), and let $Y_t$ be the solution if the driving noise is a d-dimensional Brownian motion. Continuing the work started in Deng et al. (2025), we derive an estimate of the total variation distance $\|\textrm{law}(X_{t})-\textrm{law}(Y_{t})\|_\textrm{TV}$ for all $t > 0$, and we show that the ergodic measures $\mu_\alpha$ and $\mu_2$ of $X_t$ and $Y_t$, respectively, satisfy $\|\mu_\alpha-\mu_2\|_\textrm{TV} \leq Cd\log(1+d)(2-\alpha)/(\alpha-1)$. We show that this bound is optimal with respect to $\alpha$ by an Ornstein–Uhlenbeck SDE. Combining this bound with a recent interpolation result from Huang et al. (2023), we can derive a bound in the Wasserstein-$p$ distance ($0 < p < 1$): $\|\mu_\alpha-\mu_2\|_{W_p} \leq Cd^{(p+3)/2}\log(1+d)(2-\alpha)/(\alpha-1)$.
This paper derives explicit expressions for drawdown-based two-sided exit identities involving the overshoots and undershoots at the exit times under Poisson observation times for spectrally negative Lévy risk processes by using fluctuation theory. All resulting Laplace transforms of the risk quantities of interest are expressed in terms of the scale functions of the spectrally negative Lévy processes.
Distributed ledgers, including blockchain and other decentralized databases, are designed to store information online where all trusted network members can update the data with transparency. The dynamics of a ledger’s development can be mathematically represented by a directed acyclic graph (DAG). In this paper, we study a DAG model that considers batch arrivals and random delay of attachment. We analyze the asymptotic behavior of this model by letting the arrival rate go to infinity and the inter-arrival time go to zero. We establish that the number of leaves in the DAG, as well as various random variables characterizing the vertices in the DAG, can be approximated by their fluid limit, represented as the solution to a set of delayed partial differential equations. Furthermore, we establish the stable state of this fluid limit and validate our findings through simulations.
We consider shock models governed by the bivariate geometric counting process. By assuming the competing risks framework, failures are due to one of two mutually exclusive causes (shocks). We obtain and study some relevant functions, such as failure densities, survival functions, probability of the cause of failure, and moments of the failure time conditioned on a specific cause. Such functions are specified by assuming that systems or living organisms fail at the first instant in which a random threshold is reached by the sum of received shocks. Under this failure scheme, we also examine various cases arising from suitable choices of the random threshold.
Balister, the second author, Groenland, Johnston, and Scott recently showed that there are asymptotically $C4^n/n^{3/4}$ many unordered sequences that occur as degree sequences of graphs with $n$ vertices. Combining limit theory for infinitely divisible distributions with a new connection between a class of random walk trajectories and a subset counting formula from additive number theory, we describe $C$ in terms of Walkup’s number of rooted plane trees. The bijection is related to an instance of the Lévy–Khintchine formula. Our main result complements a result of Stanley, that ordered graphical sequences are related to quasi-forests.
We investigate the EM approximation for $\mathbb{R}^d$-valued ergodic stochastic differential equations (SDEs) driven by rotationally invariant $\alpha$-stable processes ($\alpha\in(1,2)$) with Markovian switching. The coefficient $g$ violates the dissipative condition for certain states of the switching process. Using the Lindeberg principle, we establish quantitative error bounds between the original process $(X_t,R_t)_{t\geqslant 0}$ and its Euler–Maruyama (EM) scheme under a specially designed metric. Furthermore, we derive both a central limit theorem and a moderate deviation principle for the empirical measures of both the SDE and its EM scheme. The theoretical results are subsequently validated through a concrete example.
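A bare-bones EM scheme of this flavour can be sketched as follows. The symmetric stable sampler is the standard Chambers–Mallows–Stuck formula; the two-regime drift (dissipative in one state, expanding in the other), the switching rates, and all parameters are our illustrative choices, not the setting analyzed in the paper.

```python
import numpy as np

def sym_stable(alpha, rng):
    """One standard symmetric alpha-stable variate via the
    Chambers-Mallows-Stuck representation."""
    V = rng.uniform(-np.pi / 2, np.pi / 2)
    W = rng.exponential(1.0)
    return (np.sin(alpha * V) / np.cos(V)**(1 / alpha)
            * (np.cos((1 - alpha) * V) / W)**((1 - alpha) / alpha))

def em_switching(x0, r0, drift, q, alpha, sigma, dt, n_steps, seed=0):
    """Euler-Maruyama for dX = drift[R](X) dt + sigma dZ, with Z a
    symmetric alpha-stable process (increment scale dt^{1/alpha}) and R a
    two-state chain that leaves state i at rate q[i] (approximated by a
    Bernoulli(q[i]*dt) flip per step). Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    x, r = x0, r0
    xs = [x]
    for _ in range(n_steps):
        if rng.random() < q[r] * dt:      # regime switch
            r = 1 - r
        x = x + drift[r](x) * dt + sigma * dt**(1 / alpha) * sym_stable(alpha, rng)
        xs.append(x)
    return np.array(xs)

# State 0 is dissipative (-x); state 1 violates dissipativity (+0.5*x).
xs = em_switching(x0=0.0, r0=0, drift=[lambda x: -x, lambda x: 0.5 * x],
                  q=[1.0, 2.0], alpha=1.5, sigma=0.5, dt=0.01, n_steps=1000, seed=3)
```

The faster exit rate from the expanding regime (q[1] > q[0]) mimics the abstract's setting, where dissipativity fails only in some states of the switching process.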
We consider a generalization of the forest fire model on $\mathbb{Z}_+$ with ignition at zero only, studied by Volkov ((2009). ALEA 6, 399–414). Unlike that model, we allow delays in the spread of the fires and non-zero burning times of individual ‘trees’. We obtain some general properties for this model, which cover, among others, the phenomenon of an ‘infinite fire’, not present in the original model.
The marked Hawkes risk process is a compound point process where the occurrence and amplitude of past events impact the future. Since data in real life are acquired over a discrete time grid, we propose a strong discrete-time approximation of the continuous-time risk process obtained by embedding from the same Poisson measure. We then prove trajectorial convergence results in both fractional Sobolev spaces and the Skorokhod space, hence extending the theorems proven in Huang and Khabou ((2023). Stoch. Process. Appl. 161, 201–241) and Kirchner ((2016). Stoch. Process. Appl. 126(8), 2494–2525). We also provide upper bounds on the convergence speed with explicit dependence on the size of the discretization step, the time horizon, and the regularity of the kernel.
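The flavour of such a discretization can be seen with an exponential excitation kernel, where each time step fires an event with probability $\lambda\,\Delta t$. The kernel and all parameters below are our illustrative choices, and this naive Bernoulli scheme is only in the spirit of, not identical to, the embedding-based approximation constructed in the paper.

```python
import numpy as np

def discrete_hawkes(mu, a, b, dt, n_steps, seed=0):
    """Discrete-time approximation of a Hawkes process with baseline mu and
    exponential kernel phi(t) = a*exp(-b*t): in each step of length dt an
    event occurs with probability lambda*dt (valid when lambda*dt << 1).
    Returns the 0/1 event indicators per step."""
    rng = np.random.default_rng(seed)
    lam = mu                                  # current intensity
    events = np.zeros(n_steps, dtype=int)
    for k in range(n_steps):
        if rng.random() < lam * dt:
            events[k] = 1
            lam += a                          # self-excitation jump
        lam = mu + (lam - mu) * np.exp(-b * dt)   # relax toward baseline
    return events

# Subcritical case: branching ratio a/b = 0.5 (values illustrative).
ev = discrete_hawkes(mu=1.0, a=1.0, b=2.0, dt=0.01, n_steps=100_000, seed=0)
```

In the subcritical regime the long-run event rate is $\mu/(1 - a/b)$, which gives a quick sanity check on the simulation: here, about 2 events per unit time.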
In this paper, we solve an exit probability game between two players, each of whom controls a linear diffusion process. One player controls its process to minimize the probability that the difference of the processes reaches a low level before it reaches a high level, while the other player aims to maximize the probability. By solving the Bellman–Isaacs equations, we find the sub-value and sup-value functions of the game in explicit forms, which are twice continuously differentiable. The optimal plays associated with the sub-value and sup-value are also found explicitly.
Following the pivotal work of Sevastyanov (1957), who considered branching processes with homogeneous Poisson immigration, much has been done to understand the behaviour of such processes under different types of branching and immigration mechanisms. Recently, the case where the times of immigration are generated by a non-homogeneous Poisson process has been considered in depth. In this work, we demonstrate how we can use the framework of point processes in order to go beyond the Poisson process. As an illustration, we show how to transfer techniques from the case of Poisson immigration to the case where immigration is generated by a determinantal point process.
Hybrid stochastic differential equations (SDEs) are a useful tool for modeling continuously varying stochastic systems modulated by a random environment, which may depend on the system state itself. In this paper we establish the pathwise convergence of solutions to hybrid SDEs using space-grid discretizations. Though time-grid discretizations are a classical approach for simulation purposes, our space-grid discretization provides a link with multi-regime Markov-modulated Brownian motions. This connection allows us to explore aspects that have been largely unexplored in the hybrid SDE literature. Specifically, we exploit our convergence result to obtain efficient and computationally tractable approximations for first-passage probabilities and expected occupation times of the solutions to hybrid SDEs. Lastly, we illustrate the effectiveness of the resulting approximations through numerical examples.