We deal with the filtering problem of a general jump diffusion process, X, when the observation process, Y, is a correlated jump diffusion process having common jump times with X. In this setting, at any time t the σ-algebra generated by the observations of Y up to time t provides all the available information about Xt, and the central goal is to characterize the filter, πt, which is the conditional distribution of Xt given these observations. To this end, we prove that πt solves the Kushner-Stratonovich equation and, by applying the filtered martingale problem approach (see Kurtz and Ocone (1988)), that it is the unique weak solution to this equation. Under an additional hypothesis, we also provide a pathwise uniqueness result.
In this paper we show that fractional Brownian motion with H < ½ can arise as a limit of a simple class of traffic processes that we call ‘scheduled traffic models’. To our knowledge, this paper provides the first simple traffic model leading to fractional Brownian motion with H < ½. We also discuss some immediate implications of this result for queues fed by scheduled traffic, including a heavy-traffic limit theorem.
Let (X, J) denote a Markov-modulated Brownian motion (MMBM) and denote its supremum process by S. For some a > 0, let σ(a) denote the time when the reflected process Y := S - X first surpasses the level a. Furthermore, let σ−(a) denote the last time before σ(a) when X attains its current supremum. In this paper we shall derive the joint distribution of Sσ(a), σ−(a), and σ(a), where the latter two will be given in terms of their Laplace transforms. We also provide some remarks on scale matrices for MMBMs with strictly positive variation parameters. This extends recent results for spectrally negative Lévy processes to MMBMs. Via well-known fluid embedding and state-dependent killing techniques, the analysis applies to Markov additive processes with phase-type jumps as well. The result is of interest in applications such as the dividend problem in insurance mathematics and the buffer overflow problem in queueing theory. Examples will be given for the former.
In a Galton-Watson branching process that is not extinct by the nth generation and has at least two individuals in that generation, pick two individuals at random by simple random sampling without replacement. Trace their lines of descent back in time until they meet; call that generation Xn, the pairwise coalescence time. Similarly, let Yn denote the coalescence time for the whole population of the nth generation, conditioned on the event that it is not extinct. In this paper the distributions of Xn and Yn, and their limit behaviors as n → ∞, are discussed for both the critical and subcritical cases.
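The pairwise coalescence time Xn can also be estimated by direct simulation. The sketch below is a minimal illustration rather than the paper's analysis: it assumes a critical offspring law (0 or 2 children with probability ½ each), tracks parent indices generation by generation, and traces two sampled lines of descent back until they meet.

```python
import random

def simulate_gw_coalescence(offspring_sampler, n, rng):
    """Simulate n generations of a Galton-Watson process, tracking parents.
    Returns the pairwise coalescence generation X_n for two individuals
    sampled without replacement from generation n, or None if the
    population dies out (or is a single individual) by generation n."""
    # parents[g][i] = index in generation g of individual i's parent,
    # for individual i in generation g + 1
    parents = []
    pop = 1  # generation 0 starts with a single ancestor
    for _ in range(n):
        counts = [offspring_sampler(rng) for _ in range(pop)]
        gen_parents = [p for p, c in enumerate(counts) for _ in range(c)]
        if not gen_parents:
            return None  # extinct before generation n
        parents.append(gen_parents)
        pop = len(gen_parents)
    if pop < 2:
        return None
    a, b = rng.sample(range(pop), 2)
    # trace the two lines of descent back until they meet
    for g in range(n - 1, -1, -1):
        a, b = parents[g][a], parents[g][b]
        if a == b:
            return g  # lines coalesce in generation g
    return 0

rng = random.Random(42)
# assumed critical offspring law: 0 or 2 children with probability 1/2 each
sampler = lambda r: 2 * r.randint(0, 1)
samples = [x for x in (simulate_gw_coalescence(sampler, 6, rng)
                       for _ in range(2000)) if x is not None]
print(min(samples), max(samples))
```

Conditioning on non-extinction is handled crudely here by discarding extinct runs; the surviving samples give the empirical law of X6.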
In this paper we propose a class of financial market models based on telegraph processes with alternating tendencies and jumps. It is assumed that the jumps have random sizes and that they occur when the tendencies switch. These models are typically incomplete, but the set of equivalent martingale measures can be described in detail. We provide additional conditions under which arbitrage-free option prices, as well as hedging strategies, can be obtained.
This paper is devoted to the perfect simulation of a stationary process with an at most countable state space. The process is specified through a kernel, prescribing the probability of the next state conditional on the whole past history. We follow the seminal work of Comets, Fernández and Ferrari (2002), who gave sufficient conditions for the construction of a perfect simulation algorithm. We define backward coalescence times for this kind of process, which allow us to construct perfect simulation algorithms under weaker conditions than in Comets, Fernández and Ferrari (2002). We discuss how to construct backward coalescence times (i) by means of information depths, taking into account some a priori knowledge about the histories that occur; and (ii) by identifying suitable coalescing events.
In this paper we consider optimal stopping problems for a general class of reward functions under matrix-exponential jump-diffusion processes. Given an American call-type reward function in this class, following the averaging problem approach (see, for example, Alili and Kyprianou (2005), Kyprianou and Surya (2005), Novikov and Shiryaev (2007), and Surya (2007)), we give an explicit formula for solutions of the corresponding averaging problem. Based on this explicit formula, we obtain the optimal level and the value function for American call-type optimal stopping problems.
A widely used model of carcinogenesis assumes that cells must go through a process of acquiring several mutations before they become cancerous. This implies that at any time there will be several populations of cells at different stages of mutation. In this paper we give exact expressions for the expectations and variances of the number of cells in each stage of such a stochastic multistage cancer model.
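In a linear multistage model of this kind, the expected counts solve a system of linear ODEs. The sketch below is an illustrative assumption, not the paper's exact model or its closed-form expressions: stage-i cells divide at rate b_i, die at rate d_i, and mutate to stage i+1 at rate u_i, and the mean equations are integrated by forward Euler.

```python
# Mean equations for an assumed linear multistage model:
#   dm_i/dt = u_{i-1} m_{i-1} + (b_i - d_i - u_i) m_i,
# where m_i(t) is the expected number of cells in stage i.

def expected_counts(b, d, u, m0, t, steps=20000):
    """Integrate the mean ODEs of the multistage model with forward Euler."""
    k = len(m0)
    m = list(m0)
    h = t / steps
    for _ in range(steps):
        new = []
        for i in range(k):
            growth = (b[i] - d[i] - (u[i] if i < k - 1 else 0.0)) * m[i]
            inflow = u[i - 1] * m[i - 1] if i > 0 else 0.0
            new.append(m[i] + h * (growth + inflow))
        m = new
    return m

# illustrative parameters: 3 stages, 10^6 healthy cells, small mutation rates
b = [0.0, 0.1, 0.2]
d = [0.0, 0.05, 0.05]
u = [1e-6, 1e-5, 0.0]
m = expected_counts(b, d, u, [1e6, 0.0, 0.0], t=10.0)
print(m)
```

The same first-moment system can be written as m' = mA for a bidiagonal matrix A, so it also admits a matrix-exponential solution; the numerical integration above is simply the quickest way to evaluate it.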
A positive recurrent, aperiodic Markov chain is said to be long-range dependent (LRD) when the indicator function of a particular state is LRD. This happens if and only if the return time distribution for that state has infinite variance. We investigate the question of whether other instantaneous functions of the Markov chain also inherit this property. We provide conditions under which the function has the same degree of long-range dependence as the chain itself. We illustrate our results through three examples in diverse fields: queueing networks, source compression, and finance.
A discrete-time SIS model is presented that allows individuals in the population to vary in terms of their susceptibility to infection and their rate of recovery. This model is a generalisation of the metapopulation model presented in McVinish and Pollett (2010). The main result of the paper is a central limit theorem showing that fluctuations in the proportion of infected individuals around the limiting proportion converge, when appropriately rescaled, to a Gaussian random variable. In contrast to the case where there is no variation amongst individuals, the limiting Gaussian distribution has a nonzero mean.
Let X = {Xt: t ≥ 0} be a stationary piecewise continuous Rd-valued process that moves between jumps along the integral curves of a given continuous vector field, and let S ⊂ Rd be a smooth surface. The aim of this paper is to derive a multivariate version of Rice's formula, relating the intensity of the point process of (localized) continuous crossings of S by X to the distribution of X0. Our result is illustrated by examples relating to queueing networks and stress release network models.
Many natural populations are well modelled through time-inhomogeneous stochastic processes. Such processes have been analysed in the physical sciences using a method based on Lie algebras, but this methodology is not widely used for models with ecological, medical, and social applications. In this paper we present the Lie algebraic method, and apply it to three biologically well-motivated examples. The result of this is a solution form that is often highly computationally advantageous.
A correspondence formula between the laws of dual Markov chains on Z with two transition jumps is established. This formula contributes to the study of random walks in stationary random environments. Counterexamples with more than two jumps are exhibited.
We extend Goldie's (1991) implicit renewal theorem to enable the analysis of recursions on weighted branching trees. We illustrate the developed method by deriving the power-tail asymptotics of the distributions of the solutions R to the recursion R =D Q ∨ ⋁i=1N CiRi and similar recursions, where (Q, N, C1, C2,…) is a nonnegative random vector with N ∈ {0, 1, 2, 3,…} ∪ {∞}, and R1, R2,… are independent and identically distributed copies of R, independent of (Q, N, C1, C2,…); here ‘∨’ denotes the maximum operator.
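The fixed point of such a max-type recursion can be approximated numerically by iterating the distributional map on an empirical pool of samples. The sketch below uses assumed illustrative laws (Q exponential, N uniform on {0, 1, 2, 3}, Ci uniform on (0, 0.9)); it is not the paper's method, only a way to watch the iteration settle down.

```python
import random

def iterate_pool(pool, rng, size=20000):
    """One distributional iteration of R -> Q ∨ max_{i<=N} C_i R_i,
    resampling the R_i from the current empirical pool."""
    new = []
    for _ in range(size):
        q = rng.expovariate(1.0)        # Q ~ Exp(1)            (assumed)
        n = rng.getrandbits(2)          # N uniform on {0,1,2,3} (assumed)
        r = q
        for _ in range(n):
            c = 0.9 * rng.random()      # C_i ~ U(0, 0.9)       (assumed)
            r = max(r, c * rng.choice(pool))
        new.append(r)
    return new

rng = random.Random(0)
pool = [rng.expovariate(1.0) for _ in range(20000)]  # start from the law of Q
for _ in range(8):
    pool = iterate_pool(pool, rng)
mean_r = sum(pool) / len(pool)
print(mean_r)
```

With these contractive weights the light-tailed behaviour dominates; power tails appear under the moment conditions on (N, C1, C2,…) studied in the paper.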
A stochastic ordering approach is applied with Stein's method for approximation by the equilibrium distribution of a birth-death process. The usual stochastic order and the more general s-convex orders are discussed. Attention is focused on Poisson and translated Poisson approximations of a sum of dependent Bernoulli random variables, for example, k-runs in independent and identically distributed Bernoulli trials. Other applications include approximation by polynomial birth-death distributions.
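A translated Poisson approximation of the kind mentioned above can be checked numerically. The sketch below, with assumed parameters, counts overlapping 2-runs in Bernoulli(p) trials, fits a Poisson law shifted by an integer so that the mean and (approximately) the variance match, and reports the empirical total variation distance; it illustrates the target of the approximation, not the paper's Stein-method bounds.

```python
import math
import random

def run_count(bits, k=2):
    """Number of overlapping k-runs of successes in a 0/1 sequence."""
    return sum(all(bits[i + j] for j in range(k))
               for i in range(len(bits) - k + 1))

rng = random.Random(7)
n, p, trials = 100, 0.3, 20000          # assumed illustrative parameters
samples = [run_count([rng.random() < p for _ in range(n)])
           for _ in range(trials)]
mean = sum(samples) / trials
var = sum((x - mean) ** 2 for x in samples) / trials

# translated Poisson fit: W ≈ s + Poisson(lam), matching the mean exactly
# and the variance up to the integer rounding of the shift
s = math.floor(mean - var)
lam = mean - s

def tp_pmf(j):
    return math.exp(-lam) * lam ** (j - s) / math.factorial(j - s) if j >= s else 0.0

emp = [sum(1 for x in samples if x == j) / trials for j in range(41)]
tv = 0.5 * sum(abs(emp[j] - tp_pmf(j)) for j in range(41))
print(round(mean, 2), round(var, 2), round(tv, 3))
```

Because the 2-run count is overdispersed relative to a plain Poisson variable, the integer shift s is negative here; the fitted translated Poisson tracks the empirical law closely.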
In this paper we determine the distributions of occupation times of a Markov-modulated Brownian motion (MMBM) in separate intervals before a first passage time or an exit from an interval. We derive the distributions in terms of their Laplace transforms, and we also distinguish between occupation times in different phases. For MMBMs with strictly positive variation parameters, we further propose scale functions.
Motivated by the study of the asymptotic normality of the least-squares estimator in the (autoregressive) AR(1) model under possibly infinite variance, in this paper we investigate a self-normalized central limit theorem for Markov random walks. That is, let {Xn, n ≥ 0} be a Markov chain on a general state space X with transition probability P and invariant measure π. Suppose that an additive component Sn takes values on the real line and is adjoined to the chain such that {Sn, n ≥ 1} is a Markov random walk. Assume that Sn = ∑k=1n ξk, and that {ξn, n ≥ 1} is a nondegenerate and stationary sequence under π that belongs to the domain of attraction of the normal law with zero mean and possibly infinite variance. By making use of an asymptotic variance formula for Sn / √n, we prove a self-normalized central limit theorem for Sn under some regularity conditions. An essential idea in our proof is to bound the covariance of the Markov random walk via a sequence of weight functions, which plays a crucial role in determining the moment condition and dependence structure of the Markov random walk. As illustrations, we apply our results to the finite-state Markov chain, the AR(1) model, and the linear state space model.
Continuous-time discrete-state random Markov chains generated by a random linear differential equation with a random tridiagonal matrix are shown to have a random attractor consisting of singleton subsets, essentially a random path, in the simplex of probability vectors. The proof uses comparison theorems for Carathéodory random differential equations and the fact that the linear cocycle generated by the Markov chain is a uniformly contractive mapping of the positive cone into itself with respect to the Hilbert projective metric. It does not involve probabilistic properties of the sample path and is thus equally valid in the nonautonomous deterministic context of Markov chains with, say, periodically varying transition probabilities, in which case the attractor is a periodic path.
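The singleton-attractor phenomenon can be glimpsed numerically: two solutions of the forward equation p′ = pQ(t), driven by the same randomly switching tridiagonal generator, contract toward one another. The sketch below is a crude illustration under assumed rates, not the paper's comparison-theorem proof.

```python
import random

def random_tridiag_generator(k, rng):
    """Random conservative tridiagonal rate matrix (row sums zero)."""
    Q = [[0.0] * k for _ in range(k)]
    for i in range(k):
        if i > 0:
            Q[i][i - 1] = rng.uniform(0.5, 1.5)
        if i < k - 1:
            Q[i][i + 1] = rng.uniform(0.5, 1.5)
        Q[i][i] = -sum(Q[i])
    return Q

def step(p, Q, h):
    """One forward-Euler step of p' = p Q."""
    k = len(p)
    return [p[j] + h * sum(p[i] * Q[i][j] for i in range(k)) for j in range(k)]

rng = random.Random(1)
k, h = 4, 0.01
p = [1.0, 0.0, 0.0, 0.0]      # two different initial probability vectors
q = [0.0, 0.0, 0.0, 1.0]
d0 = sum(abs(a - b) for a, b in zip(p, q))
Q = random_tridiag_generator(k, rng)
for n in range(3000):
    if n % 100 == 0:          # re-draw the generator now and then
        Q = random_tridiag_generator(k, rng)
    p, q = step(p, Q, h), step(q, Q, h)   # both driven by the SAME noise
d1 = sum(abs(a - b) for a, b in zip(p, q))
print(d0, d1)
```

The distance d1 between the two trajectories is many orders of magnitude smaller than the initial distance d0, consistent with the uniform contraction in the Hilbert projective metric used in the proof.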
This paper considers singular systems that involve both continuous dynamics and discrete events with the coefficients being modulated by a continuous-time Markov chain. The underlying systems have two distinct characteristics. First, the systems are singular, that is, characterized by a singular coefficient matrix. Second, the Markov chain of the modulating force has a large state space. We focus on stability of such hybrid singular systems. To carry out the analysis, we use a two-time-scale formulation, which is based on the rationale that, in a large-scale system, not all components or subsystems change at the same speed. To highlight the different rates of variation, we introduce a small parameter ε>0. Under suitable conditions, the system has a limit. We then use a perturbed Lyapunov function argument to show that if the limit system is stable then so is the original system in a suitable sense for ε small enough. This result presents a perspective on reduction of complexity from a stability point of view.
We study the smooth-fit property of the American put price with finite maturity in an exponential Lévy model when the underlying stock pays dividends at a continuous rate. As in the perpetual case, a regularity property is sufficient for smooth fit to occur. We also derive conditions on the Lévy measure under which smooth fit fails.