In this paper, we consider the random-scan symmetric random walk Metropolis algorithm (RSM) on ℝᵈ. This algorithm performs a Metropolis step on just one coordinate at a time (as opposed to the full-dimensional symmetric random walk Metropolis algorithm, which proposes a transition on all coordinates at once). We present various sufficient conditions implying V-uniform ergodicity of the RSM when the target density decreases either subexponentially or exponentially in the tails.
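As a concrete illustration (not taken from the paper), a single random-scan update can be sketched in a few lines of Python; the uniform proposal width `step` and the Gaussian example target are assumptions of the sketch:

```python
import math
import random

def rsm_step(x, log_target, step=1.0, rng=random):
    """One random-scan Metropolis update: pick a single coordinate
    uniformly at random and propose a symmetric move on it alone."""
    i = rng.randrange(len(x))            # coordinate chosen at random
    y = list(x)
    y[i] += rng.uniform(-step, step)     # symmetric proposal on coordinate i
    # Symmetric proposal: the acceptance ratio reduces to the target ratio.
    if rng.random() < math.exp(min(0.0, log_target(y) - log_target(x))):
        return y
    return x

# Illustrative target: standard Gaussian on R^3 (log-density up to a constant).
log_pi = lambda v: -0.5 * sum(t * t for t in v)

random.seed(0)
x = [0.0, 0.0, 0.0]
for _ in range(1000):
    x = rsm_step(x, log_pi)
```

Because the proposal is symmetric, the Hastings correction cancels and only the target ratio enters the acceptance probability.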
A new theorem on the existence of an invariant initial distribution for a Markov chain evolving on a Polish space is proved. As an application of the theorem, sufficient conditions for the existence of integrated ARCH processes are established. In the case where these conditions are violated, the top Lyapunov exponent is shown to be zero.
In this paper, we investigate sooner and later waiting time problems for patterns S₀ and S₁ in multistate Markov dependent trials. The probability functions and the probability generating functions of the sooner and later waiting time random variables are studied. Further, the probability generating functions of the distributions of distances between successive occurrences of S₀, between successive occurrences of S₀ and S₁, and of the waiting time until the rth occurrence of S₀ are also given.
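For intuition, the sooner waiting time can be estimated by direct simulation of a small Markov chain; this is only a sketch, and the two-state transition matrix and patterns below are illustrative choices, not from the paper:

```python
import random

def sooner_waiting_time(P, patterns, start=0, rng=random, t_max=10**6):
    """Number of Markov-dependent trials until the first ('sooner')
    occurrence of any pattern in `patterns` (P is row-stochastic)."""
    state, history, n = start, [start], 1
    while n < t_max:
        r, cum = rng.random(), 0.0
        for j, pij in enumerate(P[state]):   # sample the next state
            cum += pij
            if r < cum:
                state = j
                break
        history.append(state)
        n += 1
        if any(history[-len(pat):] == list(pat) for pat in patterns):
            return n
    return t_max

P = [[0.5, 0.5], [0.5, 0.5]]                 # illustrative chain
t = sooner_waiting_time(P, [(0, 0), (1, 1)])
```

Averaging `t` over many runs approximates the mean of the sooner waiting time distribution that the paper characterizes exactly via generating functions.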
In this paper, the class of controlled branching processes with random control functions introduced by Yanev (1976) is considered. For this class, necessary and sufficient conditions are established for the process to become extinct with probability 1 and the limit probabilistic behaviour of the population size, suitably normed, is investigated.
Let X be a one-dimensional strong Markov process with continuous sample paths. Using Volterra-Stieltjes integral equation techniques we investigate Hölder continuity and differentiability of first passage time distributions of X with respect to continuous lower and upper moving boundaries. Under mild assumptions on the transition function of X we prove the existence of a continuous first passage time density to one-sided differentiable moving boundaries and derive a new integral equation for this density. We apply our results to Brownian motion and its nonrandom Markovian transforms, in particular to the Ornstein-Uhlenbeck process.
Motivated by the question of the age in a branching population we try to recreate the past by looking back from the currently observed population size. We define a new backward Galton-Watson process and study the case of the geometric offspring distribution with parameter p in detail. The backward process is then the Galton-Watson process with immigration, again with a geometric offspring distribution but with parameter 1-p, and it is also the dual to the original Galton-Watson process. We give the asymptotic distribution of the age when the initial population size is large in the supercritical and critical cases. To this end, we give new asymptotic results on Galton-Watson processes with immigration stopped at zero.
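A minimal forward simulation of the geometric-offspring case reads as follows (illustrative only; the paper's backward chain is the dual obtained by swapping the parameter p for 1 − p and adding immigration):

```python
import random

def gw_generation(z, p, rng=random):
    """One generation of a Galton-Watson process whose offspring law is
    Geometric(p) on {0, 1, 2, ...}, with mean (1 - p) / p."""
    total = 0
    for _ in range(z):
        while rng.random() > p:   # each failure contributes one offspring
            total += 1
    return total

random.seed(0)
z = 10                            # initial population size
for _ in range(5):                # p = 0.6 > 1/2: the subcritical case
    z = gw_generation(z, 0.6)
```

The process is supercritical exactly when p < 1/2, since the offspring mean (1 − p)/p then exceeds 1.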
This paper investigates ergodicity and stability for generalised Markov branching processes with resurrection. Easy-to-check criteria, including several clear-cut corollaries, are established for ordinary and strong ergodicity of such processes. The equilibrium distribution is given in an elegant closed form for the ergodic case. The results have a clear probabilistic interpretation, which is explained.
Galton-Watson forests consisting of N roots (or trees) and n nonroot vertices are studied. The limit distributions of the number of leaves in such a forest are obtained. By a leaf we mean a vertex from which there are no arcs emanating.
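Such a forest can be grown and its leaves counted as in the following sketch (which, unlike the paper, does not condition on the number n of nonroot vertices; the subcritical offspring law is an illustrative assumption):

```python
import random

def forest_leaf_count(n_roots, offspring, rng=random):
    """Grow a Galton-Watson forest from n_roots roots and count its
    leaves, i.e. vertices with no emanating arcs."""
    leaves, frontier = 0, n_roots
    while frontier:
        nxt = 0
        for _ in range(frontier):
            c = offspring(rng)
            if c == 0:
                leaves += 1       # childless vertex: a leaf
            nxt += c
        frontier = nxt
    return leaves

random.seed(0)
# Illustrative subcritical offspring law with mean 0.75.
leaves = forest_leaf_count(5, lambda rng: rng.choice([0, 0, 1, 2]))
```

Every finite tree has at least one leaf, so the count is always at least the number of roots.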
In this paper, we study the first instant when Brownian motion either spends consecutively more than a certain time above a certain level, or reaches another level. This stopping time generalizes the ‘Parisian’ stopping times that were introduced by Chesney et al. (1997). Using excursion theory, we derive the Laplace transform of this stopping time. We apply this result to the valuation of investment projects with a delay constraint, but with an alternative: pay a higher cost and get the project started immediately.
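The stopping time can be approximated by a discretized random-walk simulation of the Brownian path (a sketch under an Euler scheme; the level, window, and barrier values are illustrative, and the paper works with the exact Laplace transform instead):

```python
import math
import random

def parisian_or_hit(level, window, barrier, dt=1e-3, t_max=100.0, rng=random):
    """First time a discretized Brownian path either spends more than
    `window` consecutive time units above `level`, or reaches `barrier`."""
    x, t, above = 0.0, 0.0, 0.0
    sq = math.sqrt(dt)
    while t < t_max:
        x += sq * rng.gauss(0.0, 1.0)   # Euler step of Brownian motion
        t += dt
        if x >= barrier:                # alternative: hit the barrier
            return t
        above = above + dt if x > level else 0.0
        if above > window:              # excursion above `level` too long
            return t
    return t_max

random.seed(0)
t = parisian_or_hit(level=0.1, window=0.5, barrier=2.0)
```

The clock `above` resets whenever the path falls back below the level, which is exactly the excursion structure the Parisian criterion measures.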
Moderate deviation principles are established in dimensions d ≥ 3 for super-Brownian motion with random immigration, where the immigration rate is governed by the trajectory of another super-Brownian motion. This fills the gap between the central limit theorem and the large deviation principles obtained for this model by Hong and Li (1999) and Hong (2001).
In this paper we introduce the notion of structured phase-type representations. A structured representation corresponds to a class of phase-type representations having the same graph but free parameters. We define a generic property for such a representation. Then we prove that, generically, an irreducible structured representation of a given order corresponds to a distribution whose degree is equal to this order.
In this paper, we apply coupling methods to study strong ergodicity for Markov processes, and sufficient conditions are presented in terms of the expectations of coupling times. In particular, explicit criteria are obtained for one-dimensional diffusions and birth-death processes to be strongly ergodic. As a by-product, strong ergodicity implies that the essential spectra of the generators for these processes are empty.
Motivated by various applications in queueing systems, this work is devoted to continuous-time Markov chains with countable state spaces that involve both a fast time scale and a slow time scale, with the aim of approximating time-varying queueing systems by their quasistationary counterparts. Under smoothness conditions on the generators, asymptotic expansions of probability vectors and transition probability matrices are constructed. Uniform error bounds are obtained, and then sequences of occupation measures and their functionals are examined. Mean square error estimates of a sequence of occupation measures are obtained; a scaled sequence of functionals of occupation measures is shown to converge to a Gaussian process with zero mean. The representation of the variance of the limit process is also explicitly given. The results obtained are then applied to treat Mₜ/Mₜ/1 queues and Markov-modulated fluid buffer models.
In this paper, we introduce a bisexual Galton-Watson branching process with mating function dependent on the population size in each generation. Necessary and sufficient conditions for the process to become extinct with probability 1 are investigated for two possible conditions on the sequence of mating functions. Some results for the probability generating functions associated with the process are also given.
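One generation of such a process can be sketched as follows; note that the fixed mating function min(f, m) and the offspring and sex distributions are illustrative textbook choices, whereas the paper's mating functions depend on the current population size:

```python
import random

def bisexual_generation(units, p_female=0.5, max_offspring=3, rng=random,
                        mating=min):
    """One generation of a bisexual Galton-Watson process: every mating
    unit produces offspring, each offspring is female with probability
    p_female, and a mating function pairs females and males into the
    next generation's mating units (here min(f, m))."""
    females = males = 0
    for _ in range(units):
        for _ in range(rng.randint(0, max_offspring)):
            if rng.random() < p_female:
                females += 1
            else:
                males += 1
    return mating(females, males)

random.seed(0)
units = bisexual_generation(10)
```

Extinction occurs once the mating function returns zero, since no mating units remain to reproduce.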
In this paper, we consider the compound Poisson process that is perturbed by diffusion (CPD). We derive formulae for the Laplace transform, expectation, and variance of the total duration of negative surplus for the CPD, and also present some examples of the CPD with an exponential individual claim amount distribution and with a mixture-of-exponentials individual claim amount distribution.
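The quantity in question can be estimated by simulating the perturbed surplus on a grid (a sketch, not the paper's analytical method; all parameter values are illustrative, and exponential claims match the paper's first example):

```python
import math
import random

def negative_surplus_time(u, c, lam, claim_mean, sigma,
                          horizon=50.0, dt=1e-3, rng=random):
    """Euler simulation of a compound Poisson surplus perturbed by
    diffusion, U(t) = u + c*t - S(t) + sigma*W(t), returning the total
    time the surplus spends below zero on [0, horizon]."""
    x, neg = u, 0.0
    sq = math.sqrt(dt)
    for _ in range(int(horizon / dt)):
        x += c * dt + sigma * sq * rng.gauss(0.0, 1.0)  # premiums + diffusion
        if rng.random() < lam * dt:                     # Poisson claim arrival
            x -= rng.expovariate(1.0 / claim_mean)      # exponential claim
        if x < 0:
            neg += dt
    return neg

random.seed(0)
neg = negative_surplus_time(u=5.0, c=1.0, lam=0.5, claim_mean=1.5, sigma=0.5)
```

Averaging `neg` over many independent runs approximates the expectation for which the paper gives a closed-form expression.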
Given a sequence S and a collection Ω of d words, it is of interest in many applications to characterize the multivariate distribution of the vector of counts U = (N(S,w1), …, N(S,wd)), where N(S,w) is the number of times a word w ∈ Ω appears in the sequence S. We obtain an explicit bound on the error made when approximating the multivariate distribution of U by the normal distribution, when the underlying sequence is i.i.d. or first-order stationary Markov over a finite alphabet. When the limiting covariance matrix of U is nonsingular, the error bounds decay at rate O((log n) / √n) in the i.i.d. case and O((log n)³ / √n) in the Markov case. In order for U to have a nondegenerate covariance matrix, it is necessary and sufficient that the counted word set Ω is not full, that is, that Ω is not the collection of all possible words of some length k over the given finite alphabet. To supply the bounds on the error, we use a version of Stein's method.
We propose and study a random crystalline algorithm (a discrete approximation) of the Gauss curvature flow of smooth simple closed convex curves in ℝ² as a stepping stone to the full understanding of such phenomena as the wearing process of stones on a beach.
We consider a fluid queue with downward jumps, where the fluid flow rate and the downward jumps are controlled by a background Markov chain with a finite state space. We show that the stationary distribution of the buffer content has a matrix exponential form, and identify the exponent matrix. We derive these results using time-reversed arguments and the background state distribution at the hitting time of the corresponding fluid flow with upward jumps. This distribution was recently studied for a fluid queue with upward jumps under a stability condition. We give an alternative proof of this result using the rate conservation law. This proof is not only simpler, but also exposes an underlying Markov structure and enables us to study more complex cases in which the fluid flow has jumps subject to a nondecreasing Lévy process, a Brownian component, and countably many background states.
A Markov chain model for a battle between two opposing forces is formulated, which is a stochastic version of one studied by F. W. Lanchester. Solutions of the backward equations for the final state yield martingales and stopping identities, but a more powerful technique is a time-reversal analogue of a known method for studying urn models. A general version of a remarkable result of Williams and McIlroy (1998) is proved.