We introduce a notion of kth order stochastic monotonicity and duality that allows us to unify the notion used in insurance mathematics (sometimes referred to as Siegmund's duality) for the study of ruin probability and the duality responsible for the so-called put-call symmetries in option pricing. Our general kth order duality can be interpreted financially as put-call symmetry for powered options. The main objective of this paper is to develop an effective analytic approach to the analysis of duality that will lead to the full characterization of kth order duality of Markov processes in terms of their generators, which is new even for the well-studied case of put-call symmetries.
We consider a queueing loss system with heterogeneous, skill-based servers and arbitrary service distributions. We assume Poisson arrivals, with each arrival having a vector indicating which of the servers are eligible to serve it. An arrival can only be assigned to a server that is both idle and eligible. Assuming exchangeable eligibility vectors and an idle time ordering assignment policy, the limiting distribution of the system is derived. It is shown that the limiting probabilities of the set of idle servers depend on the service time distributions only through their means. Moreover, conditional on the set of idle servers, the remaining service times of the busy servers are independent and have their respective equilibrium service distributions.
The eigentime identity for one-dimensional diffusion processes on the half-line with an entrance boundary at ∞ is obtained by using the trace of the deviation kernel. For the case of an exit boundary at ∞, a similar eigentime identity is presented with the aid of the Green function. Explicit equivalent statements are also listed in terms of the strong ergodicity or the uniform decay for diffusion processes.
We study the decay parameter (the rate of convergence of the transition probabilities) of a birth-death process on {0, 1, …}, which we allow to evanesce by escape, via state 0, to an absorbing state -1. Our main results are representations for the decay parameter under four different scenarios, derived from a unified perspective involving the orthogonal polynomials appearing in Karlin and McGregor's representation for the transition probabilities of a birth-death process, and the Courant-Fischer theorem on eigenvalues of a symmetric matrix. We also show how the representations readily yield some upper and lower bounds that have appeared in the literature.
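As a numerical companion, the decay parameter can be approximated by the smallest eigenvalue of a truncated generator, which is the quantity to which the Courant-Fischer variational characterization applies. This is a generic sketch under illustrative assumptions (constant rates, truncation level 200), not the paper's representations:

```python
import numpy as np

def decay_parameter_estimate(birth, death, n):
    # Generator of the birth-death process restricted to the transient
    # states {0, ..., n-1}; death(0) is the killing rate into the
    # absorbing state -1, so it appears only on the diagonal.
    Q = np.zeros((n, n))
    for i in range(n):
        if i + 1 < n:
            Q[i, i + 1] = birth(i)
        if i > 0:
            Q[i, i - 1] = death(i)
        Q[i, i] = -(birth(i) + death(i))
    # -Q is similar to a symmetric tridiagonal matrix, so its spectrum is
    # real; by Courant-Fischer its smallest eigenvalue has a variational
    # characterization, and it approximates the decay parameter as n grows.
    return min(np.linalg.eigvals(-Q).real)

# Illustrative rates: constant birth rate 1, death rate 2,
# killing rate 1 from state 0 into the absorbing state -1.
alpha_n = decay_parameter_estimate(lambda i: 1.0,
                                   lambda i: 2.0 if i > 0 else 1.0,
                                   200)
```

The truncation level is a numerical convenience; under suitable conditions the smallest eigenvalue converges to the decay parameter as the truncation grows.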
Drawdowns measuring the decline in value from the historical running maxima over a given period of time are considered as extremal events from the standpoint of risk management. To date, research on the topic has mainly focused on the side of severity by studying the first drawdown over a certain prespecified size. In this paper we extend the discussion by investigating the frequency of drawdowns and some of their inherent characteristics. We consider two types of drawdown time sequences depending on whether a historical running maximum is reset or not. For each type we study the frequency rate of drawdowns, the Laplace transform of the nth drawdown time, the distribution of the running maximum, and the value process at the nth drawdown time, as well as some other quantities of interest. Interesting relationships between these two drawdown time sequences are also established. Finally, insurance policies protecting against the risk of frequent drawdowns are also proposed and priced.
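The two types of drawdown time sequence can be sketched in simulation. The Gaussian random-walk path, the threshold, and the exact restart conventions below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def drawdown_times(path, a, reset_max):
    """Indices of the successive drawdown epochs of size >= a.
    reset_max=True: after each epoch the running maximum restarts from the
    current value.  reset_max=False: the historical maximum is kept, and
    the next epoch is counted only after the path has recovered to it."""
    times, m, armed = [], path[0], True
    for t, x in enumerate(path):
        if x >= m:
            m, armed = x, True      # new historical maximum reached
        if armed and m - x >= a:
            times.append(t)
            if reset_max:
                m = x               # restart the maximum at the current value
            else:
                armed = False       # wait for recovery to the old maximum
    return times

path = np.cumsum(rng.standard_normal(10_000))
t_reset = drawdown_times(path, 2.0, reset_max=True)
t_keep = drawdown_times(path, 2.0, reset_max=False)
```

Between any recovery to the historical maximum and the next drop of size a, the reset sequence records at least one epoch, so the reset sequence is at least as frequent as the non-reset one.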
Λ-coalescents model the evolution of a coalescing system in which any number of components randomly sampled from the whole may merge into larger blocks. This survey focuses on related combinatorial constructions and the large-sample behaviour of the functionals which characterize in some way the speed of coalescence.
The asymptotic behaviour of many locally branching epidemic models can, at least to first order, be deduced from the limit theory of two branching processes. The first is Whittle's (1955) branching approximation to the early stages of the epidemic, the phase in which approximately exponential growth takes place. The second is the susceptibility approximation: the backward branching process that approximates the history of the contacts that would lead to an individual becoming infected. The simplest coupling arguments for demonstrating the closeness of these branching process approximations do not keep the processes identical for quite long enough. Thus, arguments showing that the differences are unimportant are also needed. In this paper we show that, for some models, couplings can be constructed that are sufficiently accurate for this extra step to be dispensed with.
Consider two independent Goldstein-Kac telegraph processes X1(t) and X2(t) on the real line ℝ. The processes Xk(t), k = 1, 2, describe stochastic motions at finite constant velocities c1 > 0 and c2 > 0 that start at the initial time instant t = 0 from the origin of ℝ and are controlled by two independent homogeneous Poisson processes of rates λ1 > 0 and λ2 > 0, respectively. We obtain a closed-form expression for the probability distribution function of the Euclidean distance ρ(t) = |X1(t) - X2(t)| between these processes at an arbitrary time instant t > 0. Some numerical results are also presented.
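A Monte Carlo sketch of the two processes and their distance (the velocities, rates, and time horizon below are arbitrary illustrative values, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def telegraph(c, lam, t, n):
    """Simulate n values of a Goldstein-Kac telegraph process at time t:
    motion at speed c from the origin, with the direction reversed at the
    events of a Poisson process of rate lam (random initial direction)."""
    out = np.empty(n)
    for k in range(n):
        pos, direction, s = 0.0, rng.choice([-1.0, 1.0]), 0.0
        while True:
            step = rng.exponential(1.0 / lam)   # time to the next reversal
            if s + step >= t:
                pos += direction * c * (t - s)  # run out the clock
                break
            pos += direction * c * step
            s += step
            direction = -direction
        out[k] = pos
    return out

t = 2.0
x1 = telegraph(c=1.0, lam=1.5, t=t, n=20_000)
x2 = telegraph(c=2.0, lam=1.0, t=t, n=20_000)
dist = np.abs(x1 - x2)   # Monte Carlo sample of rho(t)
```

Since |Xk(t)| ≤ ck t almost surely, the sampled distance is supported on [0, (c1 + c2)t], which gives a quick sanity check on the simulation.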
Let 𝓈 be a finite or countable set. Given a matrix F = (Fij)i,j∈𝓈 of distribution functions on ℝ and a quasistochastic matrix Q = (qij)i,j∈𝓈, i.e. an irreducible nonnegative matrix with maximal eigenvalue 1 and associated unique (modulo scaling) positive left and right eigenvectors u and v, the matrix renewal measure ∑n≥0Qn ⊗ F*n associated with Q ⊗ F := (qijFij)i,j∈𝓈 (see below for precise definitions) and a related Markov renewal equation are studied. This was done earlier by de Saporta (2003) and Sgibnev (2006, 2010) by drawing on potential theory, matrix-analytic methods, and Wiener-Hopf techniques. In this paper we describe a probabilistic approach which is quite different and starts from the observation that Q ⊗ F becomes an ordinary semi-Markov matrix after a harmonic transform. This allows us to relate Q ⊗ F to a Markov random walk {(Mn, Sn)}n≥0 with discrete recurrent driving chain {Mn}n≥0. It is then shown that renewal theorems including a Choquet-Deny-type lemma may be easily established by resorting to standard renewal theory for ordinary random walks. The paper concludes with two typical examples.
Consider a one-sided Markov additive process with an upper and a lower barrier, where each can be either reflecting or terminating. For both defective and nondefective processes, and all possible scenarios, we identify the corresponding potential measures, which help to generalize a number of results for one-sided Lévy processes. The resulting rather neat formulae have various applications in risk and queueing theories, and, in particular, they lead to quasistationary distributions of the corresponding processes.
In this paper we study a weak law of large numbers for the total internal length of the Bolthausen-Sznitman coalescent, thereby obtaining the weak limit law of the centered and rescaled total external length; this extends results obtained in Dhersin and Möhle (2013). An application to population genetics dealing with the total number of mutations in the genealogical tree is also given.
A lumping of a Markov chain is a coordinatewise projection of the chain. We characterise the entropy rate preservation of a lumping of an aperiodic and irreducible Markov chain on a finite state space by the random growth rate of the cardinality of the realisable preimage of a finite-length trajectory of the lumped chain and by the information needed to reconstruct original trajectories from their lumped images. Both are purely combinatorial criteria, depending only on the transition graph of the Markov chain and the lumping function. A lumping is strongly k-lumpable if and only if the lumped process is a kth-order Markov chain for each starting distribution of the original Markov chain. We characterise strong k-lumpability via tightness of stationary entropic bounds. In the sparse setting, we give sufficient conditions on the lumping to both preserve the entropy rate and be strongly k-lumpable.
The large deviation principle in the small noise limit is derived for solutions of possibly degenerate Itô stochastic differential equations with predictable coefficients, which may also depend on the large deviation parameter. The result is established under mild assumptions using the Dupuis-Ellis weak convergence approach. Applications to certain systems with memory and to positive diffusions with square-root-like dispersion coefficient are included.
Let {Xn}n∈ℕ be a Markov chain on a measurable space with transition kernel P, and let V be a weight function on the state space. The Markov kernel P is here considered as a bounded linear operator on the weighted-supremum space associated with V. The combination of quasicompactness arguments with a precise analysis of the eigenelements of P then allows us to estimate the geometric rate of convergence ρV(P) of {Xn}n∈ℕ to its invariant probability measure in operator norm on this space. A general procedure to compute ρV(P) for discrete Markov random walks with identically distributed bounded increments is specified.
Looking at a large branching population, we determine along which path the population, having started at 1 at time 0, ended up in B at time N. The result describes the density process, that is, population numbers divided by the initial number K (where K is assumed to be large). The model considered is a Galton-Watson process. It is found that in some cases population paths exhibit the strange feature that population numbers first go down and then increase; this phenomenon requires further investigation. The technique uses large deviations, and the rate function, based on Cramér's theorem, is given. It also involves an analysis of the existence of solutions of a certain algebraic equation.
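The conditioned path can be illustrated by brute-force rejection sampling: simulate many density paths and keep those whose endpoint lands in a target window B, then average. The Poisson(1) offspring law, window, and sample sizes below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(2)

K, N = 500, 10              # initial population size and time horizon
reps = 100_000
lo, hi = 0.5, 0.6           # target window B for the density at time N

# Simulate all replicates at once: with Poisson(1) offspring, the next
# generation size given z parents is Poisson(z) (and Poisson(0) = 0, so
# extinction is absorbed automatically).
paths = np.empty((reps, N + 1))
z = np.full(reps, K)
paths[:, 0] = 1.0
for n in range(1, N + 1):
    z = rng.poisson(z)
    paths[:, n] = z / K

keep = (lo <= paths[:, -1]) & (paths[:, -1] <= hi)
mean_path = paths[keep].mean(axis=0)   # empirical path to B, given arrival in B
```

With a critical offspring law the density stays near 1 on average, so conditioning on a window below 1 selects the atypical paths whose shape the large deviation rate function describes.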
In this paper we establish the theory of weak convergence (toward a normal distribution) for both single-chain and population stochastic approximation Markov chain Monte Carlo (MCMC) algorithms (SAMCMC algorithms). Based on the theory, we give an explicit ratio of convergence rates for the population SAMCMC algorithm and the single-chain SAMCMC algorithm. Our results provide a theoretical guarantee that the population SAMCMC algorithms are asymptotically more efficient than the single-chain SAMCMC algorithms when the gain factor sequence decreases more slowly than O(1 / t), where t indexes the number of iterations. This is of interest for practical applications.
We introduce a new class of Monte Carlo methods, which we call exact estimation algorithms. Such algorithms provide unbiased estimators for equilibrium expectations associated with real-valued functionals defined on a Markov chain. We provide easily implemented algorithms for the class of positive Harris recurrent Markov chains, and for chains that are contracting on average. We further argue that exact estimation in the Markov chain setting provides a significant theoretical relaxation relative to exact simulation methods.
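One generic construction of such an unbiased estimator for a chain that is contracting on average is a randomized-truncation telescoping scheme in the spirit of Rhee and Glynn. The sketch below applies it to a toy AR(1) chain; the chain, the function f, and all parameters are illustrative assumptions, not the specific algorithms of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy contracting chain: X_{n+1} = a X_n + eps_{n+1}, |a| < 1, whose
# stationary law is N(0, 1/(1 - a^2)).  We estimate pi(f) for f(x) = x^2
# (true value 1/(1 - a^2) = 4/3) with no burn-in bias.
a, p = 0.5, 0.3             # contraction factor; truncation parameter
f = lambda x: x * x

def unbiased_estimate():
    """One replicate.  X and a one-step-delayed copy Y share innovations,
    so f(X_n) - f(Y_{n-1}) has mean E f(X_n) - E f(X_{n-1}) and shrinks
    geometrically; dividing the n-th difference by P(T >= n) makes the
    randomly truncated telescoping sum unbiased for pi(f)."""
    T = rng.geometric(p) - 1              # P(T >= n) = (1 - p)**n
    eps = rng.standard_normal(T + 1)
    x = y = 0.0                           # common starting point
    z = f(x)                              # n = 0 term
    for n in range(1, T + 1):
        x = a * x + eps[n]                # X_n driven by eps_1, ..., eps_n
        if n >= 2:
            y = a * y + eps[n]            # Y_{n-1} driven by eps_2, ..., eps_n
        z += (f(x) - f(y)) / (1 - p) ** n
    return z

est = np.mean([unbiased_estimate() for _ in range(50_000)])
```

Finite variance needs the truncation tail (1 - p)^n to decay more slowly than the squared coupling gap a^{2n}, i.e. 1 - p > a^2, which holds here.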
Simple random walks on a partially directed version of ℤ2 are considered. More precisely, vertical edges between neighbouring vertices of ℤ2 can be traversed in both directions (they are undirected), while horizontal edges are one-way. The horizontal orientation is prescribed by a random perturbation of a periodic function; the perturbation probability decays according to a power law in the absolute value of the ordinate. We study whether such simple random walks are recurrent or transient, and show that there exists a critical value of the decay power, above which they are almost surely recurrent and below which they are almost surely transient.
One of the standard methods for approximating a bivariate continuous-time Markov chain {X(t), Y(t): t ≥ 0}, which proves too difficult to solve in its original form, is to replace one of its variables by its mean. This leads to a simplified stochastic process for the remaining variable which can usually be solved, although the technique is not always optimal. In this note we consider two cases where the method is successful, for carrier infections and mutating bacteria, and one case, the SIS epidemic, where it is somewhat less so.
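A minimal sketch of the technique on a toy carrier model (all rates and model details here are illustrative assumptions, not taken from the note): carriers Y are removed at rate γY and infect susceptibles X at rate βXY. Replacing Y(t) by its mean y0·e^(-γt) turns X into a one-dimensional death process with mean x0·exp(-(βy0/γ)(1 - e^(-γt))), which we compare against a full bivariate simulation:

```python
import numpy as np

rng = np.random.default_rng(4)

beta, gamma = 0.002, 1.0    # illustrative infection and removal rates
x0, y0, t_end = 200, 50, 2.0

def simulate_xt():
    """Gillespie simulation of the bivariate chain; returns X(t_end)."""
    x, y, t = x0, y0, 0.0
    while True:
        rate = beta * x * y + gamma * y   # total event rate
        if rate == 0:
            return x                      # carriers extinct: X is frozen
        t += rng.exponential(1.0 / rate)
        if t > t_end:
            return x
        if rng.random() < beta * x * y / rate:
            x -= 1                        # infection of a susceptible
        else:
            y -= 1                        # removal of a carrier

mc = np.mean([simulate_xt() for _ in range(2000)])
approx = x0 * np.exp(-(beta * y0 / gamma) * (1 - np.exp(-gamma * t_end)))
```

In this nearly linear regime the one-variable approximation tracks the simulated mean closely; by Jensen's inequality it slightly underestimates E X(t), which hints at why the method can be less accurate for strongly nonlinear models such as the SIS epidemic.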
The extremes of a univariate Markov chain with regularly varying stationary marginal distribution and asymptotically linear behavior are known to exhibit a multiplicative random walk structure called the tail chain. In this paper we extend this fact to Markov chains with multivariate regularly varying marginal distributions in ℝd. We analyze both the forward and the backward tail process and show that they mutually determine each other through a kind of adjoint relation. In a broader setting, we show that, even for non-Markovian underlying processes, a Markovian forward tail chain always implies that the backward tail chain is also Markovian. We analyze the resulting class of limiting processes in detail. Applications of the theory yield the asymptotic distribution of both the past and the future of univariate and multivariate stochastic difference equations conditioned on an extreme event.