In this paper we consider the optimal control of an infinite dam using policies, assuming that the input process is a compound Poisson process with a non-negative drift term, under both the total discounted cost and the long-run average cost criteria. The results of Lee and Ahn (1998), as well as other well-known results, are shown to follow from ours.
The transition functions for the correlated random walk with two absorbing boundaries are derived by means of a combinatorial construction which is based on Krattenthaler's theorem for counting lattice paths with turns. Results for walks with one boundary and for unrestricted walks are presented as special cases. Finally we give an asymptotic formula, which proves to be useful for computational purposes.
Let (Xt) be a one-dimensional Ornstein-Uhlenbeck process with initial density function f : ℝ+ → ℝ+, which is a regularly varying function with exponent -(1 + η), η ∊ (0,1). We prove the existence of a probability measure ν with a Lebesgue density, depending on η, such that for every A ∊ B(ℝ+):
The classical technique of uniformization (or randomization) for bounded continuous-time Markov chains and Markov reward structures is extended to dynamic systems generated by arbitrary non-negative generators. Most notably, these include so-called input-output models in economic analysis. The results are of practical interest for both computational and theoretical purposes. Particularly, the recursive computation and the limiting behaviour of cumulative reward structures for non-negative dynamic systems is concluded as a special application. Two numerical examples are included to illustrate the conditions and the results.
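For the classical bounded-generator case that the abstract generalizes, uniformization admits a compact implementation: replace the CTMC by a DTMC subordinated to a Poisson process and sum Poisson-weighted powers of the uniformized transition matrix. The sketch below is illustrative (the function name, list-of-lists generator representation, and truncation scheme are my choices, not the paper's):

```python
import math

def uniformize(Q, p0, t, tol=1e-10):
    """Transient distribution p(t) = p0 exp(Qt) of a finite CTMC with
    generator Q (nested lists), computed by uniformization."""
    n = len(Q)
    # Uniformization rate: dominate the largest exit rate.
    lam = max(-Q[i][i] for i in range(n)) or 1.0
    # Uniformized DTMC transition matrix P = I + Q / lam.
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(n)]
         for i in range(n)]
    # p(t) = sum_k e^{-lam t} (lam t)^k / k! * (p0 P^k),
    # truncated once the Poisson weights sum to within tol of 1.
    v = list(p0)               # holds p0 P^k, updated in place
    w = math.exp(-lam * t)     # Poisson(lam t) weight for k = 0
    p = [w * x for x in v]
    k, acc = 0, w
    while 1.0 - acc > tol and k < 10000:
        k += 1
        v = [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]
        w *= lam * t / k
        acc += w
        for j in range(n):
            p[j] += w * v[j]
    return p
```

For a two-state chain with unit switching rates the exact answer is p1(t) = (1 + e^{-2t})/2, which the truncated sum reproduces to within the tolerance.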
We define a class of anticipative flows on Poisson space and compute their Radon-Nikodym derivatives. This result is applied to statistical testing in an anticipative queueing problem.
We consider Markov processes of DNA sequence evolution in which the instantaneous rates of substitution at a site are allowed to depend upon the states at the sites in a neighbourhood of the site at the instant of the substitution. We characterize the class of Markov process models of DNA sequence evolution for which the stationary distribution is a Gibbs measure, and give a procedure for calculating the normalizing constant of the measure. We develop an MCMC method for estimating the transition probability between sequences under models of this type. Finally, we analyse an alignment of two HIV-1 gene sequences using the developed theory and methodology.
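A neighbour-dependent substitution process of the kind described above can be simulated exactly by the Gillespie algorithm: enumerate the rate of every possible single-site substitution given its current neighbours, draw an exponential waiting time, and pick an event proportionally to its rate. The sketch below assumes a user-supplied rate function; its `rate(left, cur, new, right)` interface and the free boundary convention (`None` outside the sequence) are illustrative choices, not the paper's formulation:

```python
import random

def evolve(seq, rate, t_end, rng):
    """Gillespie simulation of a neighbour-dependent substitution process.
    rate(left, cur, new, right) gives the instantaneous rate of replacing
    cur by new given the two neighbouring states (None at the boundary)."""
    seq = list(seq)
    t = 0.0
    while True:
        # Enumerate all possible single-site substitutions and their rates.
        events = []
        for i, cur in enumerate(seq):
            left = seq[i - 1] if i > 0 else None
            right = seq[i + 1] if i < len(seq) - 1 else None
            for new in "ACGT":
                if new != cur:
                    events.append((rate(left, cur, new, right), i, new))
        total = sum(r for r, _, _ in events)
        t += rng.expovariate(total)        # exponential waiting time
        if t > t_end:
            return "".join(seq)
        u = rng.random() * total           # pick an event proportional to rate
        for r, i, new in events:
            u -= r
            if u <= 0:
                seq[i] = new
                break
```

With neighbour-independent rates this reduces to the usual site-independent Markov model; neighbour dependence enters purely through the rate function.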
In a recent paper [4] it was shown that, for an absorbing Markov chain where absorption is not guaranteed, the state probabilities at time t conditional on non-absorption by t generally depend on t. Conditions were derived under which there can be no initial distribution such that the conditional state probabilities are stationary. The purpose of this note is to show that these conditions can be relaxed completely: we prove, once and for all, that there are no circumstances under which a quasistationary distribution can admit a stationary conditional interpretation.
Stress release processes are special Markov models attempting to describe the behaviour of stress and occurrence of earthquakes in seismic zones. The stress is built up linearly by tectonic forces and released spontaneously when earthquakes occur. Assuming that the risk is an exponential function of the stress, we derive closed form expressions for the stationary distribution of such processes, the moments of the risk, and the autocovariance function of the reciprocal risk process.
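A sample path of such a process is easy to simulate: between events the stress grows linearly, and because the risk is an exponential function of the stress, the integrated hazard can be inverted in closed form to draw the next event time. In the sketch below the exponential risk and linear build-up follow the abstract, while the exponentially distributed stress drop at each event is an illustrative choice of release distribution, not taken from the paper:

```python
import math
import random

def simulate_stress_release(x0, rho, alpha, beta, mean_drop, n_events, rng):
    """Sample n_events earthquakes of a stress release process.
    Stress builds linearly at rate rho; events occur with risk
    alpha * exp(beta * x); each event releases an Exp(1/mean_drop) amount.
    Returns a list of (waiting time, stress just before the event)."""
    x = x0
    out = []
    for _ in range(n_events):
        # Invert the integrated hazard
        # Lambda(s) = alpha e^{beta x} (e^{beta rho s} - 1) / (beta rho)
        # against a unit exponential to get the next waiting time.
        e = rng.expovariate(1.0)
        s = math.log(1.0 + beta * rho * e
                     / (alpha * math.exp(beta * x))) / (beta * rho)
        x += rho * s                              # linear build-up
        out.append((s, x))
        x -= rng.expovariate(1.0 / mean_drop)     # stress released at the event
    return out
```

Empirical moments of the risk alpha·exp(beta·x) along such a path can then be compared against the closed-form stationary expressions derived in the paper.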
We investigate the stability problem for a nonlinear autoregressive model with Markov switching. First we give conditions for the existence and the uniqueness of a stationary ergodic solution. The existence of moments of such a solution is then examined and we establish a strong law of large numbers for a wide class of unbounded functions, as well as a central limit theorem under an irreducibility condition.
We study discrete-time population models where the near future of an individual may depend on the individual's life-stage (age and reproduction history) and the current population size. A criterion is given for whether there is a positive probability that the population survives forever. We identify the cases in which the population size grows exponentially or linearly, and show that in the latter case the population size scaled by time is asymptotically Γ-distributed.
The paper concerns the asymptotic distributions of cluster functionals of extreme events in a dth-order stationary Markov chain {Xn, n = 1,2,…} for which the joint distribution of (X1,…,Xd+1) is absolutely continuous. Under some distributional assumptions for {Xn}, we establish weak convergence for a class of cluster functionals and obtain representations for the asymptotic distributions which are well suited for simulation. A number of examples important in applications are presented to demonstrate the usefulness of the results.
Consider the Delaunay graph and the Voronoi tessellation constructed with respect to a Poisson point process. The sequence of nuclei of the Voronoi cells that are crossed by a line defines a path on the Delaunay graph. We show that the evolution of this path is governed by a Markov chain. We study the ergodic properties of the chain and find its stationary distribution. As a corollary, we obtain the ratio of the mean path length to the Euclidean distance between the end points, and hence a bound for the mean asymptotic length of the shortest path.
We apply these results to define a family of simple incremental algorithms for constructing short paths on the Delaunay graph and discuss potential applications to routeing in mobile communication networks.
We study stochastic dynamic investment games in continuous time between two investors (players) who have available two different, but possibly correlated, investment opportunities. There is a single payoff function which depends on both investors’ wealth processes. One player chooses a dynamic portfolio strategy in order to maximize this expected payoff, while his opponent is simultaneously choosing a dynamic portfolio strategy so as to minimize the same quantity. This leads to a stochastic differential game with controlled drift and variance. For the most part, we consider games with payoffs that depend on the achievement of relative performance goals and/or shortfalls. We provide conditions under which a game with a general payoff function has an achievable value, and give an explicit representation for the value and resulting equilibrium portfolio strategies in that case. It is shown that non-perfect correlation is required to rule out trivial solutions. We then use this general result explicitly to solve a variety of specific games. For example, we solve a probability maximizing game, where each investor is trying to maximize the probability of beating the other's return by a given predetermined percentage. We also consider objectives related to the minimization or maximization of the expected time until one investor's return beats the other investor's return by a given percentage. Our results allow a new interpretation of the market price of risk in a Black-Scholes world. Games with discounting are also discussed, as are games of fixed duration related to utility maximization.
This work presents an estimate of the error on a cumulative reward function until the entrance time of a continuous-time Markov chain into a set, when the infinitesimal generator of this chain is perturbed. The derivation of an error bound constitutes the first part of the paper while the second part deals with an application where the time until saturation is considered for a circuit switched network which starts from an empty state and which is also subject to possible failures.
Consider a branching random walk in which each particle has a random number (one or more) of offspring particles that are displaced independently of each other according to a logconcave density. Under mild additional assumptions, we obtain the following results: the minimal position in the nth generation, adjusted by its α-quantile, converges weakly to a non-degenerate limiting distribution. There also exists a ‘conditional limit’ of the adjusted minimal position, which has a (Gumbel) extreme value distribution delayed by a random time-lag. Consequently, the unconditional limiting distribution is a mixture of extreme value distributions.
In 1991 Perkins [7] showed that the normalized critical binary branching process is a time inhomogeneous Fleming-Viot process. In the present paper we extend this result to jump-type branching processes and we show that the normalized jump-type branching processes are in a new class of probability measure-valued processes which will be called ‘jump-type Fleming-Viot processes’. Furthermore we also show that by using these processes it is possible to introduce another new class of measure-valued processes which are obtained by the combination of jump-type branching processes and Fleming-Viot processes.
In this paper we extend the notion of quasi-reversibility and apply it to the study of queueing networks with instantaneous movements and signals. The signals treated here are considerably more general than those in the existing literature. The approach not only provides a unified view for queueing networks with tractable stationary distributions, it also enables us to find several new classes of product form queueing networks, including networks with positive and negative signals that instantly add or remove customers from a sequence of nodes, networks with batch arrivals, batch services and assembly-transfer features, and models with concurrent batch additions and batch deletions along a fixed or a random route of the network.
A simple asymmetric random walk on the integers is stopped when its range is of a given length. When and where is it stopped? Analogous questions can be stated for a Brownian motion. Such problems are studied using results for the classical ruin problem, yielding results for the cover time and the range, both for asymmetric random walks and Brownian motion with drift.
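The stopping rule in this abstract is directly simulable: run the walk and track its running minimum and maximum until their difference first equals the target range. Since the range grows by unit steps, the walk necessarily sits at one of the two extremes when it stops. A minimal sketch (function name and return convention are my own):

```python
import random

def stop_at_range(p, r, rng):
    """Run a simple random walk (step +1 w.p. p, -1 w.p. 1-p) until its
    range (max - min of visited sites) first equals r.
    Returns (stopping time, final position, min visited, max visited)."""
    pos = 0
    lo = hi = 0
    t = 0
    while hi - lo < r:
        pos += 1 if rng.random() < p else -1
        lo = min(lo, pos)
        hi = max(hi, pos)
        t += 1
    return t, pos, lo, hi
```

Averaging the stopping time and position over many runs gives Monte Carlo estimates to set against the closed-form answers obtained from the classical ruin problem.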
We prove a new heavy traffic limit result for a simple queueing network under a ‘join the shorter queue’ policy, with the amount of traffic which has a routeing choice tending to zero as heavy traffic is approached. In this limit, the system considered does not exhibit state space collapse as in previous work by Foschini and Salz, and Reiman, but there is nevertheless some resource pooling gain over a policy of random routeing.
The estimation of critical values is one of the most interesting problems in the study of interacting particle systems. The bounds obtained analytically are not usually very tight and, therefore, computer simulation has proved very useful in estimating these values. In this paper we present a new method for the estimation of critical values in any interacting particle system with an absorbing state. The method, based on the asymptotic behaviour of the absorption time of the process, is very easy to implement and provides good estimates. It can also be applied to processes other than particle systems.
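The qualitative phenomenon the method exploits can be seen in a toy experiment: below the critical value the absorption time of a finite system stays moderate, while above it the system lingers in a quasi-stationary regime and the absorption time explodes with system size. The contact-like dynamic below is an illustrative toy model of my own, not the estimator or the processes studied in the paper:

```python
import random

def mean_absorption_time(lam, n_sites, n_runs, max_steps, rng):
    """Mean time for a discrete-time contact-like process on a ring of
    n_sites to reach the empty (absorbing) state.  Each step one site is
    picked uniformly; with prob lam/(1+lam) an infected site tries to
    infect a random neighbour, otherwise an infected site recovers.
    Times are capped at max_steps."""
    total = 0
    for _ in range(n_runs):
        state = [1] * n_sites            # start fully infected
        infected = n_sites
        t = 0
        while infected > 0 and t < max_steps:
            i = rng.randrange(n_sites)
            if rng.random() < lam / (1.0 + lam):
                if state[i]:             # infection attempt
                    j = (i + rng.choice((-1, 1))) % n_sites
                    if not state[j]:
                        state[j] = 1
                        infected += 1
            elif state[i]:               # recovery
                state[i] = 0
                infected -= 1
            t += 1
        total += t
    return total / n_runs
```

Scanning lam and watching where the mean absorption time changes from logarithmic-like to exponential-like growth in n_sites brackets the critical value, which is the behaviour the paper's estimator makes precise.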