We examine the main properties of the Markov chain X_t = T(X_{t−1}) + σ(X_{t−1})ε_t. Under general and tractable assumptions, we derive bounds for the tails of the stationary density of the process {X_t} in terms of the common density of the ε_t's.
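A minimal simulation sketch of such a chain, with illustrative choices that are ours rather than the paper's: contracting drift T(x) = x/2, constant volatility σ(x) ≡ 1 and standard normal ε_t, under which the chain is a Gaussian AR(1) with stationary law N(0, 4/3). The last line estimates a stationary tail probability empirically.

```python
import random

def simulate_chain(T, sigma, n_steps, burn_in=1000, seed=0):
    """Simulate X_t = T(X_{t-1}) + sigma(X_{t-1}) * eps_t with i.i.d.
    standard normal eps_t; return the path after a burn-in period."""
    rng = random.Random(seed)
    x = 0.0
    path = []
    for t in range(burn_in + n_steps):
        x = T(x) + sigma(x) * rng.gauss(0.0, 1.0)
        if t >= burn_in:
            path.append(x)
    return path

# Illustrative choices (not from the paper): T(x) = x/2 and sigma = 1,
# so the stationary law is N(0, 1/(1 - 1/4)) = N(0, 4/3).
path = simulate_chain(lambda x: 0.5 * x, lambda x: 1.0, 50_000)
tail = sum(1 for x in path if x > 3.0) / len(path)  # empirical P(X > 3)
```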
Let X_1, X_2, ···, X_n be a sequence of n random variables taking values in a ξ-letter alphabet. We consider the number N = N(n, k) of non-overlapping occurrences of a fixed k-letter word under (a) i.i.d. and (b) stationary Markovian hypotheses on the sequence, and use the Stein–Chen method to obtain Poisson approximations for the same. In each case, results and couplings from Barbour et al. (1992) are used to show that the total variation distance between the distribution of N and that of an appropriate Poisson random variable is of order (roughly) O(kS(k)), where S(k) denotes the stationary probability of the word in question. These results vastly improve on the approximations obtained in Godbole (1991). In the Markov case, we also make use of recently obtained eigenvalue bounds on convergence to stationarity due to Diaconis and Stroock (1991) and Fill (1991).
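A small experiment in the i.i.d. case, with a binary alphabet and a word of our own choosing, showing the non-overlapping count N whose Poisson approximation is at issue:

```python
import random

def count_nonoverlapping(seq, word):
    """Count non-overlapping occurrences of `word`, scanning left to
    right and restarting just past the end of each match."""
    count = i = 0
    k = len(word)
    while i + k <= len(seq):
        if seq[i:i + k] == word:
            count += 1
            i += k
        else:
            i += 1
    return count

# Toy setting (ours): i.i.d. uniform bits, word '101', n = 60 letters.
rng = random.Random(1)
word, n = "101", 60
counts = [
    count_nonoverlapping("".join(rng.choice("01") for _ in range(n)), word)
    for _ in range(20_000)
]
mean_count = sum(counts) / len(counts)
# Naive Poisson mean if the n - k + 1 positions matched independently
# (the actual mean is lower, since matches consume k letters):
lam = (n - len(word) + 1) / 2 ** len(word)
```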
Faddy (1990) has conjectured that the variability of a pure birth process is increased, relative to the linear case, if the birth rates are convex and decreased if they are concave. We prove the conjecture by relating variability to the correlation structure of certain more informative versions of the process. A correlation inequality due to Harris (1977) is used to derive the necessary positive and negative correlation results.
Let {B_t} be the Brownian motion process starting at the origin, {X_t} its primitive, and U_t = (X_t + x + ty, B_t + y), t ≥ 0, the associated bidimensional process starting from a point (x, y). In this paper we present an elementary procedure for re-deriving the formula of Lefebvre (1989) giving the Laplace–Fourier transform of the distribution of the couple (σ_a, U_{σ_a}), as well as Lachal's (1991) formulae giving the explicit Laplace–Fourier transform of the law of the couple (σ_{ab}, U_{σ_{ab}}), where σ_a and σ_{ab} denote respectively the first hitting time of the level a from the right and the first hitting time of the double-sided barrier {a, b} by the first component of the process. This method, which unifies and considerably simplifies the proofs of these results, is in fact a ‘vectorial' extension of the classical technique of Darling and Siegert (1953). It rests on an essential observation (Lachal (1992)) of the Markovian character of the bidimensional process {U_t}.
Using the same procedure, we subsequently determine the Laplace–Fourier transform of the joint law of the quadruplet (σ_a, U_{σ_a}, σ_b, U_{σ_b}).
Ball and Donnelly (1987) announced a result giving circumstances in which there is positive or negative correlation between the death times in a non-linear, Markovian death process. A proof is provided here, based on results concerning the distribution of optional random variables in terms of their conditional intensities.
The Type I and Type II counter models of Pyke (1958) have many applications in applied probability: in reliability, queueing and inventory models, for example. In this paper, we study the case in which the interarrival time distribution is of phase type. For the two counter models, we derive the renewal functions of the related renewal processes and propose approaches for computing them.
This paper considers several models for biological processes in which animate individuals live and die as members of groups which can split to form smaller groups. Resulting distributions of individuals over groups are compared and contrasted. In particular, two qualitatively different types of distributions are identified. It is clear that distinguishing between models giving rise to the same distribution types is difficult. Implications for more complex models are discussed and avenues for further research are outlined.
Likelihood ratios are used in computer simulation to estimate expectations with respect to one law from simulation of another. This importance sampling technique can be implemented with either the likelihood ratio at the end of the simulated time horizon or with a sequence of likelihood ratios at intermediate times. Since a likelihood ratio process is a martingale, the intermediate values are conditional expectations of the final value and their use introduces no bias.
We provide conditions under which using conditional expectations in this way brings guaranteed variance reduction. We use stochastic orderings to get positive dependence between a process and its likelihood ratio, from which variance reduction follows. Our analysis supports the following rough statement: for increasing functionals of associated processes with monotone likelihood ratio, conditioning helps. Examples are drawn from recursively defined processes, Markov chains in discrete and continuous time, and processes with Poisson input.
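A sketch of the basic terminal-time likelihood-ratio estimator that this analysis starts from (not the conditional-expectation refinement): estimating P(Z > a) for standard normal Z by sampling from the shifted law N(a, 1). The level a and sample size are our own toy choices.

```python
import math
import random

# Estimate P(Z > a), Z ~ N(0, 1), by importance sampling from Q = N(a, 1).
# The likelihood ratio is dP/dQ(z) = phi(z)/phi(z - a) = exp(-a z + a^2/2).
rng = random.Random(2)
n, a = 100_000, 3.0
est = 0.0
for _ in range(n):
    z = rng.gauss(a, 1.0)                      # draw under the tilted law Q
    if z > a:                                  # indicator of the rare event
        est += math.exp(-a * z + a * a / 2.0)  # reweight by dP/dQ
est /= n
# Crude Monte Carlo would need on the order of 1/P(Z > 3) ~ 740 samples
# per hit; the reweighted estimator concentrates tightly around P(Z > 3).
```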
Given a parametric family of regenerative processes on a common probability space, we investigate when the derivatives (with respect to the parameter) are regenerative. We primarily consider sequences satisfying explicit, Lipschitz recursions, such as the waiting times in many queueing systems, and show that derivatives regenerate together with the original sequence under reasonable monotonicity or continuity assumptions. The inputs to our recursions are i.i.d. or, more generally, governed by a Harris-ergodic Markov chain. For i.i.d. input we identify explicit regeneration points; otherwise, we use coupling arguments. We give conditions for the expected steady-state derivative to be the derivative of the steady-state mean of the original sequence. Under these conditions, the derivative of the steady-state mean has a cycle-formula representation.
In a recent paper, van Doorn (1991) explained how quasi-stationary distributions for an absorbing birth-death process could be determined from the transition rates of the process, thus generalizing earlier work of Cavender (1978). In this paper we shall show that many of van Doorn's results can be extended to deal with an arbitrary continuous-time Markov chain over a countable state space, consisting of an irreducible class, C, and an absorbing state, 0, which is accessible from C. Some of our results are extensions of theorems proved for honest chains in Pollett and Vere-Jones (1992).
In Section 3 we prove that a probability distribution on C is a quasi-stationary distribution if and only if it is a µ-invariant measure for the transition function, P. We shall also show that if m is a quasi-stationary distribution for P, then a necessary and sufficient condition for m to be µ-invariant for Q is that P satisfies the Kolmogorov forward equations over C. When the remaining forward equations hold, the quasi-stationary distribution must satisfy a set of ‘residual equations' involving the transition rates into the absorbing state. The residual equations allow us to determine the value of µ for which the quasi-stationary distribution is µ-invariant for P. We also prove some more general results giving bounds on the values of µ for which a convergent measure can be a µ-subinvariant and then µ-invariant measure for P. The remainder of the paper is devoted to the question of when a convergent µ-subinvariant measure, m, for Q is a quasi-stationary distribution. Section 4 establishes a necessary and sufficient condition for m to be a quasi-stationary distribution for the minimal chain. In Section 5 we consider ‘single-exit' chains. We derive a necessary and sufficient condition for there to exist a process for which m is a quasi-stationary distribution. Under this condition all such processes can be specified explicitly through their resolvents. The results proved here allow us to conclude that the bounds for µ obtained in Section 3 are, in fact, tight. Finally, in Section 6, we illustrate our results by way of two examples: regular birth-death processes and a pure-birth process with absorption.
We prove Lévy's theorem concerning positiveness of transition probabilities of Markov processes when the state space is countable and an invariant probability distribution exists. Our approach relies on the representation of transition probabilities in terms of the directed circuits that occur along the sample paths.
A new approach to the problem of classification of (deflected) random walks in ℤ_+^N, or Markovian models for queueing networks with identical customers, is introduced. It is based on the analysis of the intrinsic dynamical system associated with the random walk. Earlier results for small dimensions are presented from this novel point of view. We give proofs of new results for higher dimensions related to the existence of a continuous invariant measure for the underlying dynamical system. Two constants are shown to be important: the free energy M < 0 corresponds to ergodicity, and the Lyapounov exponent L < 0 defines recurrence. General conjectures, examples, unsolved problems and surprising connections with ergodic theory, classical dynamical systems and their random perturbations are presented at length. A useful notion arises naturally: the so-called scaled random perturbation of a dynamical system.
This paper considers a model for the spread of an epidemic in a closed, homogeneously mixing population in which new infections occur at rate βxy/(x + y), where x and y are the numbers of susceptible and infectious individuals, respectively, and β is an infection parameter. This contrasts with the standard general epidemic in which new infections occur at rate βxy. Both the deterministic and stochastic versions of the modified epidemic are analysed. The deterministic model is completely soluble. The time-dependent solution of the stochastic model is derived and the total size distribution is considered. Threshold theorems, analogous to those of Whittle (1955) and Williams (1971) for the general stochastic epidemic, are proved for the stochastic model. Comparisons are made between the modified and general epidemics. The effect of introducing variability in susceptibility into the modified epidemic is studied.
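A sketch of the stochastic version, simulating only the embedded jump chain (which suffices for the total size): infections at rate βxy/(x + y), removals at rate γy. The parameter values are our illustrative assumptions, not the paper's.

```python
import random

def modified_epidemic(x, y, beta, gamma, seed=0):
    """One run of the modified stochastic epidemic: from state (x, y),
    the next event is an infection (rate beta*x*y/(x+y)) or a removal
    (rate gamma*y); return the number of susceptibles ever infected."""
    rng = random.Random(seed)
    total_infected = 0
    while y > 0:
        inf_rate = beta * x * y / (x + y) if x > 0 else 0.0
        rem_rate = gamma * y
        if rng.random() < inf_rate / (inf_rate + rem_rate):
            x, y, total_infected = x - 1, y + 1, total_infected + 1
        else:
            y -= 1
    return total_infected

# Illustrative parameters (ours): 100 susceptibles, 1 initial infective,
# beta/gamma = 2, i.e. above the threshold for a major outbreak.
sizes = [modified_epidemic(100, 1, beta=2.0, gamma=1.0, seed=s)
         for s in range(2000)]
mean_size = sum(sizes) / len(sizes)
```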
A direct derivation is given of a formula for the normalized asymptotic variance parameters of the boundary local times of reflected Brownian motion (with drift) on a compact interval. This formula was previously obtained by Berger and Whitt using an M/M/1/C queue approximation to the reflected Brownian motion. The bivariate Laplace transform of the hitting time of a level and the boundary local time up to that hitting time, for a one-dimensional reflected Brownian motion with drift, is obtained as part of the derivation.
We study a class of simulated annealing algorithms for global minimization of a continuous function defined on a subset of ℝ^d. We consider the case where the selection Markov kernel is absolutely continuous and has a density which is uniformly bounded away from 0. This class includes certain simulated annealing algorithms recently introduced by various authors. We show that, under mild conditions, the sequence of states generated by these algorithms converges in probability to the global minimum of the function. Unlike most previous studies, where the cooling schedule is deterministic, our cooling schedule is allowed to be adaptive. We also address the issue of almost sure convergence versus convergence in probability.
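A minimal sketch of the general recipe: a Gaussian selection kernel, Metropolis-type acceptance, and a deterministic logarithmic cooling schedule (simpler than the adaptive schedules the paper allows). The double-well objective and all constants are our own toy choices.

```python
import math
import random

def anneal(f, x0, n_iters=20_000, t0=1.0, seed=4):
    """Simulated annealing for minimising f on [-5, 5]: Gaussian
    proposal (selection) kernel, Metropolis acceptance, and a
    deterministic cooling schedule t_k = t0 / log(k + 2)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(n_iters):
        t = t0 / math.log(k + 2)
        y = min(5.0, max(-5.0, x + rng.gauss(0.0, 0.5)))  # stay in domain
        fy = f(y)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# Toy objective (ours): a double well with global minimum at x = 1 and
# a higher local minimum near x = -1; we deliberately start at x = -1.
f = lambda x: (x * x - 1.0) ** 2 + 0.3 * (x - 1.0) ** 2
best, fbest = anneal(f, x0=-1.0)
```

Tracking the best state seen, rather than returning the final state, is a common practical safeguard when the cooling schedule is not tuned.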
Stability in population size is illusory: populations left to themselves either grow beyond all bounds or die out. But if they do not die out their composition stabilizes. These problems are discussed in terms of general abstract, multitype branching processes. The life and descent of a typical individual is described.
In this paper, we consider kth-order two-state Markov chains {X_i} with stationary transition probabilities. For k = 1, we construct in detail an upper bound for the total variation d(S_n, Y) = Σ_x |P(S_n = x) − P(Y = x)|, where S_n = X_1 + ··· + X_n and Y is a compound Poisson random variable. We also show that, under certain conditions, d(S_n, Y) converges to 0 as n tends to ∞. For k = 2, the corresponding results are given without derivation. For general k ≥ 3, a conjecture is proposed.
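A small experiment for the k = 1 case, comparing the empirical law of S_n with a mean-matched ordinary Poisson (the simpler benchmark that a compound Poisson Y improves on). The transition probabilities and n are our toy choices.

```python
import math
import random
from collections import Counter

def dist_to_poisson(p01, p11, n, reps=20_000, seed=5):
    """Simulate S_n = X_1 + ... + X_n for a two-state chain with
    P(X_{t+1}=1 | X_t=0) = p01 and P(X_{t+1}=1 | X_t=1) = p11, and
    return d(S_n, Y) = sum_x |P(S_n = x) - P(Y = x)| with Y Poisson
    of the same (empirical) mean."""
    rng = random.Random(seed)
    counts = Counter()
    total = 0
    for _ in range(reps):
        x = s = 0
        for _ in range(n):
            x = 1 if rng.random() < (p11 if x else p01) else 0
            s += x
        counts[s] += 1
        total += s
    lam = total / reps
    d = 0.0
    pois = math.exp(-lam)        # P(Y = 0); update recursively to avoid
    for k in range(n + 1):       # overflow in lam**k / k!
        d += abs(counts[k] / reps - pois)
        pois *= lam / (k + 1)
    return d

# Positive serial dependence (p11 > p01) makes S_n overdispersed, so a
# plain Poisson fits poorly -- the motivation for a compound Poisson Y.
d = dist_to_poisson(0.02, 0.3, n=200)
```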
It is well known that most commonly used discrete distributions fail to belong to the domain of maximal attraction for any extreme value distribution. Despite this negative finding, C. W. Anderson showed that for a class of discrete distributions including the negative binomial class, it is possible to asymptotically bound the distribution of the maximum. In this paper we extend Anderson's result to discrete-valued processes satisfying the usual mixing conditions for extreme value results for dependent stationary processes. We apply our result to obtain bounds for the distribution of the maximum based on negative binomial autoregressive processes introduced by E. McKenzie and Al-Osh and Alzaid. A simulation study illustrates the bounds for small sample sizes.
This paper deals with a class of discrete-time Markov chains for which the invariant measures can be expressed in terms of generalized continued fractions. The representation covers a wide class of stochastic models and is well suited for numerical applications. The results obtained can easily be extended to continuous-time Markov chains.
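In the simplest instance, a discrete-time birth–death chain, the continued-fraction representation collapses to the familiar detailed-balance product formula. A sketch, with chain parameters of our own choosing:

```python
def bd_invariant(p, q, n):
    """Invariant distribution of a discrete-time birth-death chain on
    {0, ..., n} with up-probabilities p[i], down-probabilities q[i]
    (and holding probability 1 - p[i] - q[i]).  Detailed balance gives
    pi[i+1] = pi[i] * p[i] / q[i+1]; normalise at the end."""
    pi = [1.0]
    for i in range(n):
        pi.append(pi[-1] * p[i] / q[i + 1])
    s = sum(pi)
    return [v / s for v in pi]

# Example (ours): a lazy reflected walk on {0, ..., 5} stepping up with
# probability 0.3 and down with probability 0.5 in the interior.
n = 5
p = [0.3] * n + [0.0]   # p[n] = 0: no up-step from the right endpoint
q = [0.0] + [0.5] * n   # q[0] = 0: no down-step from 0
pi = bd_invariant(p, q, n)
```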
We study conditions for the existence of non-trivial quasi-stationary distributions for the birth-and-death chain with 0 as absorbing state. We reduce our problem to one of continued fractions, which can be solved using extensions of classical results of that theory. We also prove that there exist normalized quasi-stationary distributions if and only if 0 is geometrically absorbing.