Explicit formulas are found for the payoff and the optimal stopping strategy of the optimal stopping problem sup_τ E(max_{0≤t≤τ} X_t − cτ), where X = (X_t)_{t≥0} is geometric Brownian motion with drift μ and volatility σ > 0, and the supremum is taken over all stopping times for X. The payoff is shown to be finite if and only if μ < 0. The optimal stopping time is given by τ* = inf{t > 0 : X_t = g*(max_{0≤s≤t} X_s)}, where s ↦ g*(s) is the maximal solution of the (nonlinear) differential equation g′(s) = K g(s)^{Δ+1} / (s^Δ − g(s)^Δ) under the condition 0 < g(s) < s, where Δ = 1 − 2μ/σ² and K = Δσ²/2c. The estimate g*(s) ~ ((Δ − 1)/(KΔ))^{1/Δ} s^{1−1/Δ} as s → ∞ is established. Applying these results we prove a maximal inequality for E(max_{0≤t≤τ} X_t), where τ may be any stopping time for X; it extends the well-known identity E(sup_{t>0} X_t) = 1 − σ²/(2μ) and is shown to be sharp. The method of proof relies upon a smooth-pasting guess (for the Stefan problem with moving boundary) and the Itô–Tanaka formula (applied two-dimensionally). The key point and main novelty of our approach is the maximality principle for the moving boundary: the optimal stopping boundary is the maximal solution of the differential equation obtained from the smooth-pasting guess. We believe this principle is of theoretical and practical interest in its own right.
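As a quick numerical illustration (ours, not the paper's), the boundary equation above can be integrated with a standard Runge–Kutta scheme, starting far out on the stated asymptote and marching down in s; the parameter values below are arbitrary choices with μ < 0.

```python
# Illustrative only: integrate g'(s) = K g^(Delta+1) / (s^Delta - g^Delta)
# backwards from a point placed on the asymptote, then compare endpoints.
import numpy as np

mu, sigma, c = -0.5, 1.0, 1.0            # assumed parameters with mu < 0
Delta = 1.0 - 2.0 * mu / sigma**2        # here Delta = 2
K = Delta * sigma**2 / (2.0 * c)         # here K = 1

def g_asym(s):                           # stated asymptotic form of g*
    return ((Delta - 1.0) / (K * Delta)) ** (1.0 / Delta) * s ** (1.0 - 1.0 / Delta)

def rhs(s, g):
    return K * g ** (Delta + 1.0) / (s ** Delta - g ** Delta)

s, g, h = 1.0e4, g_asym(1.0e4), -0.05    # classical RK4, marching down in s
while s + h > 1.0:
    k1 = rhs(s, g)
    k2 = rhs(s + h / 2, g + h * k1 / 2)
    k3 = rhs(s + h / 2, g + h * k2 / 2)
    k4 = rhs(s + h, g + h * k3)
    g += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    s += h

print(f"integrated g({s:.2f}) = {g:.4f}; asymptotic formula gives {g_asym(s):.4f}")
```

For these values (Δ = 2, K = 1) the asymptote reads g*(s) ~ sqrt(s/2), and the backward integration stays close to it over the whole range, consistent with the stated estimate.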
Olivier and Walrand (1994) claimed that the departure process of an MMPP/M/1 queue is not an MAP unless the queue is a stationary M/M/1 queue. They also conjectured that the departure process of an MAP/PH/1 queue is not an MAP unless the queue is a stationary M/M/1 queue. We show that their proof of the first result contains an algebraic error, which leaves open the question of whether the departure process of an MMPP/M/1 queue can be an MAP.
The paper considers stability and instability properties of the Markov chain generated by the composition of an i.i.d. sequence of random transformations. The transformations are assumed to be either linear mappings or else mappings which can be well approximated near 0 by linear mappings. The main results concern the risk probabilities that the Markov chain enters or exits certain balls centered at 0. An application is given to the probability of extinction in a model from population dynamics.
This paper considers a branching process generated by an offspring distribution F with mean m < ∞ and variance σ² < ∞ and such that, at each generation n, there is an observed δ-migration, according to a binomial law B_{p,ν_n∗N_n^{bef}} which depends on the total population size N_n^{bef}. The δ-migration is defined as an emigration, an immigration or a null migration, depending on the value of δ, which is assumed constant throughout the different generations. The process with δ-migration is a generation-dependent Galton-Watson process, whereas the observed process is not in general a martingale. Under the assumption that the process with δ-migration is supercritical, we generalize for the observed migrating process the results of the supercritical Galton-Watson case that concern the asymptotic behaviour of the process and the estimation of m and σ², as n → ∞. Moreover, an asymptotic confidence interval for the initial population size is given.
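For orientation, here is a hedged simulation sketch of the plain supercritical case that these results generalize; the δ-migration mechanism is omitted, and the offspring law and all constants are illustrative choices of ours.

```python
# Illustrative only: plain supercritical Galton-Watson process, no migration,
# with the classical ratio estimator of the offspring mean m.
import numpy as np

rng = np.random.default_rng(1)
p = np.array([0.2, 0.3, 0.5])            # assumed offspring law on {0, 1, 2}
m_true = float(p @ np.arange(3))         # m = 1.3 > 1 (supercritical)

N = [10]                                 # N_0 = 10 ancestors
for _ in range(25):
    if N[-1] > 0:
        counts = rng.multinomial(N[-1], p)   # how many parents had 0, 1, 2 children
        N.append(int(counts @ np.arange(3)))
    else:
        N.append(0)

N = np.array(N, dtype=float)
m_hat = N[1:].sum() / N[:-1].sum()       # ratio estimator of m
print(f"true m = {m_true:.2f}, estimated m = {m_hat:.3f}")
```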
In this short communication, some of the recent results of Liu (1998) and Biggins and Kyprianou (1997), concerning solutions to a certain functional equation associated with the branching random walk, are strengthened. Their importance is emphasized in the context of travelling wave solutions to a discrete version of the KPP equation and the connection with the behaviour of the rightmost particle in the nth generation.
A continuous-time Markov chain on the non-negative integers is called skip-free to the right (left) if only unit increments to the right (left) are permitted. If a Markov chain is skip-free both to the right and to the left, it is called a birth–death process. Karlin and McGregor (1959) showed that if a continuous-time Markov chain is monotone in the sense of likelihood ratio ordering then it must be an (extended) birth–death process. This paper proves that if an irreducible Markov chain in continuous time is monotone in the sense of hazard rate (reversed hazard rate) ordering then it must be skip-free to the right (left). A birth–death process is then characterized as a continuous-time Markov chain that is monotone in the sense of both hazard rate and reversed hazard rate orderings. As an application, the first-passage-time distributions of such Markov chains are also studied.
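As a concrete illustration of the first-passage-time computation for a birth-death process (a generic phase-type sketch of ours, not the paper's method): make level n absorbing and exponentiate the restricted generator; all rates below are assumed values.

```python
# Illustrative only: P(T_{0->n} <= t) = 1 - e_0' exp(t Q) 1, where Q is the
# generator restricted to states {0, ..., n-1} with state n made absorbing.
import numpy as np
from scipy.linalg import expm

n = 5
birth = np.full(n, 1.0)                  # assumed birth rates lambda_i
death = np.full(n, 0.5)                  # assumed death rates mu_i (mu_0 unused)

Q = np.zeros((n, n))
for i in range(n):
    if i > 0:
        Q[i, i - 1] = death[i]
    if i + 1 < n:
        Q[i, i + 1] = birth[i]
    Q[i, i] = -(birth[i] + (death[i] if i > 0 else 0.0))

for t in (1.0, 5.0, 20.0):
    cdf = 1.0 - expm(t * Q)[0].sum()     # mass not yet absorbed, from state 0
    print(f"P(T_0->{n} <= {t:g}) = {cdf:.4f}")
```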
We study a point process model with stochastic intensities for a particular branching population of individuals of two types. Type-I individuals immigrate into the population at the times of a Poisson process. During their lives they generate type-II individuals according to a random age dependent birth rate, which themselves may multiply and die. Living type-II descendants increase the death intensity of their type-I ancestor, and conversely, the multiplication and dying intensities of type-II individuals may depend on the life situation of their type-I ancestor. We show that the probability generating function of the marginal distribution of a type-I individual's life process, conditioned on its individual infection and death risk, satisfies an initial value problem of a partial differential equation, and derive its solution. This allows for the determination of additional distributions of observable random variables as well as for describing the complete population process.
We derive formulas for the first- and higher-order derivatives of the steady-state performance measures of irreducible and aperiodic Markov chains with respect to changes in the transition matrix. Using these formulas, we obtain a Maclaurin series for the performance measures of such Markov chains. The convergence range of the Maclaurin series can be determined. We show that the derivatives and the coefficients of the Maclaurin series can be easily estimated by analysing a single sample path of the Markov chain, and algorithms for estimating these quantities are provided. Markov chains with transient states and multiple recurrent classes are also studied. The results can be easily extended to Markov processes. The derivation of the results is closely related to some fundamental concepts, such as group inverses, potentials, and realization factors in perturbation analysis. Simulation results are provided to illustrate the accuracy of the single-sample-path-based estimation. Possible applications to engineering problems are discussed.
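A sketch of the first-order case, under the standard perturbation set-up P(θ) = P + θQ with Q having zero row sums; the matrices and the fundamental-matrix route below are our illustrative choices, not necessarily the paper's algorithm. Differentiating πP(θ) = π gives π′(0) = πQZ with Z = (I − P + 1π)^{-1}.

```python
# Illustrative only: analytic first derivative of the stationary distribution
# versus a finite-difference check.
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
Q = np.array([[-0.1, 0.1, 0.0],
              [ 0.0, 0.0, 0.0],
              [ 0.1, 0.0, -0.1]])       # perturbation direction, zero row sums

def stationary(P):
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

pi = stationary(P)
Z = np.linalg.inv(np.eye(3) - P + np.outer(np.ones(3), pi))   # fundamental matrix
deriv = pi @ Q @ Z

eps = 1e-6
fd = (stationary(P + eps * Q) - stationary(P - eps * Q)) / (2 * eps)
print("analytic  :", np.round(deriv, 6))
print("finite-dif:", np.round(fd, 6))
```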
The immigration processes associated with a given branching particle system are formulated by skew convolution semigroups. It is shown that every skew convolution semigroup corresponds uniquely to a locally integrable entrance law for the branching particle system. The immigration particle system may be constructed using a Poisson random measure based on a Markovian measure determined by the entrance law. In the special case where the underlying process is a minimal Brownian motion in a bounded domain, a general representation is given for locally integrable entrance laws for the branching particle system. The convergence of immigration particle systems to measure-valued immigration processes is also studied.
Let Mn denote the size of the largest of the first n generations of a simple branching process. It is shown for near-critical processes with finite offspring variance that the law of Mn/n, conditioned on no extinction at or before generation n, has a non-defective weak limit. The proof uses a conditioned functional limit theorem deriving from the Feller-Lindvall continuous-state branching (CB) diffusion limit for branching processes descended from increasing numbers of ancestors. Subsidiary results are given about hitting-time laws for CB diffusions and Bessel processes.
Previous work in extending Wald's equations to Markov random walks involves finiteness of moment generating functions and uniform recurrence assumptions. By using a new approach, we can remove these assumptions. The results are applied to establish finiteness of moments of ladder variables and to derive asymptotic expansions for expected first passage times of Markov random walks. Wiener–Hopf factorizations for Markov random walks are also applied to analyse ladder variables and related first passage problems.
We introduce a new class of interacting particle systems on a graph G. Suppose initially there are Ni(0) particles at each vertex i of G, and that the particles interact to form a Markov chain: at each instant two particles are chosen at random, and if these are at adjacent vertices of G, one particle jumps to the other particle's vertex, each with probability 1/2. The process N enters a death state after a finite time when all the particles are in some independent subset of the vertices of G, i.e. a set of vertices with no edges between any two of them. The problem is to find the distribution of the death state, ηi = Ni(∞), as a function of Ni(0).
We are able to obtain, for some special graphs, the limiting distribution of Ni when the total number of particles S → ∞ in such a way that the fraction ξi = Ni(0)/S at each vertex is held fixed. In particular we can obtain the limit law for the graph S2, the two-leaf star, which has three vertices and two edges.
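The dynamics are easy to simulate; the following sketch (ours, with arbitrary initial counts) estimates one feature of the death-state distribution on the two-leaf star S2, with vertex 0 as the centre joined to leaves 1 and 2.

```python
# Illustrative only: particles on the two-leaf star S2 (centre 0, leaves 1, 2).
import random

EDGES = {(0, 1), (1, 0), (0, 2), (2, 0)}

def run(n0, n1, n2, rng):
    pos = [0] * n0 + [1] * n1 + [2] * n2           # vertex of each particle
    while any((u, v) in EDGES for u in set(pos) for v in set(pos)):
        i, j = rng.sample(range(len(pos)), 2)      # pick two particles at random
        if (pos[i], pos[j]) in EDGES:              # adjacent: one jumps, prob 1/2 each
            if rng.random() < 0.5:
                pos[i] = pos[j]
            else:
                pos[j] = pos[i]
    return pos                                     # occupied set is now independent

rng = random.Random(0)
runs = [run(10, 5, 5, rng) for _ in range(1000)]
p_centre = sum(r.count(0) == len(r) for r in runs) / len(runs)
print(f"P(death state has every particle at the centre) ≈ {p_centre:.3f}")
```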
A spectrum of self-organizing rules, including the move-to-front rule and the transposition rule, is applied to the communication problem. The stationary distributions under these rules are obtained, and the costs they incur are compared. In the special case of three paths, it is shown that the transposition rule always outperforms the move-to-front rule.
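For intuition, a hedged simulation in the classical list-access setting, used here only as a stand-in for the communication model; the request probabilities and step count are our choices, and the paper's cost structure may differ.

```python
# Illustrative only: long-run average access cost, move-to-front vs transposition.
import random

def simulate(rule, probs, steps=200_000, seed=1):
    rng = random.Random(seed)
    items = list(range(len(probs)))
    cost = 0
    for _ in range(steps):
        x = rng.choices(range(len(probs)), weights=probs)[0]
        i = items.index(x)
        cost += i + 1                                # cost = position of requested item
        if rule == "mtf":
            items.insert(0, items.pop(i))            # move requested item to front
        elif rule == "transpose" and i > 0:
            items[i - 1], items[i] = items[i], items[i - 1]
    return cost / steps

probs = [0.6, 0.3, 0.1]
print("move-to-front :", simulate("mtf", probs))
print("transposition :", simulate("transpose", probs))
```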
The time until extinction for the closed SIS stochastic logistic epidemic model is investigated. We derive the asymptotic distribution for the extinction time as the population grows to infinity, under different initial conditions and for different values of the infection rate.
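A minimal Gillespie-style sketch of the model (our parametrization: per-capita recovery rate 1, infection rate λI(N − I)/N; all values are illustrative and kept modest so extinction occurs quickly):

```python
# Illustrative only: closed SIS (stochastic logistic) epidemic, time to extinction.
import random

def extinction_time(N, I0, lam, seed):
    rng = random.Random(seed)
    t, I = 0.0, I0
    while I > 0:
        up = lam * I * (N - I) / N       # rate of a new infection
        down = float(I)                  # rate of a recovery
        t += rng.expovariate(up + down)
        I += 1 if rng.random() < up / (up + down) else -1
    return t

N, lam = 50, 1.3                         # just above the threshold lam = 1
times = [extinction_time(N, N // 2, lam, seed) for seed in range(200)]
print(f"mean extinction time ≈ {sum(times) / len(times):.1f}")
```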
In this paper we consider a Galton-Watson process in which particles move according to a positive recurrent Markov chain on a general state space. We prove a law of large numbers for the empirical position distribution and also discuss the rate of this convergence.
A class of Markov processes in continuous time, with local transition rules, acting on colourings of a lattice, is defined. An algorithm is described for the dynamic simulation of such processes. The computation time for the next state is O(log b), where b is the number of possible next states. This technique is used to give some evidence that the limiting shape for a random growth process in the plane with exponentially distributed waiting times is approximately a circle.
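The O(log b) step can be realized with a binary sum tree over the candidate transition rates; the following generic sketch is our reconstruction of that ingredient, not the paper's code, and supports both sampling and rate updates in O(log b).

```python
# Illustrative only: binary sum tree over b candidate rates.
import random

class RateTree:
    def __init__(self, rates):
        self.n = 1
        while self.n < len(rates):
            self.n *= 2
        self.t = [0.0] * (2 * self.n)
        for i, r in enumerate(rates):
            self.t[self.n + i] = r
        for i in range(self.n - 1, 0, -1):
            self.t[i] = self.t[2 * i] + self.t[2 * i + 1]

    def update(self, i, r):              # O(log b): reset the rate of state i
        i += self.n
        self.t[i] = r
        while i > 1:
            i //= 2
            self.t[i] = self.t[2 * i] + self.t[2 * i + 1]

    def sample(self, rng):               # O(log b): state with prob proportional to rate
        x = rng.random() * self.t[1]
        i = 1
        while i < self.n:                # descend, keeping x inside the chosen subtree
            i *= 2
            if x >= self.t[i]:
                x -= self.t[i]
                i += 1
        return i - self.n

rng = random.Random(0)
tree = RateTree([0.5, 1.5, 2.0, 1.0])
counts = [0, 0, 0, 0]
for _ in range(8000):
    counts[tree.sample(rng)] += 1
print(counts)                            # roughly proportional to 0.5 : 1.5 : 2.0 : 1.0
tree.update(0, 6.0)                      # after a transition, rates may change locally
counts = [0, 0, 0, 0]
for _ in range(8000):
    counts[tree.sample(rng)] += 1
print(counts)                            # now roughly proportional to 6.0 : 1.5 : 2.0 : 1.0
```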
Generalizing the classical Banach matchbox problem, we consider the process of removing two types of ‘items’ from a ‘pile’, with selection probabilities for the type of the next item to be removed depending on the current numbers of remaining items, and thus changing sequentially. Under various conditions on the probability p_{n1,n2} that the next removal will take away an item of type I, given that n1 and n2 are the current numbers of items of the two types, we derive asymptotic formulas (as the initial pile size tends to infinity) for the probability that the items of type I are completely removed first and for the number of items left. In some special cases we also obtain explicit results.
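The scheme is straightforward to simulate for any given p_{n1,n2}; in the sketch below (ours) the selection probability is taken proportional to the remaining counts, purely as an illustrative choice.

```python
# Illustrative only: sequential removal with count-dependent selection.
import random

def run(n1, n2, p, rng):
    while n1 > 0 and n2 > 0:
        if rng.random() < p(n1, n2):
            n1 -= 1
        else:
            n2 -= 1
    return n1 == 0, n1 + n2              # (type I exhausted first?, items left)

rng = random.Random(42)
p = lambda n1, n2: n1 / (n1 + n2)        # one possible choice of p_{n1,n2}
results = [run(200, 200, p, rng) for _ in range(2000)]
prob_I_first = sum(first for first, _ in results) / len(results)
mean_left = sum(left for _, left in results) / len(results)
print(f"P(type I removed first) ≈ {prob_I_first:.3f}, mean items left ≈ {mean_left:.1f}")
```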
In this paper we consider a Galton-Watson process whose particles move according to a Markov chain with discrete state space. The Markov chain is assumed to be positive recurrent. We prove a law of large numbers for the empirical position distribution and also discuss the large deviation aspects of this convergence.
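A hedged illustration of the law of large numbers in the discrete setting (our toy set-up: deterministic binary branching and a three-state ergodic chain); the empirical position distribution of the particles should be close to the chain's stationary law.

```python
# Illustrative only: branching particles moving by an ergodic chain on {0, 1, 2}.
import numpy as np

rng = np.random.default_rng(3)
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

positions = np.array([0])
for _ in range(12):
    children = np.repeat(positions, 2)                     # each particle splits in two
    positions = np.array([rng.choice(3, p=P[x]) for x in children])

emp = np.bincount(positions, minlength=3) / len(positions)
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()
print("empirical :", np.round(emp, 3))
print("stationary:", np.round(pi, 3))                      # here (0.25, 0.5, 0.25)
```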
The x log x condition is a fundamental criterion for the rate of growth of a general branching process, being equivalent to non-degeneracy of the limiting random variable. In this paper we adopt the ideas of Lyons, Pemantle and Peres (1995) to present a new proof of this well-known theorem. The idea is to compare the ordinary branching measure on the space of population trees with another measure, the size-biased measure.
Let P be the transition matrix of a positive recurrent Markov chain on the integers, with invariant distribution π. If (n)P denotes the n × n ‘northwest truncation’ of P, it is known that approximations to π(j)/π(0) can be constructed from (n)P, but these are known to converge to the probability distribution itself in special cases only. We show that such convergence always occurs for three further general classes of chains: geometrically ergodic chains, stochastically monotone chains, and those dominated by stochastically monotone chains. We show that all ‘finite’ perturbations of stochastically monotone chains can be considered to be dominated by such chains, and thus the results hold for a much wider class than is at first apparent. In the cases of uniformly ergodic chains and chains dominated by irreducible stochastically monotone chains, we find practical bounds on the accuracy of the approximations.
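A small numerical illustration (ours; the augmentation that returns lost mass to the diagonal is one standard choice, not necessarily the paper's): a stochastically monotone chain that jumps up two with probability 0.2 and down one with probability 0.8 (reflected at 0), with the ratio π(1)/π(0) computed from augmented northwest truncations of growing size.

```python
# Illustrative only: invariant-measure ratios from augmented truncations.
import numpy as np

def trunc_P(n, up=0.2, down=0.8):
    P = np.zeros((n, n))
    for i in range(n):
        if i + 2 < n:
            P[i, i + 2] = up
        P[i, max(i - 1, 0)] += down
        P[i, i] += 1.0 - P[i].sum()      # augmentation: lost mass to the diagonal
    return P

def stat(P):
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

for n in (4, 8, 16, 64, 200):
    pi = stat(trunc_P(n))
    print(f"n = {n:3d}: pi(1)/pi(0) ≈ {pi[1] / pi[0]:.5f}")
```

The printed ratios settle down as n grows, which is the kind of convergence the result guarantees for stochastically monotone chains.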