Many regenerative arguments in stochastic processes use random times which are akin to stopping times, but which are determined by the future as well as the past behaviour of the process of interest. Such arguments based on ‘conditioning on the future’ are usually developed in an ad hoc way in the context of the application under consideration, thereby obscuring the underlying structure. In this paper we give a simple, unified, and more general treatment of such conditioning theory. We further give a number of novel applications to various particle system models, in particular to various flavours of contact processes and to infinite-bin models, obtaining new results for both existing and new models. Finally, we make connections with the theory of Harris ergodicity.
We present two iterative methods for computing the global and partial extinction probability vectors for Galton–Watson processes with countably infinitely many types. The probabilistic interpretation of these methods involves truncated Galton–Watson processes with finite sets of types and modified progeny generating functions. In addition, we discuss the connection between the convergence norm of the mean progeny matrix and extinction criteria. Finally, we give a sufficient condition for a population to become extinct almost surely even though its population size explodes on average, which is impossible in a branching process with finitely many types. We conclude with some numerical illustrations of our algorithmic methods.
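For readers unfamiliar with the fixed-point structure on which such truncation schemes rest, here is a minimal sketch of the classical finite-type iteration: the extinction probability vector is the minimal fixed point of the progeny generating function, reached by iterating from the zero vector. The two-type generating functions below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Progeny generating functions for a hypothetical two-type process,
# evaluated at s = (s0, s1); these coefficients are illustrative only.
def f(s):
    s0, s1 = s
    return np.array([
        0.2 + 0.5 * s0 * s1 + 0.3 * s1 ** 2,  # type-0 offspring law (assumed)
        0.4 + 0.6 * s0 ** 2,                  # type-1 offspring law (assumed)
    ])

# The extinction probability vector q is the minimal fixed point of f;
# functional iteration from the zero vector converges to it monotonically.
q = np.zeros(2)
for _ in range(10_000):
    q_new = f(q)
    if np.max(np.abs(q_new - q)) < 1e-12:
        break
    q = q_new
print(q)  # componentwise extinction probabilities
```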
We consider a stochastic SIR (susceptible → infective → removed) epidemic on a random graph with specified degree distribution, constructed using the configuration model, and investigate the ‘acquaintance vaccination’ method for targeting individuals of high degree for vaccination. Branching process approximations are developed which yield a post-vaccination threshold parameter, and the asymptotic (large population) probability and final size of a major outbreak. We find that introducing an imperfect vaccine response into the present model for acquaintance vaccination leads to sibling dependence in the approximating branching processes, which may then require infinite type spaces for their analysis and are generally not amenable to numerical calculation. Thus, we propose and analyse an alternative model for acquaintance vaccination, which avoids these difficulties. The theory is illustrated by a brief numerical study, which suggests that the two models for acquaintance vaccination yield quantitatively very similar disease properties.
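As background, the configuration model mentioned above builds a graph with a prescribed degree sequence by pairing half-edges uniformly at random. A minimal sketch (the degree law used is an arbitrary placeholder, not one from the paper's study):

```python
import random

def configuration_model(degrees):
    # Pair half-edges ("stubs") uniformly at random; the resulting
    # multigraph may contain self-loops and multiple edges.
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    if len(stubs) % 2:
        stubs.pop()  # total degree must be even; drop one stub if it is not
    random.shuffle(stubs)
    return list(zip(stubs[::2], stubs[1::2]))

# Placeholder degree distribution, uniform on {1, 2, 3}
edges = configuration_model([random.choice([1, 2, 3]) for _ in range(10)])
print(edges)
```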
The distributions of discrete, continuous, and conditional multiple window scan statistics are studied. The finite Markov chain imbedding technique has previously been applied to obtain the distributions of fixed window scan statistics defined from a sequence of Bernoulli trials. In this paper the technique is extended to compute the distributions of multiple window scan statistics and the exact powers for multiple pulse and Markov dependent alternatives. An application in blood component quality monitoring is provided. Numerical results are also given to illustrate our theoretical results.
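To illustrate the imbedding idea in its simplest form, the sketch below computes the exact distribution function of a fixed-window scan statistic for Bernoulli trials by propagating a probability distribution over window contents; the parameters are placeholders, and the paper's multiple-window extension is not attempted here.

```python
from itertools import product

def prob_scan_below(n, w, k, p):
    # P(every width-w window sum of n Bernoulli(p) trials is < k), via a
    # finite Markov chain whose state is the content of the last w-1 trials.
    states = {s: (p ** sum(s)) * ((1 - p) ** (w - 1 - sum(s)))
              for s in product((0, 1), repeat=w - 1)}   # first w-1 trials
    for _ in range(n - w + 1):                          # slide over full windows
        nxt = dict.fromkeys(states, 0.0)
        for s, pr in states.items():
            for x, px in ((1, p), (0, 1 - p)):
                if sum(s) + x < k:                      # window sum stays below k
                    nxt[s[1:] + (x,)] += pr * px
        states = nxt                                    # absorbed mass is dropped
    return sum(states.values())

print(prob_scan_below(n=20, w=5, k=4, p=0.3))  # placeholder parameters
```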
We consider a general homogeneous continuous-time Markov process with restarts. The process is forced to restart from a given distribution at time moments generated by an independent Poisson process. The motivation for studying such processes comes from modeling human and animal mobility patterns, restart processes in communication protocols, and the application of restarted random walks in information retrieval. We provide a connection between the transition probability functions of the original Markov process and the modified process with restarts. We give closed-form expressions for the invariant probability measure of the modified process. When the process evolves on a Euclidean space, there is also a closed-form expression for the moments of the modified process. We show that the modified process is always positive Harris recurrent and exponentially ergodic with index equal to (or greater than) the rate of restarts. Finally, we illustrate the general results with the standard and geometric Brownian motions.
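For intuition, the closed form for the invariant measure can be stated compactly. If $(P_t)$ is the transition semigroup of the original process, $r$ the restart rate, and $\nu$ the restart distribution (symbols ours), then the stationary law of the process with restarts is the exponentially weighted occupation measure

$$\pi(A) \;=\; r \int_0^\infty e^{-rt}\, \mathbb{P}_\nu\!\left(X_t \in A\right) \mathrm{d}t,$$

that is, the law of the original process started from $\nu$ and run for an independent exponential time with rate $r$.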
We consider a host-parasite model for a population of cells that can be of two types, A or B, and exhibits unilateral reproduction: while a B-cell always splits into two cells of the same type, the two daughter cells of an A-cell can be of any type. The random mechanism that describes how parasites within a cell multiply and are then shared out to the daughter cells is allowed to depend on the hosting mother cell as well as its daughter cells. Focusing on the subpopulation of A-cells and its parasites, our model differs from the single-type model recently studied by Bansaye (2008) in that the sharing mechanism may be biased towards one of the two types. Our main results concern the nonextinctive case and provide information on the behavior, as n → ∞, of the number of A-parasites in generation n and the relative proportion of A- and B-cells in this generation which host a given number of parasites. As in Bansaye (2008), the proofs make use of a so-called random cell line which, when conditioned to be of type A, behaves like a branching process in a random environment.
We introduce two stochastic chemostat models consisting of a coupled population-nutrient process reflecting the interaction between the nutrient and the bacteria in a chemostat of finite volume. The nutrient concentration evolves continuously but depends on the population size, while the population size is a birth-and-death process with coefficients depending on time through the nutrient concentration. The nutrient is shared by the bacteria and thereby regulates the bacterial population size. This regulation, together with the fluctuations due to random births and deaths of individuals, drives the population almost surely to extinction. We are therefore interested in the long-time behavior of the bacterial population conditioned on nonextinction. We prove the global existence of the process and its almost-sure extinction. The existence of quasistationary distributions is obtained from a general fixed-point argument. Moreover, we prove the absolute continuity of the nutrient distribution when conditioned on a fixed number of individuals, and the smoothness of the corresponding densities.
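A crude hybrid simulation conveys the model's structure (continuous nutrient dynamics coupled to a nutrient-dependent birth-death process); every functional form and constant below is an illustrative assumption, not the paper's specification.

```python
import random

def chemostat(T=50.0, dt=1e-3, N=20, S=5.0):
    # Euler scheme for the nutrient S coupled to a birth-death process N.
    # All rate forms and constants are illustrative assumptions.
    t = 0.0
    while t < T and N > 0:
        birth = 2.0 * S / (1.0 + S) * N   # Monod-type birth rate (assumed)
        death = 1.0 * N                   # per-capita death rate 1 (assumed)
        u = random.random()
        if u < birth * dt:
            N += 1
        elif u < (birth + death) * dt:
            N -= 1
        # nutrient: inflow/dilution minus consumption by the population
        S += (1.0 * (10.0 - S) - 0.5 * S / (1.0 + S) * N) * dt
        S = max(S, 0.0)
        t += dt
    return t, N, S

print(chemostat())  # extinction is certain eventually, though possibly after T
```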
We extend many of the classical results for standard one-dimensional diffusions to a diffusion process with memory of the form $dX_t = \sigma(X_t, \underline{X}_t)\,dW_t$, where $\underline{X}_t = m \wedge \inf_{0 \le s \le t} X_s$. In particular, we compute the expected time for $X$ to leave an interval, classify the boundary behavior at 0, and derive a new occupation time formula for $X$. We also show that $(X_t, \underline{X}_t)$ admits a joint density, which can be characterized in terms of two independent tied-down Brownian meanders (or, equivalently, two independent Bessel-3 bridges). Finally, we show that the joint density satisfies a generalized forward Kolmogorov equation in a weak sense, and we derive a new forward equation for down-and-out call options.
We present a new algorithm to discretize a decoupled forward‒backward stochastic differential equation driven by a pure jump Lévy process (FBSDEL for short). The method consists of two steps. In the first step we approximate the FBSDEL by a forward‒backward stochastic differential equation driven by a Brownian motion and a Poisson process (FBSDEBP for short), in which we replace the small jumps by a Brownian motion. We then prove the convergence of the approximation as the size ε of the small jumps goes to 0. In the second step we obtain the L^p-Hölder continuity of the solution of the FBSDEBP and construct two numerical schemes for it. Based on the L^p-Hölder estimate, we prove the convergence of the schemes as the number of time steps n goes to ∞. Combining the two steps yields the convergence of the numerical schemes to the solution of the FBSDEL.
Continuing the work in Bertoin (2011), we study the distribution of the maximal number X_k^* of offspring amongst all individuals in a critical Galton‒Watson process started with k ancestors, treating the case in which the reproduction law has a regularly varying tail F̅ with index −α for α > 2 (and, hence, finite variance). We show that X_k^*, suitably normalized, converges in distribution to a Fréchet law with shape parameter α/2; this contrasts sharply with the case 1 < α < 2, in which the variance is infinite. More generally, we obtain a weak limit theorem for the offspring sequence ranked in decreasing order, in terms of the atoms of a certain doubly stochastic Poisson measure.
We consider a random binary tree with n labelled leaves and use a pruning procedure on this tree to construct a β(3/2, 1/2)-coalescent process. We also use the continuous analogue of this construction, i.e. a pruning procedure on Aldous's continuum random tree, to construct a continuous-state-space process that has the same structure as the β-coalescent process up to a time change. These two constructions enable us to obtain results on the coalescent process, such as the asymptotics of the number of coalescence events or the law of the blocks involved in the last coalescence event.
In this work we study discrete-time Markov decision processes (MDPs) with constraints, where all the objectives have the same form of expected total cost over the infinite time horizon. Our aim is to analyze this problem using the linear programming approach. Under some technical hypotheses, it is shown that if there exists an optimal solution for the associated linear program then there exists a randomized stationary policy which is optimal for the MDP, and that the optimal value of the linear program coincides with the optimal value of the constrained control problem. A second important result states that the set of randomized stationary policies is a sufficient set for solving this MDP. It is important to note that, in contrast with the classical results in the literature, we do not assume the MDP to be transient or absorbing. More importantly, we do not require the cost functions to be nonnegative or bounded below. Several examples are presented to illustrate our results.
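Schematically, the associated linear program is the classical one over occupation measures. In generic notation (ours, not the paper's), with initial distribution $\gamma$, transition kernel $Q$, objective cost $c$, and constraint costs $d_i$ with bounds $k_i$:

$$\begin{aligned} \text{minimize}\quad & \int_{X\times A} c \,\mathrm{d}\mu \\ \text{subject to}\quad & \mu(\Gamma\times A) = \gamma(\Gamma) + \int_{X\times A} Q(\Gamma \mid x,a)\,\mu(\mathrm{d}(x,a)) \quad \text{for all measurable } \Gamma \subseteq X, \\ & \int_{X\times A} d_i \,\mathrm{d}\mu \le k_i, \qquad i = 1,\dots,N. \end{aligned}$$

An optimal $\mu$, when it exists, is then disintegrated into a randomized stationary policy.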
In a discrete-time single-type Galton–Watson branching random walk $\{Z_n, \zeta_n\}_{n \ge 0}$, where $Z_n$ is the population of the $n$th generation and $\zeta_n$ is the collection of the positions on $\mathbb{R}$ of the $Z_n$ individuals in the $n$th generation, let $Y_n$ be the position of a randomly chosen individual from the $n$th generation and $Z_n(x)$ the number of points in $\zeta_n$ that are less than or equal to $x$, for $x \in \mathbb{R}$. In this paper we show, in the explosive case (i.e. $m = E(Z_1 \mid Z_0 = 1) = \infty$) when the offspring distribution is in the domain of attraction of a stable law of order $\alpha$, $0 < \alpha < 1$, that the sequence of random functions $\{Z_n(x)/Z_n : -\infty < x < \infty\}$ converges in the finite-dimensional sense to $\{\delta_x : -\infty < x < \infty\}$, where $\delta_x \equiv \mathbf{1}\{N \le x\}$ and $N$ is an $N(0,1)$ random variable.
Let $p$ be a real number greater than one and let $\Gamma$ be a graph of bounded degree. We investigate links between the $p$-harmonic boundary of $\Gamma$ and the $D_p$-massive subsets of $\Gamma$. In particular, if there are $n$ pairwise disjoint $D_p$-massive subsets of $\Gamma$, then the $p$-harmonic boundary of $\Gamma$ consists of at least $n$ elements. We show that the converse of this statement is also true.
We consider the bipartite matching model of customers and servers introduced by Caldentey, Kaplan and Weiss (2009). Customers and servers play symmetrical roles. There are finite sets C and S of customer and server classes, respectively. Time is discrete and at each time step one customer and one server arrive in the system according to a joint probability measure μ on C × S, independently of the past. Also, at each time step, pairs of matched customers and servers, if they exist, depart from the system. Authorized matchings are given by a fixed bipartite graph (C, S, E ⊂ C × S). A matching policy is chosen, which decides how to match when there are several possibilities. Customers/servers that cannot be matched are stored in a buffer. The evolution of the model can be described by a discrete-time Markov chain. We study its stability under various admissible matching policies, including ML (match the longest), MS (match the shortest), FIFO (match the oldest), RANDOM (match uniformly), and PRIORITY. There exist natural necessary conditions for stability (independent of the matching policy) defining the maximal possible stability region. For some bipartite graphs, we prove that the stability region is indeed maximal for any admissible matching policy. For the ML policy, we prove that the stability region is maximal for any bipartite graph. For the MS and PRIORITY policies, we exhibit a bipartite graph with a non-maximal stability region.
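The dynamics are easy to simulate; the sketch below runs the chain under a greedy 'match the longest' rule. The class sets, the matching graph E, and the arrival law μ are chosen arbitrarily for illustration (here so that the natural necessary stability conditions hold strictly).

```python
import random
from collections import Counter

# Classes, matching graph, and arrival law are illustrative placeholders.
C, S = ["c1", "c2"], ["s1", "s2"]
E = {("c1", "s1"), ("c2", "s1"), ("c2", "s2")}   # authorized matchings
mu = {("c1", "s1"): 0.24, ("c1", "s2"): 0.16,    # joint arrival law on C x S
      ("c2", "s1"): 0.36, ("c2", "s2"): 0.24}

cust, serv = Counter(), Counter()                # buffers of unmatched items
arrivals, weights = list(mu), list(mu.values())
for _ in range(10_000):
    c, s = random.choices(arrivals, weights=weights)[0]
    cust[c] += 1
    serv[s] += 1
    while True:                                  # greedy "match the longest"
        matchable = [(a, b) for a, b in E if cust[a] and serv[b]]
        if not matchable:
            break
        a, b = max(matchable, key=lambda ab: cust[ab[0]] + serv[ab[1]])
        cust[a] -= 1
        serv[b] -= 1
print(cust, serv)   # buffer contents after 10,000 steps
```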
We consider a continuous-time, single-type, age-dependent Bellman-Harris branching process. We investigate the limit distribution of the point process A(t) = {a_{t,i} : 1 ≤ i ≤ Z(t)}, where a_{t,i} is the age of the ith individual alive at time t and Z(t) is the population size of individuals alive at time t. Also, if Z(t) ≥ k for some integer k ≥ 2, we pick k individuals from those alive at time t by simple random sampling without replacement and trace their lines of descent backward in time until they meet for the first time. Let D_k(t) be the coalescence time (the death time of the last common ancestor) of these k randomly chosen individuals. We study the distribution of D_k(t) and its limit distribution as t → ∞.
In this paper we study absorbing continuous-time Markov decision processes in Polish state spaces with unbounded transition and cost rates, and history-dependent policies. The performance measure is the expected total undiscounted costs. For the unconstrained problem, we show the existence of a deterministic stationary optimal policy, whereas, for the constrained problems with N constraints, we show the existence of a mixed stationary optimal policy, where the mixture is over no more than N+1 deterministic stationary policies. Furthermore, the strong duality result is obtained for the associated linear programs.
Let X_i, i ∈ ℕ, be independent and identically distributed random variables with values in ℕ_0. We transform (‘prune’) the sequence {X_1, …, X_n}, n ∈ ℕ, of discrete random samples into a sequence {0, 1, 2, …, Y_n}, n ∈ ℕ, of contiguous random sets by replacing X_{n+1} with Y_n + 1 if X_{n+1} > Y_n. We consider the asymptotic behaviour of Y_n as n → ∞. Applications include path growth in digital search trees and the number of tables in Pitman's Chinese restaurant process when the latter is conditioned on its limit value.
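The pruning rule is a one-line recursion; a sketch (taking Y_0 = −1 as the convention for the initially empty set, which is our assumption, along with the placeholder input law):

```python
import random

def prune(xs):
    # Replace x by y + 1 whenever x exceeds the running maximum y, so that
    # the values seen so far always form the contiguous set {0, 1, ..., y}.
    y, path = -1, []            # y = -1 encodes the initially empty set
    for x in xs:
        if x > y:
            y += 1              # x is replaced by y + 1
        path.append(y)
    return path                 # the trajectory (Y_1, ..., Y_n)

# Placeholder input: i.i.d. uniform samples on {0, ..., 9}
print(prune([random.randrange(10) for _ in range(25)]))
```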
We consider a generalized telegraph process which follows an alternating renewal process and is subject to random jumps. More specifically, consider a particle at the origin of the real line at time t = 0. It then moves with two alternating velocities of opposite directions, and performs a random jump in the direction of the new velocity at each velocity reversal. We obtain the distribution of the location of the particle at an arbitrary fixed time t, and study this distribution under the assumption of exponentially distributed alternating random times. The cases of jumps having exponential distributions with constant rates and with linearly increasing rates are treated in detail.
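The dynamics can be simulated directly; the sketch below uses a single reversal rate and a single jump rate for both directions, a simplifying assumption made here for illustration (the paper also treats linearly increasing jump rates).

```python
import random

def telegraph_with_jumps(t, v=1.0, rate=1.0, jump_rate=2.0):
    # Particle alternates between velocities +v and -v, holding each for an
    # Exp(rate) time, and jumps an Exp(jump_rate) amount in the direction of
    # the new velocity at each reversal. Parameter values are placeholders.
    x, clock, direction = 0.0, 0.0, +1
    while True:
        tau = random.expovariate(rate)          # sojourn until next reversal
        if clock + tau >= t:
            return x + direction * v * (t - clock)
        x += direction * v * tau
        clock += tau
        direction = -direction                  # velocity reversal ...
        x += direction * random.expovariate(jump_rate)  # ... plus a jump

print(telegraph_with_jumps(10.0))  # one sample of the location at time t = 10
```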
We study a risk process with dividend barrier b in which the claims arrive according to a Markov additive process (MAP). For spectrally negative MAPs, we present linear equations for the expected discounted dividends and the expected discounted penalty function. We apply results on the first exit times of spectrally negative Lévy processes and change-of-measure techniques. Explicit expressions are given when there are positive and negative claims with phase-type distributions.