Examining possibilities for the coexistence of two competing populations is a classic problem which dates back to the earliest ‘predator-prey’ models. In this paper we study this problem in the context of a model introduced in Björnberg et al. (2012) for the spread of a virus infection in a population of healthy cells. The infected cells may be seen as a population of ‘predators’ and the healthy cells as a population of ‘prey’. We show that, depending on the parameters defining the model, there may or may not be coexistence of the two populations, and we give precise criteria for this.
In this article we study a parabolic system of equations closely related to a multitype branching Brownian motion. Particular attention is paid to the monotone traveling wave solutions of this system. Under suitable moment conditions, we show the existence, uniqueness, and asymptotic behavior of such waves with speed greater than or equal to a critical value c̲, and the nonexistence of such waves with speed smaller than c̲.
We consider backward stochastic differential equations in a setting where noise is generated by a countable state, continuous time Markov chain, and the terminal value is prescribed at a stopping time. We show that, given sufficient integrability of the stopping time and a growth bound on the terminal value and BSDE driver, these equations admit unique solutions satisfying the same growth bound (up to multiplication by a constant). This holds without assuming that the driver is monotone in y, that is, our results do not require that the terminal value be discounted at some uniform rate. We show that the conditions are satisfied for hitting times of states of the chain, and hence present some novel applications of the theory of these BSDEs.
Although exact expressions for the extinction probabilities of the Interacting Branching Collision Processes (IBCP) were recently given by Chen et al. [4], some of these expressions are very complicated, so useful information regarding, for example, asymptotic behaviour is hard to extract. Moreover, these exact expressions take very different forms in different cases and thus seem to lack homogeneity. In this paper we show that the asymptotic behaviour of these extremely complicated and tangled expressions for the extinction probabilities of the IBCP follows an elegant and homogeneous power law of a very simple form. In fact, we show that if extinction is not certain then the extinction probabilities {a_n} obey the simple asymptotic law a_n ∼ kn^{-α}ρ_c^n as n → ∞, where k and α are constants, ρ_c is the unique positive zero of C(s), and C(s) is the generating function of the infinitesimal collision rates. Moreover, the interesting and important quantity α takes a very simple and uniform form which can be interpreted as the ‘spectrum’, ranging from -∞ to +∞, of the interaction between the branching and collision components of the IBCP.
We propose a discrete state-space model for storage of urban stormwater in two connected dams using an optimal pump-to-fill policy to transfer water from the capture dam to the holding dam. We assume stochastic supply to the capture dam and independent stochastic demand from the holding dam. We find new analytic formulae to calculate steady-state probabilities for the contents of each dam and thereby enable operators to better understand system behaviour. We illustrate our methods by considering some particular examples and discuss extension of our analysis to a series of three connected dams.
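A minimal computational sketch of this kind of model is given below: it builds the transition matrix of a two-dam chain under a pump-to-fill transfer rule and computes the steady-state probabilities numerically. The capacities, supply and demand distributions, and the within-period ordering of supply, pumping, and demand are illustrative assumptions, not the specification analysed in the paper.

```python
import itertools
import numpy as np

# Illustrative sketch of a two-dam storage chain with a pump-to-fill policy.
# Capacities, distributions, and the within-period ordering (supply -> pump -> demand)
# are assumptions made for this example, not the paper's exact specification.

K1, K2 = 3, 3                      # capacities of the capture dam and the holding dam
supply = {0: 0.3, 1: 0.4, 2: 0.3}  # hypothetical supply distribution to the capture dam
demand = {0: 0.2, 1: 0.5, 2: 0.3}  # hypothetical demand distribution on the holding dam

states = list(itertools.product(range(K1 + 1), range(K2 + 1)))
index = {s: i for i, s in enumerate(states)}
P = np.zeros((len(states), len(states)))

for (x1, x2) in states:
    for s, ps in supply.items():
        for d, pd in demand.items():
            c = min(x1 + s, K1)                 # capture dam after supply (overflow lost)
            pump = min(c, K2 - x2)              # pump-to-fill: top up the holding dam
            c -= pump
            h = max(x2 + pump - d, 0)           # holding dam after meeting demand
            P[index[(x1, x2)], index[(c, h)]] += ps * pd

# Steady-state distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()
for s, p in zip(states, pi):
    print(s, round(p, 4))
```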
A recent result of Ding, Lee and Peres (2012) expressed the cover time of the random walk on a graph in terms of generic chaining for the commute distance. Their argument is based on Dynkin's isomorphism theorem. The purpose of this article is to present an alternative approach to this problem, based only on elementary hitting time estimates and chaining arguments.
We consider a class of infinite time horizon optimal stopping problems for spectrally negative Lévy processes. Focusing on strategies of threshold type, we write explicit expressions for the corresponding expected payoff in terms of the scale function, and then pursue optimal candidate threshold levels. We obtain, and show the equivalence of, the continuous/smooth fit condition and the first-order condition for maximization over threshold levels. As applications, we give a short solution to the McKean optimal stopping problem (the perpetual American put option) and solve an extension of Egami and Yamazaki (2013).
Consider a sequence of Markov-dependent trials where each trial produces a letter of a finite alphabet. Given a collection of patterns, we look at this sequence until one of these patterns appears as a run. We show how the method of gambling teams can be employed to compute the probability that a given pattern is the first pattern to occur.
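For readers who want to sanity-check such probabilities numerically, the sketch below estimates by Monte Carlo which of two competing patterns occurs first in a Markov-dependent sequence of letters; the alphabet, transition probabilities, and patterns are hypothetical, and the gambling-team method of the paper yields these probabilities exactly rather than by simulation.

```python
import random

# Monte Carlo sketch of the 'first pattern to occur' probability for a
# Markov-dependent letter source.  The alphabet, transition matrix, and the
# two competing patterns are illustrative assumptions.

trans = {"H": {"H": 0.6, "T": 0.4},      # hypothetical Markov dependence
         "T": {"H": 0.5, "T": 0.5}}
start = {"H": 0.5, "T": 0.5}
patterns = ["HTH", "TTH"]

def draw(dist):
    r, acc = random.random(), 0.0
    for letter, p in dist.items():
        acc += p
        if r < acc:
            return letter
    return letter

def first_pattern():
    seq = draw(start)
    while True:
        for pat in patterns:
            if seq.endswith(pat):
                return pat
        seq += draw(trans[seq[-1]])

random.seed(0)
n = 100_000
wins = {pat: 0 for pat in patterns}
for _ in range(n):
    wins[first_pattern()] += 1
for pat in patterns:
    print(pat, wins[pat] / n)
```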
We investigate the distribution of the coalescence time (the time back to the most recent common ancestor) for two individuals picked at random (uniformly) in the current generation of a continuous-time Bienaymé-Galton-Watson process founded t units of time ago. We also obtain limiting distributions as t → ∞ in the subcritical case. We extend our results for two individuals to the joint distribution of coalescence times for any finite number of individuals sampled in the current generation.
We consider a one-dimensional discrete symmetric random walk with a reflecting boundary at the origin. Generating functions are found for the two-dimensional probability distribution P{S_n = x, max_{1≤j≤n} S_j = a} of being at position x after n steps while the maximal location that the walker has reached during these n steps is a. We also obtain the familiar (marginal) one-dimensional distribution for S_n = x, and, more importantly, that for max_{1≤j≤n} S_j = a asymptotically at fixed a^2/n. We compute and compare the expectations and variances of the two one-dimensional distributions, finding that they have qualitatively similar forms but differ quantitatively in the anticipated fashion.
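The following sketch estimates the joint distribution of the endpoint and the running maximum of such a reflected walk by simulation; the reflection convention at the origin (a downward step at 0 leaves the walker at 0) and the chosen n are assumptions made only for illustration.

```python
import random
from collections import Counter

# Simulation sketch of the joint law of (S_n, max_{1<=j<=n} S_j) for a symmetric
# +/-1 walk reflected at the origin.

def reflected_walk(n):
    s, m = 0, 0
    for _ in range(n):
        step = random.choice((-1, 1))
        s = max(s + step, 0)   # reflecting boundary: a step down at 0 keeps the walk at 0
        m = max(m, s)
    return s, m

random.seed(1)
n, trials = 20, 200_000
counts = Counter(reflected_walk(n) for _ in range(trials))
# Empirical joint probabilities P{S_n = x, max_j S_j = a} for a few (x, a) pairs.
for (x, a) in sorted(counts)[:10]:
    print(f"P(S_n={x}, max={a}) ~ {counts[(x, a)] / trials:.4f}")
```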
Both small-world models of random networks with occasional long-range connections and gossip processes with occasional long-range transmission of information have similar characteristic behaviour. The long-range elements appreciably reduce the effective distances, measured in space or in time, between pairs of typical points. In this paper we show that their common behaviour can be interpreted as a product of the locally branching nature of the models. In particular, it is shown that both typical distances between points and the proportion of space that can be reached within a given distance or time can be approximated by formulae involving the limit random variable of the branching process.
A probabilistic cellular automaton (PCA) can be viewed as a Markov chain. The cells are updated synchronously and independently, according to a distribution depending on a finite neighborhood. We investigate the ergodicity of this Markov chain. A classical cellular automaton is a particular case of PCA. For a one-dimensional cellular automaton, we prove that ergodicity is equivalent to nilpotency, and is therefore undecidable. We then propose an efficient perfect sampling algorithm for the invariant measure of an ergodic PCA. Our algorithm does not assume any monotonicity property of the local rule. It is based on a bounding process which is shown to also be a PCA. Last, we focus on the PCA majority, whose asymptotic behavior is unknown, and perform numerical experiments using the perfect sampling procedure.
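As a simple illustration of the synchronous, independent update mechanism of a PCA, the sketch below forward-simulates a noisy majority rule on a ring; the noise level and ring size are arbitrary, and this is plain forward simulation rather than the perfect-sampling (bounding-process) algorithm developed in the paper.

```python
import random

# Forward-simulation sketch of a one-dimensional PCA: every cell is updated
# synchronously and independently according to a noisy majority rule on its
# radius-1 neighbourhood.  Parameters are illustrative assumptions.

def pca_step(config, eps):
    n = len(config)
    new = []
    for i in range(n):
        votes = config[i - 1] + config[i] + config[(i + 1) % n]
        maj = 1 if votes >= 2 else 0
        # with probability eps the update ignores the majority and flips it
        new.append(maj if random.random() > eps else 1 - maj)
    return new

random.seed(2)
n, eps, steps = 100, 0.1, 1000
config = [random.randint(0, 1) for _ in range(n)]
for _ in range(steps):
    config = pca_step(config, eps)
print("fraction of 1s after", steps, "steps:", sum(config) / n)
```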
We consider possibly nonlinear distributional fixed-point equations on weighted branching trees, which include the well-known linear branching recursion. In Jelenković and Olvera-Cravioto (2012), an implicit renewal theorem was developed that enables the characterization of the power-tail asymptotics of the solutions to many equations that fall into this category. In this paper we complement the analysis in our 2012 paper to provide the corresponding rate of convergence.
This paper provides an analysis of a discrete version of the Ornstein-Uhlenbeck (OU) process which reflects the small discrete movements caused by the tick size effect. This discrete OU process is derived by matching its first two moments to those of the standard OU process in an infinitesimal sense. We discuss the distributional convergence from the discrete to the continuous process, and show that the convergence is of second order in the step (tick) size. We also provide some analytical results for the proposed discrete OU process itself, including a closed-form formula for the moment generating function and a full characterisation of the steady-state distribution. These results enable us to examine the convergence order explicitly.
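One standard way to build such a tick-size approximation is to place the process on a grid of spacing δ and choose up/down probabilities so that the conditional mean and variance of an increment over a small step dt match those of the OU process. The sketch below implements this moment-matching construction with illustrative parameters; it is not necessarily the authors' exact specification.

```python
import random

# Sketch of a tick-size (birth-death) approximation to the OU process
# dX_t = theta*(mu - X_t) dt + sigma dW_t, obtained by matching the first two
# moments of an increment over a small step dt.  Parameter values and this
# particular moment-matching scheme are illustrative assumptions.

theta, mu, sigma = 1.0, 0.0, 0.5
delta, dt = 0.05, 0.001               # tick size and time step

def step(x):
    drift = theta * (mu - x) * dt
    p_up = (sigma**2 * dt + delta * drift) / (2 * delta**2)   # P(move up by delta)
    p_dn = (sigma**2 * dt - delta * drift) / (2 * delta**2)   # P(move down by delta)
    u = random.random()
    if u < p_up:
        return x + delta
    if u < p_up + p_dn:
        return x - delta
    return x                          # otherwise stay put

random.seed(3)
x, xs = 0.2, []
for _ in range(200_000):
    x = step(x)
    xs.append(x)
mean = sum(xs) / len(xs)
var = sum((v - mean) ** 2 for v in xs) / len(xs)
print("empirical mean ~", round(mean, 3), " variance ~", round(var, 3),
      " (OU stationary variance = sigma^2/(2*theta) =", sigma**2 / (2 * theta), ")")
```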
In this paper we adopt the perturbation approach of Landriault, Renaud and Zhou (2011) to find expressions for the joint Laplace transforms of occupation times for time-homogeneous diffusion processes. The expressions are in terms of solutions to the associated differential equations. These Laplace transforms are applied to study ruin-related problems for several classes of diffusion risk processes.
In this paper we study nonparametric estimation problems for a class of piecewise-deterministic Markov processes (PDMPs). Borovkov and Last (2008) proved a version of Rice's formula for PDMPs, which relates the stationary density to the level crossing intensity. From a statistical point of view, their result suggests a methodology for estimating the stationary density from observations of a sample path of the PDMP. First, we introduce the local time related to the level crossings and construct the local-time estimator for the stationary density, which is unbiased and uniformly consistent. Second, we investigate further estimation problems for the jump intensity and the conditional jump size distribution.
We make a connection between continuous-time and lazy discrete-time Markov chains through a comparison of cutoffs and mixing times in total variation distance. For illustration, we consider finite birth-and-death chains and provide a criterion for cutoff in terms of the eigenvalues of the transition matrix.
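The sketch below illustrates the quantities involved for a small lazy birth-and-death chain: it computes the total variation distance to stationarity as a function of time and the second-largest eigenvalue of the transition matrix. The particular chain (a biased nearest-neighbour walk with holding probability 1/2) is an illustrative assumption.

```python
import numpy as np

# Sketch: total-variation distance to stationarity and spectrum for a small lazy
# birth-and-death chain on {0,...,N} with upward bias p (an illustrative example).

N, p = 30, 0.6
P = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    up = p if i < N else 0.0
    down = (1 - p) if i > 0 else 0.0
    P[i, min(i + 1, N)] += up / 2          # lazy chain: halve all moves...
    P[i, max(i - 1, 0)] += down / 2
    P[i, i] += 1 - (up + down) / 2         # ...and hold with the remaining probability

# Stationary distribution via the left eigenvector for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()

# Total-variation distance from the worst starting state (state 0) after t steps.
dist = np.zeros(N + 1)
dist[0] = 1.0
for t in range(1, 2001):
    dist = dist @ P
    tv = 0.5 * np.abs(dist - pi).sum()
    if tv < 0.25:
        print("TV mixing time (1/4 threshold) ~", t)
        break

print("second-largest eigenvalue:", round(sorted(np.real(vals))[-2], 5))
```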
When a discrete-time homogeneous Markov chain is observed at time intervals corresponding to its time unit, the transition probabilities of the chain can be estimated using known maximum likelihood estimators. In this paper we consider the situation in which a Markov chain is observed at time intervals whose length is twice the time unit of the chain. The issue then arises of characterizing probability matrices whose square root(s) are also probability matrices; this characterization is referred to in the literature as the embedding problem for discrete-time Markov chains. A probability matrix which has a probability root is called embeddable.
In this paper, necessary and sufficient conditions for embeddability are formulated for two-state Markov chains, and the probability square roots of the transition matrix are presented in analytic form. In finding conditions for the existence of probability square roots of (k × k) transition matrices, properties of row-normalized matrices are examined. Besides the existence of probability square roots, the uniqueness of these solutions is discussed: in the case of nonuniqueness, a procedure is introduced to identify a transition matrix that takes into account the specificity of the concrete context. In the case of nonexistence of a probability root, the concept of an approximate probability root is introduced as the solution of an optimization problem related to approximate nonnegative matrix factorization.
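As a quick numerical illustration of the two-state case, the sketch below checks whether the principal matrix square root of a 2 × 2 transition matrix is itself a probability matrix; the example matrix is hypothetical, and non-principal roots, which the paper also treats, are not examined.

```python
import numpy as np
from scipy.linalg import sqrtm

# Sketch: check whether the principal square root of a 2x2 transition matrix is
# itself a probability matrix (one witness of embeddability in the half-time-step
# sense discussed above).  The example matrix is an illustrative assumption.

def principal_root_is_stochastic(P, tol=1e-10):
    R = np.real(sqrtm(P))
    rows_ok = np.allclose(R.sum(axis=1), 1.0, atol=1e-8)
    nonneg = (R >= -tol).all()
    return R, rows_ok and nonneg

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
R, ok = principal_root_is_stochastic(P)
print("principal square root:\n", np.round(R, 4))
print("is a probability matrix:", ok)
print("R @ R reproduces P:", np.allclose(R @ R, P))
```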
The problem of constructing an optimal coadapted coupling for a pair of symmetric random walks on Z_2^d was considered by Connor and Jacka (2008), and the existence of a coupling which is stochastically fastest in the class of all such coadapted couplings was demonstrated. In this paper we show how to generalise this construction to an optimal coadapted coupling for the continuous-time symmetric random walk on K_n^d, where K_n is the complete graph with n vertices. Moreover, we show that although this coupling is not maximal for any n (i.e. it does not achieve equality in the coupling inequality), it does tend to a maximal coupling as n → ∞.