We discuss a continuous-time Markov branching model in which each individual can trigger an alarm according to a Poisson process. The model is stopped when a given number of alarms is triggered or when there are no more individuals present. Our goal is to determine the distribution of the state of the population at this stopping time. In addition, the state distribution at any fixed time is also obtained. The model is then modified to take into account the possible influence of death cases. All distributions are derived using probability-generating functions, and the approach followed is based on the construction of families of martingales.
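The stopped alarm model can be illustrated by a small Gillespie-style simulation. This is a sketch under illustrative assumptions (binary splitting, and specific birth, death, and alarm rates), not the paper's martingale construction:

```python
import random

def simulate_until_alarms(k, birth=1.0, death=0.5, alarm=0.2, seed=0):
    """Gillespie-style simulation of a binary Markov branching process in
    which each individual triggers alarms according to a Poisson process
    with rate `alarm`. The simulation stops at the k-th alarm or at
    extinction and returns (population, alarms_triggered). All rates are
    per-individual and purely illustrative."""
    rng = random.Random(seed)
    n, alarms = 1, 0
    while n > 0 and alarms < k:
        total = n * (birth + death + alarm)
        u = rng.random() * total
        if u < n * birth:
            n += 1            # one individual splits in two
        elif u < n * (birth + death):
            n -= 1            # one individual dies
        else:
            alarms += 1       # one individual triggers an alarm
    return n, alarms
```

Repeating the simulation over many seeds gives an empirical version of the state distribution at the stopping time that the paper derives analytically.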
We study the long-term behaviour of a random walker embedded in a growing sequence of graphs. We define a (generally non-Markovian) real-valued stochastic process, called the knowledge process, that represents the ratio between the number of vertices already visited by the walker and the current size of the graph. We mainly focus on the case where the underlying graph sequence is the growing sequence of complete graphs.
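For the complete-graph case, the knowledge process is easy to simulate. The interleaving below (one uniform jump, then one new vertex) is an assumed dynamic chosen for illustration, not necessarily the paper's exact growth schedule:

```python
import random

def knowledge_process(steps, seed=0):
    """Walker on a growing sequence of complete graphs: at each step the
    walker jumps to a uniformly chosen other vertex of the current
    complete graph, then one new vertex is added. Returns the trajectory
    of the knowledge process (visited vertices / current graph size)."""
    rng = random.Random(seed)
    size, pos = 2, 0
    visited = {0}
    ratios = []
    for _ in range(steps):
        pos = rng.choice([v for v in range(size) if v != pos])  # uniform jump on K_size
        visited.add(pos)
        size += 1                                               # the graph grows by one vertex
        ratios.append(len(visited) / size)
    return ratios
```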
It has been known for nearly a decade that deterministically modeled reaction networks that are weakly reversible and consist of a single linkage class have trajectories that are bounded from both above and below by positive constants (so long as the initial condition has strictly positive components). It is conjectured that the stochastically modeled analogs of these systems are positive recurrent. We resolve this conjecture in the affirmative under the following additional assumptions: (i) the system is binary, and (ii) for each species, there is a complex (vertex in the associated reaction diagram) that is a multiple of that species. To show this result, a new proof technique is developed in which we study the recurrence properties of the n-step embedded discrete-time Markov chain.
This article investigates the long-time behavior of conservative affine processes on the cone of symmetric positive semidefinite
$d\times d$
matrices. In particular, for conservative and subcritical affine processes we show that a finite
$\log$
-moment of the state-independent jump measure is sufficient for the existence of a unique limit distribution. Moreover, we study the convergence rate of the underlying transition kernel to the limit distribution: first, in a specific metric induced by the Laplace transform, and second, in the Wasserstein distance under a first moment assumption imposed on the state-independent jump measure and an additional condition on the diffusion parameter.
Perron–Frobenius theory developed for irreducible non-negative kernels deals with so-called R-positive recurrent kernels. If the kernel M is R-positive recurrent, then the main result determines the limit of the scaled kernel iterations
$R^nM^n$
as
$n\to\infty$
. In Nummelin (1984) this important result is proven using a regeneration method whose major focus is on M having an atom. In the special case when
$M=P$
is a stochastic kernel with an atom, the regeneration method has an elegant explanation in terms of an associated split chain. In this paper we give a new probabilistic interpretation of the general regeneration method in terms of multi-type Galton–Watson processes producing clusters of particles. Treating clusters as macro-individuals, we arrive at a single-type Crump–Mode–Jagers process with a naturally embedded renewal structure.
Yuval Peres and Perla Sousi showed that the mixing times and average mixing times of reversible Markov chains on finite state spaces are equal up to some universal multiplicative constant. We use tools from nonstandard analysis to extend this result to reversible Markov chains on compact state spaces that satisfy the strong Feller property.
We give a fully polynomial-time randomized approximation scheme (FPRAS) for the number of bases in bicircular matroids. This is a natural class of matroids for which counting bases exactly is #P-hard and yet approximate counting can be done efficiently.
We study a continuous-time branching random walk (BRW) on the lattice ℤ^d, d ∈ ℕ, with a single source of branching, that is, the lattice point at which the birth and death of particles can occur. The random walk is assumed to be spatially homogeneous, symmetric and irreducible but, in contrast to the majority of previous investigations, the random walk transition intensities a(x, y) decrease as |y − x|^{−(d+α)} for |y − x| → ∞, where α ∈ (0, 2), which leads to an infinite variance of the random walk jumps. The mechanism of the birth and death of particles at the source is governed by a continuous-time Markov branching process. The source intensity is characterized by a certain parameter β. We calculate the long-time asymptotic behaviour of all integer moments of the number of particles at each lattice point and of the total population size. With respect to the parameter β, a non-trivial critical point βc > 0 is found for every d ≥ 1. In particular, if β > βc then the evolutionary operator governing the behaviour of the first moment of the number of particles has a positive eigenvalue. The existence of a positive eigenvalue yields exponential growth in t of the particle numbers in the case β > βc, called supercritical. Classification of the BRW as subcritical (β < βc) or critical (β = βc) is more complicated for heavy-tailed random walk jumps than for a random walk with finite variance of jumps. We study the asymptotic behaviour of all integer moments of the number of particles at any point y ∈ ℤ^d and of the particle population on ℤ^d according to the ratio d/α.
We study the limit behaviour of a class of random walk models taking values in the standard d-dimensional (
$d\ge 1$
) simplex. From an interior point z, the process chooses one of the
$d+1$
vertices of the simplex, with probabilities depending on z, and then the particle randomly jumps to a new location z′ on the segment connecting z to the chosen vertex. In some special cases, using properties of the Beta distribution, we prove that the limiting distributions of the Markov chain are Dirichlet. We also consider a related history-dependent random walk model in [0, 1] based on an urn-type scheme. We show that this random walk converges in distribution to an arcsine random variable.
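A concrete one-dimensional specialization can be simulated directly. Choosing each endpoint of [0, 1] with probability 1/2 and jumping uniformly on the connecting segment is the classical iterated-random-functions example of Diaconis and Freedman, whose stationary law is the arcsine (Beta(1/2, 1/2)) distribution; the equal endpoint probabilities are one particular choice of the state-dependent rule, used here only for illustration:

```python
import random

def arcsine_walk(n, seed=0):
    """Random walk on [0, 1]: pick endpoint 0 or 1 with probability 1/2
    each, then jump to a uniform point on the segment joining the current
    state to that endpoint. The stationary distribution of this chain is
    arcsine, i.e. Beta(1/2, 1/2)."""
    rng = random.Random(seed)
    z = 0.5
    samples = []
    for _ in range(n):
        u = rng.random()
        if rng.random() < 0.5:
            z = u * z                # jump toward vertex 0
        else:
            z = z + u * (1.0 - z)    # jump toward vertex 1
        samples.append(z)
    return samples
```

A histogram of the samples concentrates near the endpoints, as the arcsine density predicts.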
For a one-dimensional smooth vector field in a neighborhood of an unstable equilibrium, we consider the associated dynamics perturbed by small noise. We give a revealing elementary proof of a result proved earlier using heavy machinery from Malliavin calculus. In particular, we obtain precise vanishing noise asymptotics for the tail of the exit time and for the exit distribution conditioned on atypically long exits. We also discuss our program on rare transitions in noisy heteroclinic networks.
Let X be an Ornstein–Uhlenbeck process driven by a Brownian motion. We propose an expression for the joint density/distribution function of the process and its running supremum. This law is expressed as an expansion involving parabolic cylinder functions. Numerically, we obtain this law faster with our expression than with a Monte Carlo method. Numerical applications illustrate the usefulness of this result.
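The Monte Carlo baseline against which such an expansion is compared can be sketched with a crude Euler–Maruyama scheme; all parameter values below are illustrative, and the scheme is not the paper's expansion itself:

```python
import math
import random

def ou_joint_mc(theta=1.0, sigma=1.0, x0=0.0, T=1.0, a=0.0, b=1.0,
                n_steps=200, n_paths=2000, seed=0):
    """Euler-Maruyama Monte Carlo estimate of
    P(X_T <= a, sup_{s<=T} X_s <= b) for the Ornstein-Uhlenbeck process
    dX = -theta * X dt + sigma dW started at x0."""
    rng = random.Random(seed)
    dt = T / n_steps
    sq = sigma * math.sqrt(dt)
    hits = 0
    for _ in range(n_paths):
        x, m = x0, x0
        for _ in range(n_steps):
            x += -theta * x * dt + sq * rng.gauss(0.0, 1.0)
            m = max(m, x)          # running supremum along the path
        if x <= a and m <= b:
            hits += 1
    return hits / n_paths
```

Because the discretization undersamples the supremum, such estimates are biased; this is one reason a closed-form expansion is both faster and more accurate.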
Both sequential Monte Carlo (SMC) methods (a.k.a. ‘particle filters’) and sequential Markov chain Monte Carlo (sequential MCMC) methods constitute classes of algorithms which can be used to approximate expectations with respect to (a sequence of) probability distributions and their normalising constants. While SMC methods sample particles conditionally independently at each time step, sequential MCMC methods sample particles according to a Markov chain Monte Carlo (MCMC) kernel. Introduced over twenty years ago in [6], sequential MCMC methods have attracted renewed interest recently as they empirically outperform SMC methods in some applications. We establish an
$\mathbb{L}_r$
-inequality (which implies a strong law of large numbers) and a central limit theorem for sequential MCMC methods and provide conditions under which errors can be controlled uniformly in time. In the context of state-space models, we also provide conditions under which sequential MCMC methods can indeed outperform standard SMC methods in terms of asymptotic variance of the corresponding Monte Carlo estimators.
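The SMC side of this comparison can be illustrated by a minimal bootstrap particle filter for a linear Gaussian state-space model. The model, parameter values, and resampling scheme below are generic textbook choices, not those of the paper:

```python
import math
import random

def bootstrap_pf(ys, phi=0.9, sigma=1.0, tau=1.0, n=500, seed=0):
    """Bootstrap particle filter (an SMC method) for the state-space model
    X_t = phi * X_{t-1} + sigma * V_t,  Y_t = X_t + tau * W_t,
    with standard Gaussian noises. Returns the estimate of the log
    normalising constant (log-likelihood) of the observations ys."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, sigma) for _ in range(n)]
    log_z = 0.0
    for y in ys:
        # propagate each particle through the state transition
        parts = [phi * x + sigma * rng.gauss(0.0, 1.0) for x in parts]
        # weight by the Gaussian observation density
        ws = [math.exp(-0.5 * ((y - x) / tau) ** 2) / (tau * math.sqrt(2 * math.pi))
              for x in parts]
        log_z += math.log(sum(ws) / n)
        # multinomial resampling
        parts = rng.choices(parts, weights=ws, k=n)
    return log_z
```

A sequential MCMC method would instead move the particle system at each time step with an MCMC kernel targeting the current filtering distribution, rather than sampling conditionally independently as above.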
This paper investigates the random horizon optimal stopping problem for measure-valued piecewise deterministic Markov processes (PDMPs). This is motivated by population dynamics applications, where one wants to monitor some characteristics of the individuals in a small population. The population and its individual characteristics can be represented by a point measure. We first define a PDMP on a space of locally finite measures. Then we define a sequence of random horizon optimal stopping problems for such processes. We prove that the value function of the problems can be obtained by iterating some dynamic programming operator. Finally, we prove via a simple counterexample that controlling the whole population is not equivalent to controlling a random lineage.
In this paper, a reflected stochastic differential equation (SDE) with jumps is studied for the case where the constraint acts on the law of the solution rather than on its paths. Such reflected SDEs, without jumps, were approximated by Briand et al. (2016) using a numerical scheme based on particle systems. The main contribution of this paper is to prove the existence and uniqueness of solutions to this kind of reflected SDE with jumps and to generalize the results obtained by Briand et al. (2016) to this context.
For a continuous-time random walk X = {Xt, t ⩾ 0} (in general non-Markov), we study the asymptotic behaviour, as t → ∞, of the normalized additive functional $c_t\int_0^{t} f(X_s)\,{\rm d}s$, t ⩾ 0. Similarly to the Markov situation, assuming that the distribution of jumps of X belongs to the domain of attraction of an α-stable law with α > 1, we establish convergence to the local time at zero of an α-stable Lévy motion. We further study a situation where X is delayed by a random environment given by the Poisson shot-noise potential $\Lambda(x,\gamma)= {\rm e}^{-\sum_{y\in \gamma} \phi(x-y)}$, where $\phi\colon \mathbb{R}\to [0,\infty)$ is a bounded function decaying sufficiently fast and γ is a homogeneous Poisson point process, independent of X. We find that in this case the weak limit has both a ‘quenched’ component, which depends on Λ, and a component in which Λ is ‘averaged’.
The sequence of prime numbers p for which a variety over ℚ has no p-adic point plays a fundamental role in arithmetic geometry. This sequence is deterministic; however, we prove that if we choose a typical variety from a family then the sequence has random behaviour. We furthermore prove that this behaviour is modelled by a random walk which, suitably rescaled, behaves like Brownian motion. This has several consequences, one of them being the description of the finer properties of the distribution of the primes in this sequence via the Feynman–Kac formula.
Let $\theta$ be an irrational real number. The map $T_{\theta}\colon y\mapsto (y+\theta)\bmod 1$ from the unit interval $\mathbf{I}=[0,1[$ (endowed with the Lebesgue measure) to itself is ergodic. In a short paper [Parry, Automorphisms of the Bernoulli endomorphism and a class of skew-products. Ergod. Th. & Dynam. Sys. 16 (1996), 519–529] published in 1996, Parry provided an explicit isomorphism between the measure-preserving map $[T_{\theta},\text{Id}]$ and the unilateral dyadic Bernoulli shift when $\theta$ is extremely well approximated by the rational numbers, namely, if
A few years later, Hoffman and Rudolph [Uniform endomorphisms which are isomorphic to a Bernoulli shift. Ann. of Math. (2) 156 (2002), 79–101] showed that for every irrational number, the measure-preserving map $[T_{\theta},\text{Id}]$ is isomorphic to the unilateral dyadic Bernoulli shift. Their proof is not constructive. In the present paper, we notably relax Parry’s condition on $\theta$: the explicit map provided by Parry’s method is an isomorphism between the map $[T_{\theta},\text{Id}]$ and the unilateral dyadic Bernoulli shift whenever
where $[0;a_{1},a_{2},\ldots]$ is the continued fraction expansion and $(p_{n}/q_{n})_{n\geq 0}$ the sequence of convergents of $\Vert\theta\Vert :=\text{dist}(\theta,\mathbb{Z})$. Whether Parry’s map is an isomorphism for every $\theta$ or not is still an open question, although we expect a positive answer.
A new approach to the problem of finding the distribution of integral functionals under the excursion measure is presented. It is based on the technique of excursions straddling a time, stochastic analysis, and calculus on local times; it is carried out for Brownian motion with drift reflecting at 0 and, under some additional assumptions, for a class of Itô diffusions. The new method is an alternative to the classical potential-theoretic approach and gives new specific formulas for distributions under the excursion measure.
We establish an invariance principle and a large deviation principle for a biased random walk
${\text{RW}}_\lambda$
with
$\lambda\in [0,1)$
on
$\mathbb{Z}^d$
. The scaling limit in the invariance principle is not a d-dimensional Brownian motion. For the large deviation principle, its rate function is different from that of a drifted random walk, as may be expected, though the reflected biased random walk evolves like the drifted random walk in the interior of the first quadrant and almost surely visits coordinate planes finitely many times.
We investigate the long-time behavior of the Ornstein–Uhlenbeck process driven by Lévy noise with regime switching. We provide explicit criteria for the transience and recurrence of this process. In contrast with the Ornstein–Uhlenbeck process driven simply by Brownian motion, whose stationary distribution must be light-tailed, both the jumps caused by the Lévy noise and the regime switching described by a Markov chain can give rise to heavy tails of the stationary distribution. The different roles played by the Lévy measure and the regime-switching process are clearly characterized.
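A process of this type can be sketched with a simple Euler scheme. The two-regime setup, the compound-Poisson choice of Lévy noise, and all parameter values below are illustrative assumptions, not the paper's model or criteria:

```python
import math
import random

def switching_ou(T=10.0, dt=0.01, seed=0):
    """Euler scheme for an Ornstein-Uhlenbeck process with two-state
    Markov regime switching and compound-Poisson jumps: in regime i,
    dX = -theta[i] * X dt + sigma[i] dW + dJ, where jumps of J arrive at
    rate lam[i] and have standard Gaussian sizes."""
    rng = random.Random(seed)
    theta = [1.0, 0.2]   # mean-reversion rate per regime
    sigma = [0.5, 1.0]   # diffusion coefficient per regime
    lam = [0.5, 2.0]     # jump intensity per regime
    q = 1.0              # switching rate between the two regimes
    x, r = 0.0, 0
    path = []
    for _ in range(int(T / dt)):
        if rng.random() < q * dt:           # regime switch
            r = 1 - r
        x += -theta[r] * x * dt + sigma[r] * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if rng.random() < lam[r] * dt:      # compound-Poisson (Levy) jump
            x += rng.gauss(0.0, 1.0)
        path.append(x)
    return path
```

Long simulated paths of such a model let one inspect empirically how jumps and slow regimes fatten the tails of the occupation distribution.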