This short note investigates convergence of adaptive Markov chain Monte Carlo algorithms, i.e. algorithms which modify the Markov chain update probabilities on the fly. We focus on the containment condition introduced by Roberts and Rosenthal (2007). We show that if the containment condition is not satisfied, then the algorithm will perform very poorly. Specifically, with positive probability, the adaptive algorithm will be asymptotically less efficient than any nonadaptive ergodic MCMC algorithm. We call such algorithms AdapFail, and conclude that they should not be used.
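For orientation, the containment condition of Roberts and Rosenthal (2007) can be stated as follows: writing $M_\epsilon(x,\gamma) = \inf\{n \ge 1 : \|P_\gamma^n(x,\cdot) - \pi(\cdot)\|_{\mathrm{TV}} \le \epsilon\}$ for the $\epsilon$-convergence time of the fixed kernel $P_\gamma$ started at $x$, containment requires that for every $\epsilon > 0$ the sequence $\{M_\epsilon(X_n,\Gamma_n)\}_{n \ge 0}$ be bounded in probability, where $X_n$ is the current state and $\Gamma_n$ the kernel index chosen by the adaptive scheme at time $n$.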
We observe that the technique of Markov contraction can be used to establish measure concentration for a broad class of noncontracting chains. In particular, geometric ergodicity provides a simple and versatile framework. This leads to a short, elementary proof of a general concentration inequality for Markov and hidden Markov chains, which supersedes some of the known results and easily extends to other processes such as Markov trees. As applications, we provide a Dvoretzky-Kiefer-Wolfowitz-type inequality and a uniform Chernoff bound. All of our bounds are dimension-free and hold for countably infinite state spaces.
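As a point of reference for the contraction technique, recall the Dobrushin contraction coefficient of a Markov kernel $P$, $\theta(P) = \sup_{x,x'} \|P(x,\cdot) - P(x',\cdot)\|_{\mathrm{TV}}$; when $\theta(P) < 1$, a function of the first $n$ steps of the chain that is $1$-Lipschitz in the Hamming metric satisfies a Gaussian-type concentration bound whose variance proxy grows like $n/(1-\theta(P))^2$ (exact constants vary between statements of this type).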
This paper considers the average optimality for a continuous-time Markov decision process in Borel state and action spaces, and with an arbitrarily unbounded nonnegative cost rate. The existence of a deterministic stationary optimal policy is proved under conditions that allow the following: the controlled process can be explosive, the transition rates are weakly continuous, and the multifunction defining the admissible action spaces need be neither compact-valued nor upper semicontinuous.
The probability h(n, m) that the block counting process of the Bolthausen-Sznitman n-coalescent ever visits the state m is analyzed. It is shown that the asymptotic hitting probabilities h(m) = lim_{n→∞} h(n, m), m ∈ ℕ, exist, and an integral formula for h(m) is provided. The proof is based on generating functions and exploits a certain convolution property of the Bolthausen-Sznitman coalescent. It follows that h(m) ∼ 1/log m as m → ∞. An application to linear recursions is indicated.
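These hitting probabilities lend themselves to a quick Monte Carlo check. The sketch below is an illustration (not taken from the paper) that uses the standard description of the Bolthausen-Sznitman coalescent as the Λ-coalescent with Λ uniform on [0, 1], under which the block counting process jumps from b blocks to b − k + 1 blocks at total rate b/(k(k − 1)), so its embedded jump chain moves to b − k + 1 with probability b/((b − 1)k(k − 1)).

    import random

    def hits(n, m):
        # Run the embedded jump chain of the block counting process from n
        # blocks down to 1 block and report whether state m is ever visited.
        b = n
        while b > 1:
            if b == m:
                return True
            u, acc, k = random.random(), 0.0, b   # k = b is a round-off fallback
            for j in range(2, b + 1):
                acc += b / ((b - 1) * j * (j - 1))   # P(jump to b - j + 1)
                if u <= acc:
                    k = j
                    break
            b = b - k + 1
        return b == m

    def estimate_h(n, m, reps=100_000):
        # Monte Carlo estimate of h(n, m); for large n this approximates h(m).
        return sum(hits(n, m) for _ in range(reps)) / reps

For example, estimate_h(1000, m) gives a numerical handle on h(m) for small m; the asymptotic relation h(m) ∼ 1/log m only becomes accurate for large m.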
The stochastic sequential assignment problem assigns distinct workers to sequentially arriving tasks with stochastic parameters. In this paper the assignments are performed so as to minimize the threshold probability, which is the probability of the long-run reward per task failing to achieve a target value (threshold). As the number of tasks approaches infinity, the problem is studied for independent and identically distributed (i.i.d.) tasks with a known distribution function and also for tasks that are derived from r distinct unobservable distributions (governed by a Markov chain). Stationary optimal policies are presented, which simultaneously minimize the threshold probability and achieve the optimal long-run expected reward per task.
The main aim of this paper is to prove the quenched central limit theorem for reversible random walks in a stationary random environment on ℤ without assuming the integrability condition on the conductance and without using any martingale. The method shown here is particularly simple and was introduced by Depauw and Derrien [3]. More precisely, for a given realization ω of the environment, we consider the Poisson equation (P_ω - I)g = f, and then use the pointwise ergodic theorem in [8] to treat the limit of the solutions; the central limit theorem is then established via convergence of moments. In particular, there is an analogue for a Markov process with discrete space and for the diffusion in a stationary random environment.
We provide asymptotics for the range $R_{n}$ of a random walk on the $d$-dimensional lattice indexed by a random tree with $n$ vertices. Using Kingman’s subadditive ergodic theorem, we prove under general assumptions that $n^{-1}R_{n}$ converges to a constant, and we give conditions ensuring that the limiting constant is strictly positive. On the other hand, in dimension $4$, and in the case of a symmetric random walk with exponential moments, we prove that $R_{n}$ grows like $n/\!\log n$. We apply our results to asymptotics for the range of a branching random walk when the initial size of the population tends to infinity.
In this article we derive the asymptotics of the distribution and moments of the size X_n of the minimal clade of a randomly chosen individual in a Bolthausen-Sznitman n-coalescent for n → ∞. The Bolthausen-Sznitman n-coalescent is a Markov process taking states in the set of partitions of {1, …, n}, where 1, …, n are referred to as individuals. The minimal clade of an individual is the equivalence class the individual is in at the time of the first coalescence event this individual participates in. We also provide exact formulae for the distribution of X_n. The main tool used is the connection of the Bolthausen-Sznitman n-coalescent with random recursive trees introduced by Goldschmidt and Martin (2005). With it, we show that X_n - 1 is distributed as the size of a uniformly chosen table in a standard Chinese restaurant process with n - 1 customers.
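The last representation gives a very short sampler for X_n. The sketch below is only an illustration of that identity, with the standard (θ = 1) Chinese restaurant process seating rule made explicit.

    import random

    def minimal_clade_size(n):
        # Sample X_n via the identity above (assumes n >= 2): seat n - 1 customers
        # in a standard Chinese restaurant process, pick a table uniformly, add 1.
        tables = []   # tables[t] = current number of customers at table t
        seat = []     # seat[j]  = table index of the (j + 1)-th customer
        for i in range(1, n):
            r = random.randrange(i)      # i equally likely outcomes
            if r < i - 1:                # join the table of customer r: prob (table size)/i
                t = seat[r]
                tables[t] += 1
            else:                        # open a new table: prob 1/i
                t = len(tables)
                tables.append(1)
            seat.append(t)
        return 1 + random.choice(tables)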
In this paper we discuss the decay properties of Markov branching processes with disasters, including the decay parameter, invariant measures, and quasistationary distributions. After showing that the corresponding q-matrix Q is always regular and, thus, that the Feller minimal Q-process is honest, we obtain the exact value of the decay parameter λ_C. We show that the decay parameter can be expressed explicitly in a simple form. We further show that the Markov branching process with disasters is always λ_C-positive. The invariant vectors, the invariant measures, and the quasistationary distributions are given explicitly.
In this work we introduce a stochastic model for the spread of a virus in a cell population where the virus has two ways of spreading: either by allowing its host cell to live and duplicate, or by multiplying in large numbers within the host cell, causing the host cell to burst and thereby letting the virus enter new uninfected cells. The model is a kind of interacting Markov branching process. We focus in particular on the probability that the virus population survives and on how this depends on a certain parameter λ which quantifies the ‘aggressiveness’ of the virus. Our main goal is to determine the optimal balance between aggressive growth and long-term success. Our analysis shows that the optimal strategy of the virus (in terms of survival) is obtained when the virus has no effect on the host cell's life cycle, corresponding to λ = 0. This is in agreement with experimental data on real viruses.
We consider a class of optimal stopping problems involving both the running maximum as well as the prevailing state of a linear diffusion. Instead of tackling the problem directly via the standard free boundary approach, we take an alternative route and present a parameterized family of standard stopping problems of the underlying diffusion. We apply this family to delineate circumstances under which the original problem admits a unique, well-defined solution. We then develop a discretized approach resulting in a numerical algorithm for solving the considered class of stopping problems. We illustrate the use of the algorithm in both a geometric Brownian motion and a mean reverting diffusion setting.
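As a generic illustration of what such a discretized scheme can look like (this is not the algorithm of the paper, and it ignores the running maximum), the sketch below runs a fixed-point iteration for a perpetual American put under a binomial approximation of geometric Brownian motion; all parameter values are hypothetical.

    import numpy as np

    r, sigma, K, dt = 0.05, 0.3, 100.0, 1e-3       # illustrative parameters
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)             # risk-neutral up-probability
    disc = np.exp(-r * dt)

    grid = K * u ** np.arange(-400, 401)           # log-spaced price grid
    payoff = np.maximum(K - grid, 0.0)
    V = payoff.copy()
    for _ in range(50_000):                        # iterate V <- max(payoff, discounted expectation)
        cont = disc * (p * np.roll(V, -1) + (1.0 - p) * np.roll(V, 1))
        cont[0], cont[-1] = payoff[0], payoff[-1]  # crude clamping at the grid edges
        V_new = np.maximum(payoff, cont)
        if np.max(np.abs(V_new - V)) < 1e-10:
            break
        V = V_new

The stopping region read off from where V equals the payoff is the discrete analogue of the boundaries discussed above.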
Age-dependent branching processes are increasingly used in analyses of biological data. Despite being central to most statistical procedures, the identifiability of these models has not been studied. In this paper we partition a family of age-dependent branching processes into equivalence classes over which the distribution of the population size remains identical. This result can be used to study identifiability of the offspring and lifespan distributions for parametric families of branching processes. For example, we identify classes of Markov processes that are not identifiable. We show that age-dependent processes with (nonexponential) gamma-distributed lifespans are identifiable and that Smith-Martin processes are not always identifiable.
We consider a model for a one-sided limit order book proposed by Lakner, Reed and Stoikov (2013). We show that it can be coupled with a branching random walk and use this coupling to answer a nontrivial question about the long-term behavior of the price. The coupling relies on a classical idea of enriching the state space by artificially creating a filiation, in this context between orders of the book, which we believe has the potential of being useful for a broader class of models.
We establish recurrence and transience criteria for critical branching processes in random environments with immigration. These results are then applied to the recurrence and transience of a recurrent random walk in a random environment on ℤ disturbed by cookies inducing a drift to the right of strength 1.
We study the existence of a unique stationary distribution and ergodicity for a two-dimensional affine process. Its first coordinate process is supposed to be a so-called α-root process with α ∈ (1, 2]. We prove the existence of a unique stationary distribution for the affine process in the α ∈ (1, 2] case; furthermore, we show ergodicity in the α = 2 case.
We study the long-time behaviour of a Markov process evolving in ℕ and conditioned not to hit 0. Assuming that the process comes back quickly from ∞, we prove that the process admits a unique quasistationary distribution (in particular, the distribution of the conditioned process admits a limit as time goes to ∞). Moreover, we prove that the distribution of the process converges exponentially fast in the total variation norm to its quasistationary distribution, and we provide a bound for the rate of convergence. As a first application of our result, we bring new insight into the speed of convergence to the quasistationary distribution for birth-and-death processes: we prove that, starting from any initial distribution, the conditional probability converges in law to a unique distribution ρ supported in ℕ* if and only if the process has a unique quasistationary distribution. Moreover, ρ is this unique quasistationary distribution and the convergence is shown to be exponentially fast in the total variation norm. Also, considering the lack of results on quasistationary distributions for nonirreducible processes on countable spaces, we show, as a second application of our result, the existence and uniqueness of a quasistationary distribution for a class of possibly nonirreducible processes.
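For readers unfamiliar with the terminology, a probability measure $\nu$ on $\{1, 2, \ldots\}$ is a quasistationary distribution for the absorbed process if starting from $\nu$ and conditioning on non-absorption leaves the law unchanged, i.e. $\mathbb{P}_\nu(X_t \in \cdot \mid T_0 > t) = \nu(\cdot)$ for all $t \ge 0$, where $T_0$ denotes the hitting time of $0$.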
We study optimal stopping problems related to the pricing of perpetual American options in an extension of the Black-Merton-Scholes model in which the dividend and volatility rates of the underlying risky asset depend on the running values of its maximum and maximum drawdown. The optimal exercise times are shown to be the first times at which the price of the underlying asset exits some regions restricted by certain boundaries depending on the running values of the associated maximum and maximum drawdown processes. We obtain closed-form solutions to the equivalent free-boundary problems for the value functions with smooth fit at the optimal stopping boundaries and normal reflection at the edges of the state space of the resulting three-dimensional Markov process. We derive first-order nonlinear ordinary differential equations for the optimal exercise boundaries of the perpetual American standard options.
The paper deals with nonlinear Poisson neuron network models with bounded memory dynamics, which can include both Hebbian learning mechanisms and refractory periods. The state of the network is described by the times elapsed since its neurons fired within the post-synaptic transfer kernel memory span, and by the current strengths of synaptic connections, the state spaces of our models being hierarchies of finite-dimensional components. We prove the ergodicity of the stochastic processes describing the behaviour of the networks, establish the existence of continuously differentiable stationary distribution densities (with respect to the Lebesgue measures of corresponding dimensionality) on the components of the state space, and find upper bounds for them. For the density components, we derive a system of differential equations that can be solved in only a few of the simplest cases. Approaches to approximate computation of the stationary density are discussed. One approach is to reduce the dimensionality of the problem by modifying the network so that each neuron cannot fire if the number of spikes it emitted within the post-synaptic transfer kernel memory span reaches a given threshold. We show that the stationary distribution of this ‘truncated’ network converges to that of the unrestricted network as the threshold increases, and that the convergence is at a superexponential rate. A complementary approach uses discrete Markov chain approximations to the network process.
We consider homogeneous STIT tessellations Y in the ℓ-dimensional Euclidean space ℝ^ℓ and show the triviality of the tail σ-algebra. This is a sharpening of the mixing result of Lachièze-Rey (2011).
Many complex systems can be modeled via Markov jump processes. Applications include chemical reactions, population dynamics, and telecommunication networks. Rare-event estimation for such models can be difficult and is often computationally expensive, because typically many (or very long) paths of the Markov jump process need to be simulated in order to observe the rare event. We present a state-dependent importance sampling approach to this problem that is adaptive and uses Markov chain Monte Carlo to sample from the zero-variance importance sampling distribution. The method is applicable to a wide range of Markov jump processes and achieves high accuracy, while requiring only a small sample to obtain the importance parameters. We demonstrate its efficiency through benchmark examples in queueing theory and stochastic chemical kinetics.
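As a minimal illustration of importance sampling for a rare event in a Markov jump process (this is a textbook exponential change of measure, not the authors' state-dependent MCMC-based scheme), the sketch below estimates the probability that an M/M/1 queue started with one customer reaches a high level N before emptying, by simulating the embedded jump chain with the arrival and service rates swapped; the parameters lam, mu, N are purely illustrative.

    import random

    lam, mu, N = 0.5, 1.0, 25       # illustrative arrival rate, service rate, target level

    def is_estimate(reps=100_000):
        total = 0.0
        p_up = lam / (lam + mu)     # original probability that the next jump is an arrival
        q_up = mu / (lam + mu)      # importance-sampling probability (rates swapped)
        for _ in range(reps):
            x, weight = 1, 1.0
            while 0 < x < N:
                if random.random() < q_up:   # simulate under the swapped dynamics
                    weight *= p_up / q_up
                    x += 1
                else:
                    weight *= (1.0 - p_up) / (1.0 - q_up)
                    x -= 1
            if x == N:
                total += weight
        return total / reps

With these rates the exact value is (1 − μ/λ)/(1 − (μ/λ)^N) ≈ 3·10⁻⁸ by the gambler's ruin formula, so the event is far too rare for crude Monte Carlo with a comparable sample size.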