We consider linear-fractional branching processes (one-type and two-type) with immigration in varying environments. For $n\ge0$, let $Z_n$ count the number of individuals in the nth generation, excluding the immigrant who enters the system at time n. We call n a regeneration time if $Z_n=0$. For both the one-type and two-type cases, we give criteria for the finiteness or infiniteness of the number of regeneration times. We then construct concrete examples to exhibit the unusual phenomena that varying environments can cause; for example, it may happen that the process becomes extinct but has only finitely many regeneration times. We also study the asymptotics of the number of regeneration times for the model in the example.
For a partially specified stochastic matrix, we consider the problem of completing it so as to minimize Kemeny’s constant. We prove that for any partially specified stochastic matrix for which the problem is well defined, there is a minimizing completion that is as sparse as possible. We also find the minimum value of Kemeny’s constant in two special cases: when the diagonal has been specified and when all specified entries lie in a common row.
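As a quick numerical sketch of the quantity being minimized (not the paper's completion method), Kemeny's constant of a given irreducible stochastic matrix can be computed from the eigenvalue formula $K=\sum_{i\ge 2} 1/(1-\lambda_i)$, summing over the eigenvalues other than 1; the function name and the two-state example below are illustrative choices.

```python
import numpy as np

def kemeny_constant(P):
    """Kemeny's constant of an irreducible stochastic matrix P via the
    eigenvalue formula K = sum_{i>=2} 1/(1 - lambda_i), where the sum runs
    over the eigenvalues of P other than 1 (convention m_jj = 0)."""
    eigvals = np.linalg.eigvals(P)
    # discard the single eigenvalue closest to 1
    eigvals = sorted(eigvals, key=lambda z: abs(z - 1.0))[1:]
    return float(np.real(sum(1.0 / (1.0 - z) for z in eigvals)))

# Two-state chain with switching probabilities a and b: K = 1/(a + b).
a, b = 0.3, 0.2
P = np.array([[1 - a, a], [b, 1 - b]])
print(kemeny_constant(P))  # ≈ 2.0, which equals 1/(a+b)
```

For the two-state chain the eigenvalues are $1$ and $1-a-b$, so the formula reduces to $1/(a+b)$, a handy sanity check.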
We study the quasi-ergodicity of compact strong Feller semigroups $U_t$, $t> 0$, on $L^2(M,\mu )$; we assume that M is a locally compact Polish space equipped with a locally finite Borel measure $\mu $. The operators $U_t$ are ultracontractive and positivity preserving, but not necessarily self-adjoint or normal. We are mainly interested in those cases where the measure $\mu $ is infinite and the semigroup is not intrinsically ultracontractive. We relate quasi-ergodicity on $L^p(M,\mu )$ and uniqueness of the quasi-stationary measure with the finiteness of the heat content of the semigroup (for large values of t) and with the progressive uniform ground state domination property. The latter property is equivalent to a variant of quasi-ergodicity which progressively propagates in space as $t \uparrow \infty $; the propagation rate is determined by the decay of . We discuss several applications and illustrate our results with examples. This includes a complete description of quasi-ergodicity for a large class of semigroups corresponding to non-local Schrödinger operators with confining potentials.
In this note we provide an upper bound for the difference between the value function of a distributionally robust Markov decision problem and that of a non-robust Markov decision problem. The ambiguity set of probability kernels of the distributionally robust Markov decision process is described by a Wasserstein ball around some reference kernel, whereas the non-robust Markov decision process behaves according to a fixed probability kernel contained in the ambiguity set. Our upper bound for the difference between the value functions is dimension-free and depends linearly on the radius of the Wasserstein ball.
We revisit processes generated by iterated random functions driven by a stationary and ergodic sequence. Such a process is called strongly stable if a random initialization exists for which the process is stationary and ergodic, and for any other initialization the difference of the two processes converges to zero almost surely. Under some mild conditions on the corresponding recursive map, without any condition on the driving sequence we show the strong stability of iterations. Several applications are surveyed such as generalized autoregression and queuing. Furthermore, new results are deduced for Langevin-type iterations with dependent noise and for multitype branching processes.
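The notion of strong stability can be illustrated with the simplest contractive example: a linear autoregression driven by a common noise sequence, where two different initializations produce paths whose difference decays geometrically. The map $F(x,e)=0.8x+e$ below is an illustrative choice, not one of the paper's applications.

```python
import numpy as np

rng = np.random.default_rng(3)
eps = rng.standard_normal(2000)  # driving sequence (i.i.d. here for simplicity)

def iterate(x0):
    """Iterate the random function F(x, e) = 0.8*x + e along the fixed
    driving sequence eps, starting from x0."""
    path = [x0]
    for e in eps:
        path.append(0.8 * path[-1] + e)
    return np.array(path)

# Two different initializations, same driving noise: the difference
# decays like 0.8**n, illustrating strong stability of the iteration.
diff = np.abs(iterate(100.0) - iterate(-50.0))
print(diff[0], diff[-1])
```

Since the map is a contraction with Lipschitz constant 0.8 in $x$, the difference after $n$ steps is $0.8^n$ times the initial gap, regardless of the noise realization.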
Random bridges have gained significant attention in recent years due to their potential applications in various areas, particularly in information-based asset pricing models. This paper explores the influence of the pinning point's distribution on the memorylessness and stochastic dynamics of the bridge process. We introduce Lévy bridges with random length and random pinning points, and analyze their Markov property. Our study demonstrates that the Markov property of Lévy bridges depends on the nature of the distribution of their pinning points. By Lebesgue's decomposition theorem, the law of any random variable can be decomposed into absolutely continuous, discrete, and singular continuous parts with respect to the Lebesgue measure. We show that the Markov property holds when the pinning points' law has no absolutely continuous part. Conversely, the Lévy bridge fails to exhibit Markovian behavior when the pinning point's law has an absolutely continuous part.
We consider continuous-state branching processes (CB processes) which become extinct almost surely. First, we tackle the problem of describing the stationary measures on $(0,+\infty)$ for such CB processes. We give a representation of the stationary measure in terms of scale functions of related Lévy processes. Then we prove that the stationary measure can be obtained from the vague limit of the potential measure, and, in the critical case, can also be obtained from the vague limit of a normalized transition probability. Next, we prove some limit theorems for the CB process conditioned on extinction in a near future and on extinction at a fixed time. We obtain non-degenerate limit distributions which are of the size-biased type of the stationary measure in the critical case and of the Yaglom distribution in the subcritical case. Finally we explore some further properties of the limit distributions.
We are interested in the law of the first passage time of an Ornstein–Uhlenbeck process to time-varying thresholds. We show that this problem is connected to the laws of the first passage time of the process to members of a two-parameter family of functional transformations of a time-varying boundary. For specific values of the parameters, these transformations appear in a realisation of a standard Ornstein–Uhlenbeck bridge. We provide three different proofs of this connection. The first is based on a similar result for Brownian motion, the second uses a generalisation of the so-called Gauss–Markov processes, and the third relies on the Lie group symmetry method. We investigate the properties of these transformations and study the algebraic and analytical properties of an involution operator which is used in constructing them. We also show that these transformations map the space of solutions of Sturm–Liouville equations into the space of solutions of the associated nonlinear ordinary differential equations. Lastly, we interpret our results through the method of images and give new examples of curves with explicit first passage time densities.
We investigate some aspects of the problem of the estimation of birth distributions (BDs) in multi-type Galton–Watson trees (MGWs) with unobserved types. More precisely, we consider two-type MGWs called spinal-structured trees. This kind of tree is characterized by a spine of special individuals whose BD $\nu$ is different from that of the other individuals in the tree (called normal, and whose BD is denoted by $\mu$). In this work, we show that even in such a very structured two-type population, our ability to distinguish the two types and estimate $\mu$ and $\nu$ is constrained by a trade-off between the growth rate of the population and the similarity of $\mu$ and $\nu$. Indeed, if the growth rate is too large, large-deviation events are likely to be observed in the sampling of the normal individuals, preventing us from distinguishing them from special ones. Roughly speaking, our approach succeeds if $r < \mathfrak{D}(\mu,\nu)$, where r is the exponential growth rate of the population and $\mathfrak{D}$ is a divergence measuring the dissimilarity between $\mu$ and $\nu$.
We consider two continuous-time generalizations of the conservative random walks introduced in Englander and Volkov (2022): an orthogonal one and a spherically symmetric one; the latter model is also known as random flights. For both models, we show the transience of the walks when $d\ge 2$ and the rate of direction changing follows a power law $t^{-\alpha}$, $0<\alpha\le 1$, or the law $(\ln t)^{-\beta}$ with $\beta>2$.
We review criteria for comparing the efficiency of Markov chain Monte Carlo (MCMC) methods with respect to the asymptotic variance of estimates of expectations of functions of state, and show how such criteria can justify ways of combining improvements to MCMC methods. We say that a chain on a finite state space with transition matrix P efficiency-dominates one with transition matrix Q if for every function of state it has lower (or equal) asymptotic variance. We give elementary proofs of some previous results regarding efficiency dominance, leading to a self-contained demonstration that a reversible chain with transition matrix P efficiency-dominates a reversible chain with transition matrix Q if and only if none of the eigenvalues of $Q-P$ are negative. This allows us to conclude that modifying a reversible MCMC method to improve its efficiency will also improve the efficiency of a method that randomly chooses either this or some other reversible method, and to conclude that improving the efficiency of a reversible update for one component of state (as in Gibbs sampling) will improve the overall efficiency of a reversible method that combines this and other updates. It also explains how antithetic MCMC can be more efficient than independent and identically distributed sampling. We also establish conditions that can guarantee that a method is not efficiency-dominated by any other method.
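The eigenvalue criterion stated above is easy to check numerically. The sketch below assumes, as in the abstract, that both chains are reversible with respect to the same stationary distribution (here uniform on two states), so that $Q-P$ has real eigenvalues; the function name and example matrices are illustrative.

```python
import numpy as np

def efficiency_dominates(P, Q, tol=1e-10):
    """Check the abstract's criterion: a reversible chain P efficiency-dominates
    a reversible chain Q (with the same stationary distribution) if and only if
    no eigenvalue of Q - P is negative."""
    eigvals = np.linalg.eigvals(Q - P)  # real when P, Q are reversible w.r.t. the same pi
    return bool(np.all(np.real(eigvals) >= -tol))

# Uniform stationary distribution on two states: the "antithetic" chain P,
# which prefers to switch states, dominates the i.i.d.-sampling chain Q.
P = np.array([[0.2, 0.8], [0.8, 0.2]])
Q = np.array([[0.5, 0.5], [0.5, 0.5]])
print(efficiency_dominates(P, Q))  # True
print(efficiency_dominates(Q, P))  # False
```

Here $Q-P$ has eigenvalues $0$ and $0.6$, so $P$ dominates $Q$ but not conversely; this is the antithetic effect mentioned in the abstract, where negative correlation beats independent sampling.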
We consider a Markov control model with Borel state space, compact metric action space, and transitions assumed to have a density function with respect to some probability measure satisfying some continuity conditions. We study the optimization problem of maximizing the probability of visiting some subset of the state space infinitely often, and we show that there exists an optimal stationary Markov policy for this problem. We endow the set of stationary Markov policies and the family of strategic probability measures with adequate topologies (namely, the narrow topology for Young measures and the $ws^\infty$-topology, respectively) to obtain compactness and continuity properties, which allow us to obtain our main results.
In this paper, we consider a joint drift rate control and two-sided impulse control problem in which the system manager adjusts the drift rate as well as the instantaneous relocation of a Brownian motion, with the objective of minimizing the total average state-related cost and control cost. The system state can be negative. Assuming that instantaneous upward and downward relocations have different cost structures, each consisting of a setup cost and a variable cost, we prove that the optimal control policy takes the form $\{(s^{*},q^{*},Q^{*},S^{*}),\{\mu^{*}(x) \colon x\in[s^{*},S^{*}]\}\}$. Specifically, the optimal impulse control policy is characterized by a quadruple $(s^{*},q^{*},Q^{*},S^{*})$: the system state is immediately relocated upward to $q^{*}$ once it drops to $s^{*}$ and immediately relocated downward to $Q^{*}$ once it rises to $S^{*}$. The optimal drift rate depends solely on the current system state and is characterized by a function $\mu^{*}(\cdot)$ on $[s^{*},S^{*}]$. By analyzing an associated free boundary problem consisting of an ordinary differential equation and several free boundary conditions, we obtain these optimal policy parameters and establish the optimality of the proposed policy using a lower-bound approach. Finally, we investigate numerically the effect of the system parameters on the optimal policy parameters as well as on the system's long-run average cost.
Continuous-time Markov chains are frequently used to model the stochastic dynamics of (bio)chemical reaction networks. However, except in very special cases, they cannot be analyzed exactly. Additionally, simulation can be computationally intensive. An approach to address these challenges is to consider a more tractable diffusion approximation. Leite and Williams (Ann. Appl. Prob. 29, 2019) proposed a reflected diffusion as an approximation for (bio)chemical reaction networks, which they called the constrained Langevin approximation (CLA) as it extends the usual Langevin approximation beyond the first time some chemical species becomes zero in number. Further explanation and examples of the CLA can be found in Anderson et al. (SIAM Multiscale Modeling Simul. 17, 2019).
In this paper, we extend the approximation of Leite and Williams to (nearly) density-dependent Markov chains, as a first step to obtaining error estimates for the CLA when the diffusion state space is one-dimensional, and we provide a bound for the error in a strong approximation. We discuss some applications for chemical reaction networks and epidemic models, and illustrate these with examples. Our method of proof is designed to generalize to higher dimensions, provided there is a Lipschitz Skorokhod map defining the reflected diffusion process. The existence of such a Lipschitz map is an open problem in dimensions greater than one.
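In one dimension the Skorokhod map has an explicit form, so a reflected diffusion of the kind discussed above can be simulated with a simple Euler scheme that pushes the state back to zero whenever an increment would make it negative. The drift and diffusion coefficients below are an illustrative birth–death-style choice, not taken from the paper.

```python
import numpy as np

def reflected_euler(x0, drift, sigma, dt, n_steps, rng):
    """Euler scheme for a one-dimensional diffusion reflected at 0: each
    Gaussian increment is applied and the state is clipped at 0, which is
    the explicit one-dimensional Skorokhod map."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dx = drift(x[k]) * dt + sigma(x[k]) * np.sqrt(dt) * rng.standard_normal()
        x[k + 1] = max(x[k] + dx, 0.0)
    return x

rng = np.random.default_rng(1)
# Illustrative coefficients: constant inflow with linear degradation, and
# square-root-type noise reminiscent of a chemical species count kept non-negative.
path = reflected_euler(
    5.0,
    drift=lambda x: 1.0 - 0.5 * x,
    sigma=lambda x: np.sqrt(max(x, 0.0) + 1.0),
    dt=0.01, n_steps=2000, rng=rng,
)
print(path.min())  # never negative, by construction
```

The clipping step is exactly why the constrained Langevin approximation remains well defined after a species first hits zero, where the unconstrained Langevin diffusion could go negative.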
It is known that the simple slice sampler has robust convergence properties; however, the class of problems where it can be implemented is limited. In contrast, we consider hybrid slice samplers which are easily implementable and where another Markov chain approximately samples the uniform distribution on each slice. Under appropriate assumptions on the Markov chain on the slice, we give a lower bound and an upper bound of the spectral gap of the hybrid slice sampler in terms of the spectral gap of the simple slice sampler. An immediate consequence of this is that the spectral gap and geometric ergodicity of the hybrid slice sampler can be concluded from the spectral gap and geometric ergodicity of the simple version, which is very well understood. These results indicate that robustness properties of the simple slice sampler are inherited by (appropriately designed) easily implementable hybrid versions. We apply the developed theory and analyze a number of specific algorithms, such as the stepping-out shrinkage slice sampling, hit-and-run slice sampling on a class of multivariate targets, and an easily implementable combination of both procedures on multidimensional bimodal densities.
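For concreteness, the stepping-out and shrinkage procedures mentioned above can be sketched for a univariate target; this follows the standard Neal-style construction and is an illustrative implementation, not the hybrid samplers analyzed in the paper.

```python
import numpy as np

def slice_sample(logpdf, x0, n, w=1.0, rng=None):
    """Univariate slice sampler with stepping-out and shrinkage.
    logpdf need only be known up to an additive constant."""
    if rng is None:
        rng = np.random.default_rng()
    x, out = x0, []
    for _ in range(n):
        logy = logpdf(x) + np.log(rng.uniform())  # vertical level defining the slice
        # Stepping out: expand [L, R] until both endpoints leave the slice.
        L = x - w * rng.uniform()
        R = L + w
        while logpdf(L) > logy:
            L -= w
        while logpdf(R) > logy:
            R += w
        # Shrinkage: sample uniformly on [L, R], shrinking toward x on rejection.
        while True:
            x1 = rng.uniform(L, R)
            if logpdf(x1) > logy:
                x = x1
                break
            if x1 < x:
                L = x1
            else:
                R = x1
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(0)
# Sample a standard normal (logpdf up to a constant) as a sanity check.
samples = slice_sample(lambda x: -0.5 * x * x, 0.0, 5000, w=2.0, rng=rng)
```

The shrinkage loop guarantees termination for any bounded target, since each rejected proposal strictly shortens the bracketing interval while keeping the current point inside it.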
It is proved that for families of stochastic operators on a countable tensor product, depending smoothly on parameters, any spectral projection persists smoothly, where smoothness is defined using norms based on ideas of Dobrushin. A rigorous perturbation theory for families of stochastic operators with spectral gap is thereby created. It is illustrated by deriving an effective slow two-state dynamics for a three-state probabilistic cellular automaton.
All vital functions of living cells rely on the production of various functional molecules through gene expression. The production periods are burst-like and stochastic due to the discrete nature of biochemical reactions. In certain contexts, the concentrations of RNA or protein require regulation to maintain a fine internal balance within the cell. Here we consider a motif of two types of RNA molecules – mRNA and an antagonistic microRNA – which are encoded by a shared coding sequence and form a feed-forward loop (FFL). This control mechanism is shown to be perfectly adapting in the deterministic context. We demonstrate that the adaptation (of the mean value) becomes imperfect if production occurs in random bursts. The FFL nevertheless outperforms the benchmark feedback loop in terms of counterbalancing variations in the signal. Methodologically, we adapt a hybrid stochastic model, which has been widely used to model a single regulatory molecule, to the current case of a motif involving two species; the use of the Laplace transform thereby circumvents the problem of moment closure that arises owing to the mRNA–microRNA interaction. We expect the approach to be applicable to other systems with nonlinear kinetics.
We consider a Poisson autoregressive process whose parameters depend on the past of the trajectory. We allow these parameters to take negative values, modelling inhibition. More precisely, the model is the stochastic process $(X_n)_{n\ge0}$ with parameters $a_1,\ldots,a_p \in \mathbb{R}$, $p\in\mathbb{N}$, and $\lambda \ge 0$, such that, for all $n\ge p$, conditioned on $X_0,\ldots,X_{n-1}$, $X_n$ is Poisson distributed with parameter $(a_1 X_{n-1} + \cdots + a_p X_{n-p} + \lambda)_+$. This process can be regarded as a discrete-time Hawkes process with inhibition and a memory of length p. In this paper we initiate the study of necessary and sufficient conditions of stability for these processes, which seems to be a hard problem in general. We consider specifically the case $p = 2$, for which we are able to classify the asymptotic behavior of the process for the whole range of parameters, except for boundary cases. In particular, we show that the process remains stochastically bounded whenever the solution to the linear recurrence equation $x_n = a_1x_{n-1} + a_2x_{n-2} + \lambda$ remains bounded, but the converse is not true. Furthermore, the criterion for stochastic boundedness is not symmetric in $a_1$ and $a_2$, in contrast to the case of non-negative parameters, illustrating the complex effects of inhibition.
By the technique of augmented truncations, we obtain the perturbation bounds on the distance of the finite-time state distributions of two continuous-time Markov chains (CTMCs) in a type of weaker norm than the V-norm. We derive the estimates for strongly and exponentially ergodic CTMCs. In particular, we apply these results to get the bounds for CTMCs satisfying Doeblin or stochastically monotone conditions. Some examples are presented to illustrate the limitation of the V-norm in perturbation analysis and to show the quality of the weak norm.
We investigate branching processes in varying environment, for which $\overline{f}_n \to 1$ and $\sum_{n=1}^\infty (1-\overline{f}_n)_+ = \infty$, $\sum_{n=1}^\infty (\overline{f}_n - 1)_+ < \infty$, where $\overline{f}_n$ stands for the offspring mean in generation n. Since subcritical regimes dominate, such processes die out almost surely; therefore, to obtain a nontrivial limit, we consider two scenarios: conditioning on nonextinction, and adding immigration. In both cases we show that the process converges in distribution, without normalization, to a nondegenerate compound-Poisson limit law. The proofs rely on the shape function technique, worked out by Kersting (2020).
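The immigration scenario is easy to simulate: one immigrant joins each generation and reproduces alongside the current population. The Poisson offspring law and the particular means $\overline{f}_n = 1 - 1/(n+2)$ below are illustrative choices that satisfy the abstract's conditions ($\overline{f}_n\to 1$, $\sum (1-\overline{f}_n)_+ = \infty$, $\sum (\overline{f}_n-1)_+ < \infty$).

```python
import numpy as np

def bpve_immigration(means, rng):
    """Branching process in varying environment with one immigrant per
    generation: all z_n current individuals plus the new immigrant reproduce
    with Poisson(means[n]) offspring, so the next generation size is
    Poisson(means[n] * (z_n + 1))."""
    z = 0
    path = [z]
    for m in means:
        z = rng.poisson(m * (z + 1))
        path.append(z)
    return np.array(path)

rng = np.random.default_rng(7)
# Offspring means increasing to 1 from below: sum of (1 - f_n) diverges
# (harmonic tail), while (f_n - 1)_+ is identically zero.
means = 1.0 - 1.0 / (2.0 + np.arange(200))
path = bpve_immigration(means, rng)
```

Summing the independent Poisson offspring counts of the $z_n+1$ reproducing individuals into a single Poisson draw with mean $\overline{f}_n(z_n+1)$ is exact, which keeps the simulator fast even for large populations.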