Recent investigations have argued that there is a simple explicit representation for the Kolmogorov constant c associated with the subcritical Galton–Watson branching process. We exhibit examples showing that although this representation can be valid, more often it is not. Our work is presented in terms of the limiting conditional mean population size $\mu=c^{-1}$. The analogous quantity for the Markov branching process is denoted by $\widehat\mu$. We show that the simple representation put forward for $\widehat\mu$ is in fact an upper bound that is attained only if the offspring-number probability-generating function is quadratic. The conditional mean $\mu$ is the limit of a computable increasing sequence $(\mu_n)$. We determine estimates of n ensuring that, for any small positive number $\varepsilon$, $0\lt\mu-\mu_n\le \varepsilon$.
Hamiltonian Monte Carlo (HMC) is a very popular collection of Markov chain Monte Carlo (MCMC) algorithms. One explanation for the popularity of HMC algorithms is their excellent performance as the dimension d of the target becomes large: theoretical analyses show that popular versions of HMC can have a running time that scales as $d^{0.25}$ under favourable conditions, while even an optimally tuned random-walk Metropolis (RWM) algorithm scales no better than d. In this paper, we investigate a different scaling question: does HMC beat RWM for targets with well-separated modes? We find that the answer is often no. Our main tool for answering this question is a novel and simple formula for the conductance of HMC based on Liouville’s theorem, and we also show how this new formula can be used to give very short proofs of results that seem tedious to show with the usual formula. We also use this result to compute the spectral gap of HMC algorithms, for both the classical HMC with isotropic momentum and the recent Riemannian HMC, for multimodal targets. While we focus on the concrete comparison of RWM and HMC, we expect qualitatively similar conclusions to hold for other gradient-based algorithms.
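The structural properties behind the Liouville-based conductance formula are the time reversibility and volume preservation of the leapfrog integrator. A minimal sketch of a standard leapfrog step on a toy Gaussian target (the function names and parameters are illustrative, not the paper's):

```python
import math

def leapfrog(q, p, eps, n_steps, grad_U):
    """Standard leapfrog integration of Hamiltonian dynamics."""
    p = p - 0.5 * eps * grad_U(q)           # initial half step for momentum
    for _ in range(n_steps - 1):
        q = q + eps * p                     # full step for position
        p = p - eps * grad_U(q)             # full step for momentum
    q = q + eps * p
    p = p - 0.5 * eps * grad_U(q)           # final half step for momentum
    return q, p

# Toy target: standard Gaussian, U(q) = q^2 / 2, so grad U(q) = q.
grad_U = lambda q: q
H = lambda q, p: 0.5 * q * q + 0.5 * p * p  # Hamiltonian (energy)

q0, p0 = 1.0, 1.0
q1, p1 = leapfrog(q0, p0, eps=0.1, n_steps=10, grad_U=grad_U)
# Time reversibility: flip the momentum, integrate again, and the
# trajectory retraces itself back to the start (up to rounding) --
# the property, together with volume preservation (Liouville's
# theorem), that underlies the conductance formula for HMC.
q2, p2 = leapfrog(q1, -p1, eps=0.1, n_steps=10, grad_U=grad_U)
```

The Hamiltonian is only approximately conserved along the discrete trajectory, with an $O(\varepsilon^2)$ error, while reversibility holds exactly.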
This work studies time averages of an observable $h(t,X_t)$, where $X_t$ is the solution to a time-inhomogeneous stochastic differential equation (SDE) driven by drift $b(t,x)$ and diffusion $\sigma(t,x)$ coefficients that change sufficiently slowly in time. In this quasistatic regime we derive an approximation to the time average that is computable from properties of the time-homogeneous SDEs driven by $b(t,\cdot)$ and $\sigma(t,\cdot)$ with fixed t; specifically, we utilize $\log$-Sobolev inequalities for the instantaneous invariant distribution and generator for each t. We obtain explicit non-asymptotic error bounds on this quasistatic approximation, both in the form of concentration inequalities and bounds on the expected value. The error bounds demonstrate a competition between the speed of convergence to the instantaneous invariant distributions and their rate of change, matching the intuition that underlies the quasistatic approximation.
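The intuition behind the quasistatic approximation can be seen already at the level of means. A toy sketch (my example, not the paper's setting): the mean of an Ornstein–Uhlenbeck process $dX = -(X - a(\varepsilon t))\,dt + dW$ solves $m'(t) = -(m - a(\varepsilon t))$, and for small $\varepsilon$ its time average is close to the time average of the instantaneous equilibrium $a(\varepsilon t)$:

```python
import math

# Toy quasistatic illustration: when the equilibrium a(eps*t) moves
# slowly, the solution of m'(t) = -(m - a(eps*t)) tracks it closely,
# so the two time averages nearly agree.  All numbers are illustrative.
eps = 0.01
a = lambda t: math.sin(eps * t)     # slowly varying instantaneous equilibrium
T, h = 1.0 / eps, 0.01              # horizon ~ 1/eps, Euler step size
m, t = a(0.0), 0.0
avg_m = avg_a = 0.0
for _ in range(int(T / h)):
    avg_m += m * h / T              # running time average of the solution
    avg_a += a(t) * h / T           # running time average of the equilibrium
    m += -(m - a(t)) * h            # explicit Euler step for the mean ODE
    t += h
```

The discrepancy between the two averages is of order $\varepsilon$, mirroring the competition between relaxation speed and rate of change described in the abstract.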
We study a family of Crump–Mode–Jagers branching processes in a random environment that explode, i.e. that grow infinitely large in finite time with positive probability. Building on recent work of Iyer and the author (‘On the structure of genealogical trees associated with explosive Crump–Mode–Jagers branching processes’, arXiv:2311.14664, 2023), we weaken certain assumptions required to prove that the branching process, at the time of explosion, contains a (unique) individual with infinite offspring. We then apply these results to super-linear preferential attachment models. In particular, we fill gaps in some of the cases analysed in Appendix A of the work of Iyer and the author and study a large range of previously unattainable cases.
We study the probability that an AR(1) Markov chain $X_{n+1}=aX_n+\xi _{n+1}$, where $a\in (0,1)$ is a constant, stays non-negative for a long time. We find the exact asymptotics of this probability and the weak limit of $X_n$ conditioned to stay non-negative, assuming that the independent and identically distributed innovations $\xi _n$ take only two values $\pm 1$ and $a \le \tfrac 23$. This limiting distribution is quasi-stationary. It has no atoms and is singular with respect to the Lebesgue measure when $\tfrac 12< a \le \tfrac 23$, except for the case when $a=\tfrac 23$ and $\mathbb P(\xi _n=1)=\tfrac 12$, where this distribution is uniform on the interval $[0,3]$. This is similar to the properties of Bernoulli convolutions. For $0 < a \le \tfrac 12$, the situation is much simpler and the limiting distribution is a $\delta $-measure. To prove these results, we uncover a close connection between $X_n$ killed at exiting $[0, \infty )$ and the classical dynamical system defined by the piecewise linear mapping $x \mapsto x/a + 1/2\ \pmod 1$. Namely, the trajectory of this system started at $X_n$ deterministically recovers the values of the killed chain in reversed time. We use this fact to construct a suitable Banach space, where the transition operator of the killed chain has the compactness properties that allow us to apply a conventional argument of the Perron–Frobenius type.
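The persistence probability in question is easy to estimate by simulation. A hedged Monte Carlo sketch (the parameter values and starting point are illustrative choices, not from the paper):

```python
import random

# Monte Carlo sketch: estimate P(X_1 >= 0, ..., X_n >= 0) for the AR(1)
# chain X_{n+1} = a X_n + xi_{n+1} with xi = +/-1 equally likely.
# Parameters below (a, x0, horizon, path count) are illustrative.
random.seed(0)
a, x0, horizon, n_paths = 0.6, 0.5, 30, 20000

death_times = []
for _ in range(n_paths):
    x, t = x0, horizon
    for k in range(1, horizon + 1):
        x = a * x + random.choice((-1.0, 1.0))
        if x < 0:
            t = k - 1          # chain killed on entering (-inf, 0) at step k
            break
    death_times.append(t)

# survival[n-1] = estimated probability of staying non-negative up to time n
survival = [sum(t >= n for t in death_times) / n_paths
            for n in range(1, horizon + 1)]
```

The estimated survival probabilities decay roughly geometrically, consistent with the exact asymptotics established in the paper.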
In this paper we propose a new efficient algorithm to compute the value function for zero-sum stopping games featuring two players with opposing interests. This can be seen as a game version of the ‘forward algorithm’ for (one-player) optimal stopping problems, first introduced by Irle (2006) for discrete-time Markov chains and later revisited by Miclo and Villeneuve (2021) for continuous-time Markov processes on general state spaces. This paper focuses on a game driven by a homogeneous continuous-time Markov chain taking values in a finite state space and also discusses the number of iterations needed. Illustrated computational implementations for a few particular examples are also provided.
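For background, the value function of a discounted zero-sum stopping (Dynkin) game on a finite chain can always be obtained by plain fixed-point iteration; this is a standard sketch, not the forward algorithm proposed in the paper, and all numbers are illustrative:

```python
# Background sketch: value iteration for a discounted zero-sum stopping
# game on a finite Markov chain.  The maximiser may stop to receive f(x),
# the minimiser may stop and pay g(x) >= f(x); the value solves the
# fixed-point equation V = max(f, min(g, beta * P V)).
P = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.3, 0.3, 0.4]]          # transition matrix of the driving chain
f = [0.0, 1.0, 2.0]            # lower payoff (maximiser stops)
g = [1.0, 2.0, 4.0]            # upper payoff (minimiser stops)
beta = 0.9                     # discount factor, makes the map a contraction

V = [0.0, 0.0, 0.0]
for _ in range(500):           # iterate the contraction to convergence
    PV = [sum(P[i][j] * V[j] for j in range(3)) for i in range(3)]
    V = [max(f[i], min(g[i], beta * PV[i])) for i in range(3)]
```

The iterates converge at geometric rate $\beta$, and by construction the limit satisfies $f \le V \le g$ pointwise.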
Fractional Brownian motion, with its long-time correlated increments, has been applied in many fields in recent years. Since volatility was shown to be rough by Gatheral, Jaisson, and Rosenbaum, fractional Brownian motion has gained popularity as a financial model. In this work, we revisit the definitions and properties of the univariate and multivariate fractional Brownian motions, and consider four simulation methods. We demonstrate the issues associated with applying the standard Euler scheme for simulating stochastic processes driven by fractional Brownian motion with $H < \frac{1}{2}$ (which we call the rough models). We then introduce a novel approximate method for simulating such rough models based on the fast algorithm of Ma and Wu, which yields roughly a tenfold speedup. Finally, we consider applications of these methods to option pricing.
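One standard exact simulation method on a small grid is Cholesky factorisation of the fBm covariance $C(s,t) = \tfrac12(s^{2H} + t^{2H} - |t-s|^{2H})$; the sketch below builds and verifies such a factor in the rough regime $H < 1/2$ (the grid and $H$ are illustrative, and this is one textbook method, not the paper's fast algorithm):

```python
import math

# Sketch of the exact (Cholesky) approach to simulating fractional
# Brownian motion on a small grid: factor the covariance matrix
# C(s,t) = (s^{2H} + t^{2H} - |t-s|^{2H}) / 2 as L L^T; multiplying L
# by a vector of i.i.d. standard normals then gives an exact sample.
# Here we only construct and verify the factor; H = 0.3 < 1/2 (rough).
H = 0.3
times = [0.1 * k for k in range(1, 9)]
n = len(times)
C = [[0.5 * (s ** (2 * H) + t ** (2 * H) - abs(t - s) ** (2 * H))
      for t in times] for s in times]

def cholesky(A):
    """Lower-triangular L with L L^T = A, for symmetric positive definite A."""
    m = len(A)
    L = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = (math.sqrt(A[i][i] - s) if i == j
                       else (A[i][j] - s) / L[j][j])
    return L

L = cholesky(C)
```

The Cholesky method is exact but costs $O(n^3)$ to factor (and $O(n^2)$ per sample), which is precisely what motivates the faster approximate schemes discussed in the paper.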
We study large deviations for Cox–Ingersoll–Ross processes with small noise and state-dependent fast switching via associated Hamilton–Jacobi–Bellman equations. As time scales separate, when the noise goes to 0 and the rate of switching goes to $\infty$, we get a limit equation characterized by the averaging principle. Moreover, we prove the large deviation principle with an action-integral form rate function to describe the asymptotic behavior of such systems. The new ingredient is establishing the comparison principle in the singular context. The proof is carried out using the nonlinear semigroup method from Feng and Kurtz’s book [14].
This paper introduces a novel expectation-maximization (EM) algorithm for estimating general phase-type (PH) distributions from left-truncated and right-censored (LTRC) data, a common challenge in survival analysis. The proposed algorithm is highly efficient with computational complexity that scales with the number of nonzero elements in the generator matrix. This feature makes the estimation of high-dimensional, sparse PH models computationally tractable and enables the practical use of the computationally intensive extended information criterion for model selection. Numerical experiments demonstrate its significant speed advantage over a modern benchmark and the applicability of PH models to complex lifetime data.
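For orientation, the basic object being fitted is simple to evaluate: a PH distribution with initial vector $\alpha$ and sub-generator T has survival function $S(t) = \alpha\, e^{Tt}\, \mathbf{1}$. A minimal sketch (using a plain Taylor-series matrix exponential and an illustrative 2-phase hyperexponential example, not the paper's EM machinery):

```python
import math

# Illustrative sketch: evaluating the survival function of a phase-type
# (PH) distribution, S(t) = alpha * exp(T t) * 1.  The matrix exponential
# is computed by a plain Taylor series, which is adequate for small,
# well-scaled matrices.  The example (alpha, T) is a 2-phase
# hyperexponential chosen for illustration.
alpha = [0.3, 0.7]
T = [[-1.0, 0.0],
     [0.0, -3.0]]               # diagonal T => mixture of exponentials

def expm(A, t, terms=60):
    """exp(A t) via the Taylor series sum_k (A t)^k / k!."""
    m = len(A)
    S = [[1.0 if i == j else 0.0 for j in range(m)] for i in range(m)]
    P = [row[:] for row in S]   # running term (A t)^k / k!
    for k in range(1, terms):
        P = [[sum(P[i][l] * A[l][j] * t for l in range(m)) / k
              for j in range(m)] for i in range(m)]
        S = [[S[i][j] + P[i][j] for j in range(m)] for i in range(m)]
    return S

def survival(t):
    E = expm(T, t)
    return sum(alpha[i] * E[i][j] for i in range(2) for j in range(2))
```

For this diagonal T the result reduces to $0.3e^{-t} + 0.7e^{-3t}$, which gives a direct correctness check; the sparsity exploited by the paper's algorithm refers to the nonzero pattern of exactly this generator matrix.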
We consider general discrete-time multitype branching processes on a countable set X. According to these processes, a particle of type $x\in X$ generates a random number of children and chooses their types in X, not necessarily independently nor with the same law for different parent types. We introduce a new type of stochastic ordering of multitype branching processes, generalising the germ order introduced by Hutchcroft, which relies on the generating function of the process. We prove that, given two multitype branching processes with laws ${\boldsymbol{\mu}}$ and ${\boldsymbol{\nu}}$ respectively with ${\boldsymbol{\mu}}\ge{\boldsymbol{\nu}}$, in every set where there is survival according to ${\boldsymbol{\nu}}$, there is also survival according to ${\boldsymbol{\mu}}$. Moreover, in every set where there is strong survival according to ${\boldsymbol{\nu}}$, there is also strong survival according to ${\boldsymbol{\mu}}$, provided that the supremum of the global extinction probabilities for the ${\boldsymbol{\nu}}$ process, taken over all starting points x, is strictly smaller than 1. New conditions for survival and strong survival for inhomogeneous multitype branching processes are provided. We also extend a result of Moyal which states that, under some conditions, the global extinction probability for a multitype branching process is the only fixed point of its generating function whose supremum over all starting coordinates may be smaller than 1.
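In the single-type case (the simplest instance of the multitype fixed-point picture above), the extinction probability is the increasing limit of the iterates $f^n(0)$ of the offspring generating function. A minimal sketch with an illustrative Poisson offspring law:

```python
import math

# Single-type illustration (the paper treats the multitype case): the
# extinction probability q is the smallest fixed point of the offspring
# generating function f, obtained as the increasing limit of f^n(0).
# Poisson(1.5) offspring is an illustrative supercritical choice.
lam = 1.5
f = lambda s: math.exp(lam * (s - 1.0))    # Poisson offspring pgf

q = 0.0
for _ in range(300):
    q = f(q)                               # q_n = f^n(0) increases to q
```

Since the mean offspring number 1.5 exceeds 1, the limit q lies strictly between 0 and 1, and it is the smaller of the two fixed points of f on [0, 1] (the other being 1).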
We identify the size of the largest connected component in a subcritical inhomogeneous random graph with a kernel of preferential attachment type. The component is polynomial in the graph size with an explicitly given exponent, which is strictly larger than the exponent for the largest degree in the graph. This is in stark contrast to the behaviour of inhomogeneous random graphs with a kernel of rank one. Our proof uses local approximation by branching random walks going well beyond the weak local limit and novel results on subcritical killed branching random walks.
In this paper we propose a refracted skew Brownian motion as a risk model with endogenous regime switching, which generalizes the refracted diffusion risk process introduced by Gerber and Shiu. We consider an optimal dividend problem for the refracted skew Brownian risk model and identify sufficient conditions under which the barrier strategy, the band strategy, and their variants, respectively, are optimal.
We consider a critical bisexual branching process in a random environment generated by independent and identically distributed random variables. Assuming that the process starts with a large number of pairs N, we prove that its extinction time is of order $\ln^2 N$. Interestingly, this result is valid for a general class of mating functions. Among these are the functions describing the monogamous and polygamous behavior of couples, as well as the function reducing the bisexual branching process to the simple one.
In recent works on the theory of machine learning, it has been observed that heavy tail properties of stochastic gradient descent (SGD) can be studied in the probabilistic framework of stochastic recursions. In particular, Gürbüzbalaban et al. (2021) considered a setup corresponding to linear regression for which iterations of SGD can be modelled by a multivariate affine stochastic recursion $X_n=A_nX_{n-1}+B_n$ for independent and identically distributed pairs $(A_n,B_n)$, where $A_n$ is a random symmetric matrix and $B_n$ is a random vector. However, their approach is not completely correct and, in the present paper, the problem is put into the right framework by applying the theory of irreducible-proximal matrices.
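In the scalar case, the heavy-tail mechanism for such affine recursions is classical (Kesten/Goldie theory): the stationary solution satisfies $\mathbb{P}(X > x) \sim c\,x^{-\kappa}$, where $\kappa > 0$ solves $\mathbb{E}[A^\kappa] = 1$. A one-dimensional sketch with an illustrative two-point law for A (my choice, purely for illustration):

```python
# One-dimensional illustration of the heavy-tail mechanism for the
# affine recursion X_n = A_n X_{n-1} + B_n: the stationary tail index
# kappa solves E[A^kappa] = 1 (Kesten/Goldie).  The two-point law for A
# below satisfies E[log A] < 0 (contraction on average) and P(A > 1) > 0.
vals, probs = (1.5, 0.5), (0.5, 0.5)

def moment(kappa):
    """E[A^kappa] for the two-point law of A."""
    return sum(p * a ** kappa for a, p in zip(vals, probs))

lo, hi = 0.1, 5.0                  # moment(lo) < 1 < moment(hi)
for _ in range(80):                # bisection for the root of E[A^kappa] = 1
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if moment(mid) < 1.0 else (lo, mid)
kappa = 0.5 * (lo + hi)
```

For this particular law $\mathbb{E}[A] = 1$, so the root is exactly $\kappa = 1$, a convenient sanity check; the matrix case studied in the paper replaces the scalar moment condition with spectral conditions on products of random matrices.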
We investigate the Euler–Maruyama (EM) approximation for $\mathbb{R}^d$-valued ergodic stochastic differential equations (SDEs) driven by rotationally invariant $\alpha$-stable processes ($\alpha\in(1,2)$) with Markovian switching. The coefficient g violates the dissipative condition for certain states of the switching process. Using the Lindeberg principle, we establish quantitative error bounds between the original process $(X_t,R_t)_{t\geqslant 0}$ and its EM scheme under a specially designed metric. Furthermore, we derive both a central limit theorem and a moderate deviation principle for the empirical measures of both the SDE and its EM scheme. The theoretical results are subsequently validated through a concrete example.
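A hedged sketch of the kind of scheme being analysed: an EM step for a scalar stable-driven SDE with two-state switching, where one regime is dissipative and the other is not. The stable increments use the standard Chambers–Mallows–Stuck representation; all coefficients and rates below are made up for illustration:

```python
import math, random

# Illustrative Euler-Maruyama (EM) scheme for a scalar SDE driven by a
# symmetric alpha-stable process with two-state Markovian switching.
# Regime 0 has dissipative drift, regime 1 does not (echoing the paper's
# setting); all parameter values are illustrative.
random.seed(1)
alpha, h, N = 1.5, 0.01, 1000

def stable_sym(alpha):
    """Symmetric alpha-stable variate via Chambers-Mallows-Stuck."""
    U = math.pi * (random.random() - 0.5)      # Uniform(-pi/2, pi/2)
    W = -math.log(random.random())             # Exp(1)
    return (math.sin(alpha * U) / math.cos(U) ** (1.0 / alpha)
            * (math.cos(U - alpha * U) / W) ** ((1.0 - alpha) / alpha))

def drift(x, r):
    return -2.0 * x if r == 0 else 0.5 * x     # dissipative only in regime 0

x, r, path = 0.0, 0, [0.0]
for _ in range(N):
    if random.random() < h:                    # regime switch, rate ~ 1
        r = 1 - r
    # EM step: drift * h plus a stable increment scaled by h^{1/alpha}
    x = x + drift(x, r) * h + h ** (1.0 / alpha) * stable_sym(alpha)
    path.append(x)
```

Note the increment scaling $h^{1/\alpha}$ in place of the Gaussian $h^{1/2}$, which is the characteristic feature of EM schemes for $\alpha$-stable noise.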
For Markov chains and Markov processes exhibiting a form of stochastic monotonicity (higher states have higher transition probabilities in terms of stochastic dominance), stability and ergodicity results can be obtained with the use of order-theoretic mixing conditions. We complement these results by providing quantitative bounds on deviations between distributions. We also show that well-known total variation bounds can be recovered as a special case.
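For a finite kernel, the monotonicity property in question is directly checkable: the chain is stochastically monotone when each row stochastically dominates the row for the next-lower state, i.e. the tail sums $\sum_{j\ge k} P(i,j)$ are non-decreasing in i for every k. A small sketch with an illustrative birth–death kernel:

```python
# Illustrative check of stochastic monotonicity for a finite Markov
# kernel: row i+1 must stochastically dominate row i, i.e. the tail
# sums sum_{j >= k} P[i][j] are non-decreasing in i for every k.
# The birth-death kernel below is made up for illustration.
P = [[0.7, 0.3, 0.0],
     [0.3, 0.4, 0.3],
     [0.0, 0.3, 0.7]]

def is_stochastically_monotone(P):
    n = len(P)
    for i in range(n - 1):
        for k in range(n):
            # first-order stochastic dominance of row i+1 over row i
            if sum(P[i + 1][k:]) < sum(P[i][k:]) - 1e-12:
                return False
    return True
```

Reversing the first two rows of P would break the dominance and the check would fail, which makes the function a convenient precondition test before applying order-theoretic mixing bounds.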
We prove new results about comparing the efficiency of general state space Markov chain Monte Carlo algorithms that randomly select a possibly different reversible method at each step (previously known only for finite state spaces). We also provide new, simpler, more accessible proofs of key results, and analyse numerous examples. We provide a full proof of the formula for the asymptotic variance for real-valued functionals on $\varphi$-irreducible reversible Markov chains, first introduced by Kipnis and Varadhan (1986, Commun. Math. Phys. 104, 1–19). Given two Markov kernels P and Q with stationary measure $\pi$, we say that the Markov kernel P efficiency-dominates the Markov kernel Q if the asymptotic variance with respect to P is at most the asymptotic variance with respect to Q for every real-valued functional $f\in L^2(\pi)$. Assuming only a basic background in functional analysis, we prove that for two reversible Markov kernels P and Q, P efficiency-dominates Q if and only if the operator $\mathcal{Q}-\mathcal{P}$, where $\mathcal{P}$ is the operator on $L^2(\pi)$ that maps $f\mapsto\int f(y)P(\cdot,\mathrm{d}y)$ and similarly for $\mathcal{Q}$, is positive on $L^2(\pi)$, i.e. $\langle f,\left(\mathcal{Q}-\mathcal{P}\right)f\rangle\geq0$ for every $f\in L^2(\pi)$ (previous proofs for general state spaces use technical results from monotone operator function theory). We use this result to show that under mild conditions, sandwich variants of data augmentation algorithms efficiency-dominate the original algorithm. We also provide other easy-to-check sufficient conditions for efficiency dominance, some of which are generalized from the finite state space case. We also provide a proof based on that of Tierney (1998, Ann. Appl. Prob. 8, 1–9) that Peskun dominance is a sufficient condition for efficiency dominance for reversible kernels.
Using these results, we show that Markov kernels formed by random selection of other ‘component’ Markov kernels will always efficiency-dominate another Markov kernel formed in this way, as long as the component kernels of the former efficiency-dominate those of the latter. These results on the efficiency dominance of combinations of component kernels generalize the results on the efficiency dominance of combined chains introduced by Neal and Rosenthal (2024, J. Appl. Prob. 62, 188–208) from finite state spaces to general state spaces.
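On a finite state space, efficiency dominance can be checked numerically from the Kipnis–Varadhan formula $v(f,P) = \mathrm{Var}_\pi(f) + 2\sum_{k\ge1}\langle \bar f, P^k \bar f\rangle_\pi$. A small sketch comparing two reversible 2-state kernels (the kernels and test function are illustrative; the more aggressive kernel should win):

```python
# Illustrative finite-state check of efficiency dominance: compute the
# asymptotic variance v(f, P) = Var_pi(f) + 2 sum_{k>=1} <fbar, P^k fbar>_pi
# for two 2-state kernels, both reversible w.r.t. pi = (1/2, 1/2).
# P moves between states more aggressively than Q, so by Peskun
# dominance it should have the smaller asymptotic variance.
pi = [0.5, 0.5]
P = [[0.3, 0.7], [0.7, 0.3]]
Q = [[0.6, 0.4], [0.4, 0.6]]
f = [1.0, -1.0]                      # test functional, already pi-centred

def asy_var(K, f, pi, terms=400):
    """Truncated Kipnis-Varadhan series for the asymptotic variance."""
    v = sum(pi[i] * f[i] * f[i] for i in range(2))   # Var_pi(f)
    g = f[:]
    for _ in range(terms):
        g = [sum(K[i][j] * g[j] for j in range(2)) for i in range(2)]
        v += 2.0 * sum(pi[i] * f[i] * g[i] for i in range(2))
    return v

vP, vQ = asy_var(P, f, pi), asy_var(Q, f, pi)
```

For these kernels the nontrivial eigenvalues are $-0.4$ (for P) and $0.2$ (for Q), so the closed-form values $(1+\lambda)/(1-\lambda)$ are $3/7$ and $3/2$ respectively, confirming the dominance numerically.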
In this paper we study degree-penalized contact processes on Galton–Watson (GW) trees and the configuration model. The model we consider is a modification of the usual contact process on a graph. In particular, each vertex can be either infected or healthy. When infected, each vertex heals at rate one. Also, when infected, a vertex u with degree $d_u$ infects its neighboring vertex v with degree $d_v$ with rate $\lambda / f(d_u, d_v)$ for some positive function f. In the case $f(d_u, d_v)=\max (d_u, d_v)^\mu $ for some $\mu \ge 0$, the infection is slowed down to and from high-degree vertices. This is in line with arguments used in social network science: people with many contacts do not have the time to infect their neighbors at the same rate as people with fewer contacts.
We show that new phase transitions occur in terms of the parameter $\mu $ (at $1/2$) and the degree distribution D of the GW tree.
• When $\mu \ge 1$, the process goes extinct for every distribution D and all sufficiently small $\lambda>0$;
• When $\mu \in [1/2, 1)$, and the tail of D weakly follows a power law with tail-exponent less than $1-\mu $, the process survives globally but not locally for all $\lambda $ small enough;
• When $\mu \in [1/2, 1)$, and $\mathbb {E}[D^{1-\mu }]<\infty $, the process goes extinct almost surely, for all $\lambda $ small enough;
• When $\mu <1/2$, and D is heavier than stretched exponential with stretch-exponent $1-2\mu $, the process survives (locally) with positive probability for all $\lambda>0$.
We also study the product case, where $f(d_u,d_v)=(d_u d_v)^\mu $. In that case, the situation for $\mu < 1/2$ is the same as the one described above, but $\mu \ge 1/2$ always leads to a subcritical contact process for small enough $\lambda>0$ on all graphs. Furthermore, for finite random graphs with prescribed degree sequences, we establish the corresponding phase transitions in terms of the length of survival.
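The dynamics described above are straightforward to simulate event by event. A hedged sketch on the simplest relevant geometry, a star graph with centre degree d (the graph, rates, and parameter values are all illustrative choices, not from the paper):

```python
import random

# Illustrative event-driven (Gillespie) simulation of the degree-penalised
# contact process on a star graph: centre of degree d, leaves of degree 1.
# Infections cross an edge (u, v) at rate lambda / max(d_u, d_v)^mu and
# every infected vertex heals at rate 1.  Parameters are illustrative.
random.seed(2)
d, lam, mu = 10, 0.5, 1.0
rate_edge = lam / max(d, 1) ** mu      # penalised rate on every centre-leaf edge

def extinction_time(max_events=100000):
    centre, leaves = True, set()       # start with only the centre infected
    t = 0.0
    for _ in range(max_events):
        n_inf = int(centre) + len(leaves)
        if n_inf == 0:
            return t                   # extinction
        if centre:                     # centre can infect healthy leaves
            infect_rate = rate_edge * (d - len(leaves))
        else:                          # infected leaves can infect the centre
            infect_rate = rate_edge * len(leaves)
        total = n_inf + infect_rate    # healing contributes rate 1 per infected
        t += random.expovariate(total)
        if random.random() < n_inf / total:          # a healing event
            if centre and random.randrange(n_inf) == 0:
                centre = False
            else:
                leaves.discard(random.choice(sorted(leaves)))
        else:                                        # an infection event
            if centre:
                healthy = [i for i in range(d) if i not in leaves]
                leaves.add(random.choice(healthy))
            else:
                centre = True
    return None                        # still alive after max_events

T = extinction_time()
```

With $\mu = 1$ the per-edge rate is $\lambda/d$, so a high-degree centre infects each neighbour only slowly; with these parameters the infection dies out almost immediately, in line with the subcritical regimes identified above.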
We consider steady-state diffusion in a bounded planar domain with multiple small targets on a smooth boundary. Using the method of matched asymptotic expansions, we investigate the competition of these targets for a diffusing particle and the crucial role of surface reactions on the targets. We start from the classical problem of splitting probabilities for perfectly reactive targets with Dirichlet boundary conditions and improve some earlier results. We discuss how this approach can be generalised to partially reactive targets characterised by a Robin boundary condition. In particular, we show how partial reactivity reduces the effective size of the target. In addition, we consider more intricate surface reactions modelled by mixed Steklov-Neumann or Steklov-Neumann-Dirichlet problems. We provide the first derivation of the asymptotic behaviour of the eigenvalues and eigenfunctions for these spectral problems in the small-target limit. Finally, we show how our asymptotic approach can be extended to interior targets in the bulk and to exterior problems where diffusion occurs in an unbounded planar domain outside a compact set. Direct applications of these results to diffusion-controlled reactions are discussed.
Following the pivotal work of Sevastyanov (1957), who considered branching processes with homogeneous Poisson immigration, much has been done to understand the behaviour of such processes under different types of branching and immigration mechanisms. Recently, the case where the times of immigration are generated by a non-homogeneous Poisson process has been considered in depth. In this work, we demonstrate how we can use the framework of point processes in order to go beyond the Poisson process. As an illustration, we show how to transfer techniques from the case of Poisson immigration to the case where it is spanned by a determinantal point process.