In their celebrated paper [CLR10], Caputo, Liggett and Richthammer proved Aldous’ conjecture and showed that for an arbitrary finite graph, the spectral gap of the interchange process is equal to the spectral gap of the underlying random walk. A crucial ingredient in the proof was the Octopus Inequality — a certain inequality of operators in the group ring $\mathbb{R}\left[{\mathrm{Sym}}_{n}\right]$ of the symmetric group. Here we generalise the Octopus Inequality and apply it to generalising the Caputo–Liggett–Richthammer Theorem to certain hypergraphs, proving some cases of a conjecture of Caputo.
The payoff in the Chow–Robbins coin-tossing game is the proportion of heads when you stop. Stopping to maximize expectation was addressed by Chow and Robbins (1965), who proved there exist integers $k_n$ such that it is optimal to stop at n tosses when heads minus tails is $k_n$. Finding $k_n$ was unsolved except for finitely many cases settled by computer. We prove an $o(n^{-1/4})$ estimate of the stopping boundary of Dvoretzky (1967), which then proves $k_n = \left\lceil \alpha \sqrt{n} - \tfrac{1}{2} + \frac{\left(-2\zeta(-1/2)\right)\sqrt{\alpha}}{\sqrt{\pi}}\, n^{-1/4} \right\rceil$ except for n in a set of density asymptotic to 0 at a power-law rate. Here, $\alpha$ is the Shepp–Walker constant from the Brownian motion analog, and $\zeta$ is Riemann’s zeta function. An $n^{-1/4}$ dependence was conjectured by Christensen and Fischer (2022). Our proof uses moments involving Catalan and Shapiro Catalan triangle numbers which appear in a tree resulting from backward induction, and a generalized backward induction principle. It was motivated by an idea of Häggström and Wästlund (2013) to use backward induction of upper and lower Value bounds from a horizon, which they used numerically to settle a few cases. Christensen and Fischer, with much better bounds, settled many more cases. We use Skorohod’s embedding to get simple upper and lower bounds from the Brownian analog; our upper bound is the one found by Christensen and Fischer in another way. We use them first for many more examples and a conjecture, then algebraically in the tree, with feedback to get much sharper Value bounds near the border, and analytic results. Finally, we give a formula that yields the exact optimal stopping rule for all n up to about a third of a billion; it uses the analytic result plus terms arrived at empirically.
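As a rough illustration of the displayed boundary formula, the following Python sketch evaluates $k_n$ using approximate numerical values for the Shepp–Walker constant $\alpha$ and for $\zeta(-1/2)$; the constants and the helper name are illustrative assumptions, not values or code taken from the paper.

```python
import math

# Illustrative sketch: evaluate the asymptotic stopping boundary k_n from the
# displayed formula. The numerical constants are assumed approximations, not
# values quoted in the paper.
ALPHA = 0.8399236757             # Shepp–Walker constant (approximate)
ZETA_MINUS_HALF = -0.2078862250  # Riemann zeta at -1/2 (approximate)

def k_n(n: int) -> int:
    """Asymptotic optimal stopping boundary after n tosses."""
    correction = (-2.0 * ZETA_MINUS_HALF) * math.sqrt(ALPHA) / math.sqrt(math.pi)
    return math.ceil(ALPHA * math.sqrt(n) - 0.5 + correction * n ** (-0.25))

print([k_n(n) for n in (10, 100, 1000, 10_000)])
```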
We study continuous-time Markov chains on the nonnegative integers under mild regularity conditions (in particular, the set of jump vectors is finite and both forward and backward jumps are possible). Based on the so-called flux balance equation, we derive an iterative formula for calculating stationary measures. Specifically, a stationary measure $\pi(x)$ evaluated at $x\in\mathbb{N}_0$ is represented as a linear combination of a few generating terms, similarly to the characterization of a stationary measure of a birth–death process, where there is only one generating term, $\pi(0)$. The coefficients of the linear combination are recursively determined in terms of the transition rates of the Markov chain. For the class of Markov chains we consider, there is always at least one stationary measure (up to a scaling constant). We give various results pertaining to uniqueness and nonuniqueness of stationary measures, and show that the dimension of the linear space of signed invariant measures is at most the number of generating terms. A minimization problem is constructed in order to compute stationary measures numerically. Moreover, a heuristic linear approximation scheme is suggested for the same purpose by first approximating the generating terms. The correctness of the linear approximation scheme is justified in some special cases. Furthermore, a decomposition of the state space into different types of states (open and closed irreducible classes, and trapping, escaping and neutral states) is presented. Applications to stochastic reaction networks are illustrated.
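For the birth–death special case mentioned above, flux balance across the cut between states $x$ and $x+1$ gives the classical recursion $\pi(x)\,\lambda_x = \pi(x+1)\,\mu_{x+1}$, so the whole measure follows from the single generating term $\pi(0)$. The short Python sketch below (with made-up illustrative rates, not taken from the paper) computes such a measure.

```python
# Illustrative sketch with assumed rates: stationary measure of a birth–death
# chain on {0, ..., n_max}, built from its single generating term pi(0) = 1
# via the flux balance relation pi(x) * birth(x) = pi(x + 1) * death(x + 1).
def birth_death_stationary(birth, death, n_max):
    pi = [1.0]  # generating term pi(0), fixed up to a scaling constant
    for x in range(n_max):
        pi.append(pi[x] * birth(x) / death(x + 1))
    return pi

# Example: immigration-death rates birth(x) = 2, death(x) = 3x give a
# Poisson(2/3)-shaped stationary measure.
measure = birth_death_stationary(lambda x: 2.0, lambda x: 3.0 * x, 10)
print(measure[:4])
```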
We develop general conditions for weak convergence of adaptive Markov chain Monte Carlo processes, which are shown to imply a weak law of large numbers for bounded Lipschitz continuous functions. This allows an estimation theory for adaptive Markov chain Monte Carlo where previously developed theory in total variation may fail or be difficult to establish. Extensions of weak convergence to general Wasserstein distances are established, along with a weak law of large numbers for possibly unbounded Lipschitz functions. The results are applied to autoregressive processes in various settings, unadjusted Langevin processes, and adaptive Metropolis–Hastings.
We introduce a financial market model featuring a risky asset whose price follows a sticky geometric Brownian motion and a riskless asset that grows with a constant interest rate $r\in \mathbb R$. We prove that this model satisfies no arbitrage and no free lunch with vanishing risk only when $r=0$. Under this condition, we derive the corresponding arbitrage-free pricing equation, assess the replicability, and give a representation of the replication strategy. We then show that all locally bounded replicable payoffs for the standard Black–Scholes model are also replicable for the sticky model. Finally, we evaluate via numerical experiments the impact of hedging in discrete time and of misrepresenting price stickiness.
We consider a superprocess $\{X_t\colon t\geq 0\}$ in a random environment described by a Gaussian field $\{W(t,x)\colon t\geq 0,x\in \mathbb{R}^d\}$. First, we set up a representation of $\mathbb{E}[\langle g, X_t\rangle\mathrm{e}^{-\langle f,X_t\rangle }\mid\sigma(W)\vee\sigma(X_r,0\leq r\leq s)]$ for $0\leq s < t$ and some functions $f$, $g$, which generalizes the result in Mytnik and Xiong (2007, Theorem 2.15). Next, we give a uniform upper bound for the conditional log-Laplace equation with unbounded initial values. We then use this to establish the corresponding conditional entrance law. Finally, the excursion representation of $\{X_t\colon t\geq 0\}$ is given.
This paper is concerned with the growth rate of susceptible–infectious–recovered epidemics with general infectious period distribution on random intersection graphs. This type of graph is characterised by the presence of cliques (fully connected subgraphs). We study epidemics on random intersection graphs with a mixed Poisson degree distribution and show that in the limit of large population sizes the number of infected individuals grows exponentially during the early phase of the epidemic, as is generally the case for epidemics on asymptotically unclustered networks. The Malthusian parameter is shown to satisfy a variant of the classical Euler–Lotka equation. To obtain these results we construct a coupling of the epidemic process and a continuous-time multitype branching process, where the type of an individual is (essentially) given by the length of its infectious period. Asymptotic results are then obtained via an embedded single-type Crump–Mode–Jagers branching process.
Motivated by recent developments of quasi-stationary Monte Carlo methods, we investigate the stability of quasi-stationary distributions of killed Markov processes under perturbations of the generator. We first consider a general bounded self-adjoint perturbation operator, and then study a particular unbounded perturbation corresponding to truncation of the killing rate. In both scenarios, we quantify the difference between eigenfunctions of the smallest eigenvalue of the perturbed and unperturbed generators in a Hilbert space norm. As a consequence, $\mathcal{L}^1$-norm estimates of the difference of the resulting quasi-stationary distributions in terms of the perturbation are provided.
We show that for $\lambda\in[0,{m_1}/({1+\sqrt{1-{1}/{m_1}}})]$, the speed of the $\lambda$-biased random walk on a Galton–Watson tree without leaves is strictly decreasing in $\lambda$, where $m_1\geq 2$. Our result extends the interval on which the speed on a Galton–Watson tree is known to be monotone.
This paper characterizes irreducible phase-type representations for exponential distributions. Bean and Green (2000) gave a set of necessary and sufficient conditions for a phase-type distribution with an irreducible generator matrix to be exponential. We extend these conditions to irreducible representations, and we thus give a characterization of all irreducible phase-type representations for exponential distributions. We consider the results in relation to time-reversal of phase-type distributions, PH-simplicity, and the algebraic degree of a phase-type distribution, and we give applications of the results. In particular we give the conditions under which a Coxian distribution becomes exponential, and we construct bivariate exponential distributions. Finally, we translate the main findings to the discrete case of geometric distributions.
For a continuous-time phase-type (PH) distribution, starting with its Laplace–Stieltjes transform, we obtain a necessary and sufficient condition for its minimal PH representation to have the same order as its algebraic degree. To facilitate finding this minimal representation, we transform this condition equivalently into a non-convex optimization problem, which can be effectively addressed using an alternating minimization algorithm. Convergence of the algorithm is also proved. Moreover, the method we develop for continuous-time PH distributions can be used directly for discrete-time PH distributions after establishing an equivalence between the minimal representation problems for continuous-time and discrete-time PH distributions.
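For background, the Laplace–Stieltjes transform that the argument starts from has the standard closed form $f^*(s)=\boldsymbol{\alpha}(sI-T)^{-1}\mathbf{t}$ with exit-rate vector $\mathbf{t}=-T\mathbf{1}$; the Python sketch below evaluates it for an assumed two-phase example and is not the paper's algorithm.

```python
import numpy as np

# Background sketch (standard phase-type formula, not the paper's method):
# a continuous-time PH distribution with initial vector alpha and
# sub-generator T has Laplace–Stieltjes transform
#     f*(s) = alpha (sI - T)^{-1} t,   where t = -T 1 is the exit-rate vector.
# The two-phase example below (Exp(2) followed by Exp(3)) is assumed.
alpha = np.array([1.0, 0.0])
T = np.array([[-2.0, 2.0],
              [0.0, -3.0]])
t = -T @ np.ones(2)

def lst(s: float) -> float:
    return float(alpha @ np.linalg.solve(s * np.eye(2) - T, t))

print(lst(0.0))   # equals 1.0: total probability mass
print(lst(1.0))   # matches (2/3)*(3/4) = 0.5 for this example
```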
We establish a number of results concerning the limiting behaviour of the longest edges in the genealogical tree generated by a continuous-time Galton–Watson process. Separately, we consider the large-time behaviour of the longest pendant edges, the longest (strictly) interior edges, and the longest of all the edges. These results extend the special case of long pendant edges of birth–death processes established in Bocharov et al. (2023).
We study the Markov chain Monte Carlo estimator for numerical integration for functions that do not need to be square integrable with respect to the invariant distribution. For chains with a spectral gap we show that the absolute mean error for $L^p$ functions, with $p \in (1,2)$, decreases like $n^{1/p-1}$, which is known to be the optimal rate. This improves currently known results where an additional parameter $\delta > 0$ appears and the convergence is of order $n^{(1+\delta)/p-1}$.
We consider the hard-core model on a finite square grid graph with stochastic Glauber dynamics parametrized by the inverse temperature $\beta$. We investigate how the transition between its two maximum-occupancy configurations takes place in the low-temperature regime $\beta \to \infty$ in the case of periodic boundary conditions. The hard-core constraints and the grid symmetry make the structure of the critical configurations for this transition, also known as essential saddles, very rich and complex. We provide a comprehensive geometrical characterization of these configurations that together constitute a bottleneck for the Glauber dynamics in the low-temperature limit. In particular, we develop a novel isoperimetric inequality for hard-core configurations with a fixed number of particles and show how the essential saddles are characterized not only by the number of particles but also by their geometry.
The study of many population growth models is complicated by only partial observation of the underlying stochastic process driving the model. For example, in an epidemic outbreak we might know when individuals show symptoms of a disease and are removed, but not when individuals are infected. Motivated by the above example and the long-established approximation of epidemic processes by branching processes, we explore the number of individuals alive in a time-inhomogeneous branching process with a general phase-type lifetime distribution given only (partial) information on the times of deaths of individuals. Deaths are detected independently with a detection probability that can vary with time and type. We show that the number of individuals alive immediately after the kth detected death can be expressed as a mixture of random variables, each of which consists of the sum of k independent zero-modified geometric random variables. Furthermore, in the case of an Erlang lifetime distribution, we derive an easy-to-compute mixture of negative binomial distributions as an approximation of the number of individuals alive immediately after the kth detected death.
We consider time-inhomogeneous ordinary differential equations (ODEs) whose parameters are governed by an underlying ergodic Markov process. When this underlying process is accelerated by a factor $\varepsilon^{-1}$, an averaging phenomenon occurs and the solution of the ODE converges to a deterministic ODE as $\varepsilon$ vanishes. We are interested in cases where this averaged flow is globally attracted to a point. In that case, the equilibrium distribution of the solution of the ODE converges to a Dirac mass at this point. We prove an asymptotic expansion in terms of $\varepsilon$ for this convergence, with a somewhat explicit formula for the first-order term. The results are applied in three contexts: linear Markov-modulated ODEs, randomized splitting schemes, and Lotka–Volterra models in a random environment. In particular, as a corollary, we prove the existence of two matrices whose convex combinations are all stable but are such that, for a suitable jump rate, the top Lyapunov exponent of a Markov-modulated linear ODE switching between these two matrices is positive.
The basic question in perturbation analysis of Markov chains is: how do small changes in the transition kernels of Markov chains translate to changes in their stationary distributions? Many papers on the subject have shown, roughly, that the change in stationary distribution is small as long as the change in the kernel is much less than some measure of the convergence rate. This result is essentially sharp for generic Markov chains. We show that much larger errors, up to size roughly the square root of the convergence rate, are permissible for many target distributions associated with graphical models. The main motivation for this work comes from computational statistics, where there is often a tradeoff between the per-step error and per-step cost of approximate MCMC algorithms. Our results show that larger perturbations (and thus less-expensive chains) still give results with small error.
The problem of reservation in a large distributed system is analyzed via a new mathematical model. The target application is car-sharing systems. This model is motivated by the large station-based car-sharing system in France called Autolib’. This system can be described as a closed stochastic network where the nodes are the stations and the customers are the cars. The user can reserve a car and a parking space. We study the evolution of the system when the reservation of parking spaces and cars is effective for all users. The asymptotic behavior of the underlying stochastic network is given when the number N of stations and the fleet size M increase at the same rate. The analysis involves a Markov process on a state space with dimension of order $N^2$. It is quite remarkable that the state process describing the evolution of the stations, whose dimension is of order N and which is not itself Markov, converges in distribution to a non-homogeneous Markov process. We prove this mean-field convergence. We also prove, using combinatorial arguments, that the mean-field limit has a unique equilibrium measure when the time between reserving and picking up the car is sufficiently small. This result extends the case where only the parking space can be reserved.
Let $B^{H}$ be a d-dimensional fractional Brownian motion with Hurst index $H\in(0,1)$, let $f\,:\,[0,1]\longrightarrow\mathbb{R}^{d}$ be a Borel function, and let $E\subset[0,1]$ and $F\subset\mathbb{R}^{d}$ be given Borel sets. The focus of this paper is on hitting probabilities of the non-centered Gaussian process $B^{H}+f$. It aims to highlight how each of $f$, $E$ and $F$ is involved in determining the upper and lower bounds of $\mathbb{P}\{(B^H+f)(E)\cap F\neq \emptyset \}$. When $F$ is a singleton and $f$ is a general measurable drift, some new estimates are obtained for the last probability by means of a suitable Hausdorff measure and capacity of the graph $Gr_E(f)$. As an application, we deal with the issue of polarity of points for $(B^H+f)\vert_E$ (the restriction of $B^H+f$ to the subset $E\subset (0,\infty)$).
The embedding problem of Markov chains examines whether a stochastic matrix $\mathbf{P}$ can arise as the transition matrix from time 0 to time 1 of a continuous-time Markov chain. When the chain is homogeneous, it checks if $\mathbf{P}=\exp(\mathbf{Q})$ for a rate matrix $\mathbf{Q}$ with zero row sums and non-negative off-diagonal elements, called a Markov generator. It is known that a Markov generator may not always exist or be unique. This paper addresses finding $\mathbf{Q}$, assuming that the process has at most one jump per unit time interval, and focuses on the problem of aligning the conditional one-jump transition matrix from time 0 to time 1 with $\mathbf{P}$. We derive a formula for this matrix in terms of $\mathbf{Q}$ and establish that for any $\mathbf{P}$ with non-zero diagonal entries, a unique $\mathbf{Q}$, called the $\mathbb{1}$-generator, exists. We compare the $\mathbb{1}$-generator with the one-jump rate matrix from Jarrow, Lando, and Turnbull (1997), showing which is a better approximate Markov generator of $\mathbf{P}$ in some practical cases.
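As a minimal illustration of the homogeneous embedding relation $\mathbf{P}=\exp(\mathbf{Q})$, the following Python sketch (with an assumed $3\times 3$ rate matrix, not taken from the paper) forms $\exp(\mathbf{Q})$ and checks that the result is a stochastic matrix.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative sketch (assumed 3x3 example, not the paper's construction):
# a Markov generator Q has zero row sums and non-negative off-diagonal
# entries; the homogeneous embedding problem asks whether exp(Q) reproduces
# a given stochastic matrix P.
Q = np.array([[-0.4, 0.3, 0.1],
              [0.2, -0.5, 0.3],
              [0.1, 0.1, -0.2]])

P = expm(Q)                               # transition matrix from time 0 to time 1
assert np.allclose(P.sum(axis=1), 1.0)    # rows sum to one
assert np.all(P >= -1e-12)                # entries are non-negative (up to rounding)
print(np.round(P, 4))
```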