We focus on the population dynamics driven by two classes of truncated $\alpha$-stable processes with Markovian switching. Almost necessary and sufficient conditions for the ergodicity of the proposed models are provided. These results also illustrate the impact on the ergodicity and extinction conditions as the parameter $\alpha$ tends to 2.
We develop a continuous-time Markov chain (CTMC) approximation of one-dimensional diffusions with sticky boundary or interior points. Approximate solutions to the action of the Feynman–Kac operator associated with a sticky diffusion and first passage probabilities are obtained using matrix exponentials. We show how to compute matrix exponentials efficiently and prove that a carefully designed scheme achieves second-order convergence. We also propose a scheme based on CTMC approximation for the simulation of sticky diffusions, for which the Euler scheme may completely fail. The efficiency of our method and its advantages over alternative approaches are illustrated in the context of bond pricing in a sticky short-rate model for a low-interest environment and option pricing under a geometric Brownian motion price model with a sticky interior point.
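The core computation behind such CTMC approximations can be illustrated in a few lines. The sketch below is a minimal, hypothetical example (not the paper's second-order scheme): a driftless diffusion on a uniform grid with reflecting boundaries is approximated by a birth-death CTMC, and its transition probabilities over a horizon $t$ are obtained as the matrix exponential $e^{Qt}$, computed here by a plain scaling-and-squaring Taylor series.

```python
import math

def matmul(A, B):
    # Multiply two square matrices given as lists of rows.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bd_generator(n, sigma, h):
    # Generator of a birth-death CTMC approximating a driftless diffusion
    # with volatility sigma on a grid of n points with spacing h
    # (reflecting boundaries); an illustrative choice, not the paper's scheme.
    rate = sigma ** 2 / (2 * h ** 2)
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        if i > 0:
            Q[i][i - 1] = rate
        if i < n - 1:
            Q[i][i + 1] = rate
        Q[i][i] = -sum(Q[i])  # rows of a generator sum to zero
    return Q

def expm(Q, t, terms=20, squarings=8):
    # exp(Q t) via a scaled Taylor series followed by repeated squaring.
    n = len(Q)
    A = [[Q[i][j] * t / 2 ** squarings for j in range(n)] for i in range(n)]
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in P]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in matmul(term, A)]
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    for _ in range(squarings):
        P = matmul(P, P)
    return P

# Transition probability matrix over horizon t = 0.5 for a 5-state chain.
P = expm(bd_generator(5, sigma=1.0, h=0.5), t=0.5)
```

Each row of `P` is the conditional distribution of the approximating chain after time $t$; in a production setting one would use a library routine such as `scipy.linalg.expm` instead of the hand-rolled series.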
We analyze average-based distributed algorithms relying on simple pairwise random interactions among a large and unknown number of anonymous agents. This allows the characterization of global properties emerging from these local interactions. Agents start with an initial integer value, and at each interaction both participants adopt the integer part of the average of their two values as their new value. Convergence occurs when, with high probability, all the agents possess the same value, which means that they all know a property of the global system. Using a well-chosen stochastic coupling, we improve upon existing results by providing explicit and tight bounds on the convergence time. We apply these general results to both the proportion problem and the system size problem.
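The interaction rule just described can be simulated directly. The sketch below is a hypothetical illustration (the function name and the stopping rule are ours) assuming that, at each interaction, both chosen agents adopt the floor of the average of their two values:

```python
import random

def run_averaging(values, seed=0, max_steps=10**6):
    # Pairwise averaging protocol: at each interaction two distinct agents
    # are picked uniformly at random, and both adopt the integer part of
    # the average of their two values.  Runs until all values agree.
    rng = random.Random(seed)
    vals = list(values)
    steps = 0
    while max(vals) != min(vals) and steps < max_steps:
        i, j = rng.sample(range(len(vals)), 2)
        vals[i] = vals[j] = (vals[i] + vals[j]) // 2
        steps += 1
    return vals, steps

final, steps = run_averaging([5, 1, 9, 3])
```

Since the minimum is non-decreasing and the maximum non-increasing under this rule, the simulated system settles on a common value; the paper's contribution is a tight bound on how fast this happens.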
We investigate the impact of Knightian uncertainty on the optimal timing policy of an ambiguity-averse decision-maker in the case where the underlying factor dynamics follow a multidimensional Brownian motion and the exercise payoff depends on either a linear combination of the factors or the radial part of the driving factor dynamics. We present a general characterization of the value of the optimal timing policy and the worst-case measure in terms of a family of explicitly identified excessive functions generating an appropriate class of supermartingales. In line with previous findings based on linear diffusions, we find that ambiguity accelerates timing in comparison with the unambiguous setting. Somewhat surprisingly, we find that ambiguity may lead to stationarity in models which typically do not possess stationary behavior. In this way, our results indicate that ambiguity may act as a stabilizing mechanism.
For a determinantal point process (DPP) X with a kernel K whose spectrum is strictly less than one, André Goldman has established a coupling to its reduced Palm process $X^u$ at a point u with $K(u,u)>0$ so that, almost surely, $X^u$ is obtained by removing a finite number of points from X. We sharpen this result, assuming weaker conditions and establishing that $X^u$ can be obtained by removing at most one point from X, where we specify the distribution of the difference $\xi_u: = X\setminus X^u$. This is used to discuss the degree of repulsiveness in DPPs in terms of $\xi_u$, including Ginibre point processes and other specific parametric models for DPPs.
We consider a model of a stationary population with random size given by a continuous-state branching process with immigration with a quadratic branching mechanism. We give an exact elementary simulation procedure for the genealogical tree of n individuals randomly chosen among the extant population at a given time. Then we prove the convergence of the renormalized total length of this genealogical tree as n goes to infinity; see also Pfaffelhuber, Wakolbinger and Weisshaupt (2011) in the context of a constant-size population. The limit appears already in Bi and Delmas (2016) but with a different approximation of the full genealogical tree. The proof is based on the ancestral process of the extant population at a fixed time, which was defined by Aldous and Popovic (2005) in the critical case.
We apply the power-of-two-choices paradigm to a random walk on a graph: rather than moving to a uniform random neighbour at each step, a controller is allowed to choose from two independent uniform random neighbours. We prove that this allows the controller to significantly accelerate the hitting and cover times in several natural graph classes. In particular, we show that the cover time becomes linear in the number n of vertices on discrete tori and bounded degree trees, of order $\mathcal{O}(n\log \log n)$ on bounded degree expanders, and of order $\mathcal{O}(n(\log \log n)^2)$ on the Erdős–Rényi random graph in a certain sparsely connected regime. We also consider the algorithmic question of computing an optimal strategy and prove a dichotomy in efficiency between computing strategies for hitting and cover times.
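The speed-up is easy to observe empirically. The sketch below is a hypothetical experiment on the cycle $\mathbb{Z}_n$ (the simplest discrete torus), with a simple greedy controller, which prefers an unvisited neighbour, standing in for an optimal strategy:

```python
import random

def cover_time(n, use_choice, rng):
    # Cover time of a walk on the cycle Z_n.  With use_choice=True the
    # controller draws two independent uniform random neighbours and, when
    # exactly one of them is unvisited, moves there (a greedy heuristic).
    pos, visited, steps = 0, {0}, 0
    while len(visited) < n:
        a = rng.choice([(pos - 1) % n, (pos + 1) % n])
        b = rng.choice([(pos - 1) % n, (pos + 1) % n])
        if use_choice and a in visited and b not in visited:
            a = b
        pos = a
        visited.add(pos)
        steps += 1
    return steps

rng = random.Random(42)
trials = 50
avg_plain = sum(cover_time(30, False, rng) for _ in range(trials)) / trials
avg_choice = sum(cover_time(30, True, rng) for _ in range(trials)) / trials
```

On the cycle, the plain walk needs on the order of $n^2/2$ steps to cover, while the greedy two-choice walk acquires an outward drift at the frontier of the visited set and covers far faster, consistent with the linear cover time proved for discrete tori.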
We consider random walks on the group of orientation-preserving homeomorphisms of the real line ${\mathbb R}$. In particular, the fundamental question of uniqueness of an invariant measure of the generated process is raised. This problem was studied by Choquet and Deny [Sur l’équation de convolution $\mu = \mu * \sigma $. C. R. Acad. Sci. Paris 250 (1960), 799–801] in the context of random walks generated by translations of the line. Nowadays the answer is quite well understood in general settings of strongly contractive systems. Here we focus on a broader class of systems satisfying the conditions of recurrence, contraction and unbounded action. We prove that under these conditions the random process possesses a unique invariant Radon measure on ${\mathbb R}$. Our work can be viewed as following on from Babillot et al [The random difference equation $X_n=A_n X_{n-1}+B_n$ in the critical case. Ann. Probab. 25(1) (1997), 478–493] and Deroin et al [Symmetric random walk on $\mathrm {HOMEO}^{+}(\mathbb {R})$. Ann. Probab. 41(3B) (2013), 2066–2089].
We derive two-sided bounds for the Newton and Poisson kernels of the W-invariant Dunkl Laplacian in the geometric complex case when the multiplicity $k(\alpha )=1$, i.e. for flat complex symmetric spaces. For the invariant Dunkl–Poisson kernel $P^{W}(x,y)$, the estimates are
where the $\alpha $’s are the positive roots of a root system acting in $\mathbf {R}^{d}$, the $\sigma _{\alpha }$’s are the corresponding symmetries and $P^{\mathbf {R}^{d}}$ is the classical Poisson kernel in ${\mathbf {R}^{d}}$. Analogous bounds are proven for the Newton kernel when $d\ge 3$.
The same estimates are derived in the rank one direct product case $\mathbb {Z}_{2}^{N}$ and conjectured for general W-invariant Dunkl processes.
As an application, we get a two-sided bound for the Poisson and Newton kernels of the classical Dyson Brownian motion and of the Brownian motions in any Weyl chamber.
We present a polynomial-time Markov chain Monte Carlo algorithm for estimating the partition function of the antiferromagnetic Ising model on any line graph. The analysis of the algorithm exploits the ‘winding’ technology devised by McQuillan [CoRR abs/1301.2880 (2013)] and developed by Huang, Lu and Zhang [Proc. 27th Symp. on Disc. Algorithms (SODA16), 514–527]. We show that exact computation of the partition function is #P-hard, even for line graphs, indicating that an approximation algorithm is the best that can be expected. We also show that Glauber dynamics for the Ising model is rapidly mixing on line graphs, an example being the kagome lattice.
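A single step of the Glauber (heat-bath) dynamics mentioned above is straightforward to implement. The following is a minimal sketch, assuming the antiferromagnetic convention $\pi(\sigma) \propto \exp\!\big({-}\beta \sum_{uv \in E} \sigma_u \sigma_v\big)$, so that disagreeing neighbours are favoured; the triangle graph used here is purely illustrative:

```python
import math
import random

def glauber_step(sigma, adj, beta, rng):
    # One heat-bath update for the antiferromagnetic Ising measure
    # pi(sigma) proportional to exp(-beta * sum over edges of sigma_u*sigma_v):
    # pick a uniform vertex and resample its spin from the conditional law.
    v = rng.randrange(len(sigma))
    m = sum(sigma[u] for u in adj[v])            # local field at v
    p_plus = 1.0 / (1.0 + math.exp(2.0 * beta * m))
    sigma[v] = 1 if rng.random() < p_plus else -1

# Triangle (e.g. the line graph of the star K_{1,3}); illustrative only.
adj = [[1, 2], [0, 2], [0, 1]]
rng = random.Random(0)
sigma = [rng.choice([-1, 1]) for _ in adj]
for _ in range(1000):
    glauber_step(sigma, adj, 1.0, rng)
```

Rapid mixing on line graphs, as proved in the paper, means that a number of such updates polynomial in the graph size suffices for the chain to approach the Ising distribution.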
Self-exciting point processes have been proposed as models for the location of criminal events in space and time. Here we consider the case where the triggering function is isotropic and takes a non-parametric form that is determined from data. We pay special attention to normalisation issues and to the choice of spatial distance measure, thereby extending the current methodology. After validating these ideas on synthetic data, we perform inference and prediction tests on public domain burglary data from Chicago. We show that the algorithmic advances that we propose lead to improved predictive accuracy.
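As a simplified, purely temporal illustration of self-excitation (the paper's model is spatio-temporal with a non-parametric isotropic triggering function, which we replace here by a parametric exponential kernel), events can be simulated by Ogata-style thinning:

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    # Ogata-style thinning for a temporal Hawkes process with intensity
    # lambda(t) = mu + sum over past events t_i of alpha*exp(-beta*(t - t_i)).
    # Stability requires alpha/beta < 1.
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        # The intensity only decays between events, so its current value
        # is a valid dominating rate for the thinning step.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        t += rng.expovariate(lam_bar)
        if t >= T:
            return events
        lam_t = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        if rng.random() * lam_bar <= lam_t:
            events.append(t)

events = simulate_hawkes(mu=1.0, alpha=0.5, beta=2.0, T=50.0)
```

Each accepted event raises the intensity and so makes further events temporarily more likely, which is the clustering mechanism that motivates these models for crime data.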
We explore the tree limits recently defined by Elek and Tardos. In particular, we find tree limits for many classes of random trees. We give general theorems for three classes of conditional Galton–Watson trees and simply generated trees, for split trees and generalized split trees (as defined here), and for trees defined by a continuous-time branching process. These general results include, for example, random labelled trees, ordered trees, random recursive trees, preferential attachment trees, and binary search trees.
We present closed-form solutions to some discounted optimal stopping problems for the running maximum of a geometric Brownian motion with payoffs switching according to the dynamics of a continuous-time Markov chain with two states. The proof is based on the reduction of the original problems to the equivalent free-boundary problems and the solution of the latter problems by means of the smooth-fit and normal-reflection conditions. We show that the optimal stopping boundaries are determined as the maximal solutions of the associated two-dimensional systems of first-order nonlinear ordinary differential equations. The obtained results are related to the valuation of real switching lookback options with fixed and floating sunk costs in the Black–Merton–Scholes model.
In this paper, we introduce a family of processes with values on the nonnegative integers that describes the dynamics of populations where individuals are allowed to have different types of interactions. The types of interactions that we consider include pairwise interactions, such as competition, annihilation, and cooperation; and interactions among several individuals that can be viewed as catastrophes. We call such families of processes branching processes with interactions. Our aim is to study their long-term behaviour under a specific regime of the pairwise interaction parameters that we introduce as the subcritical cooperative regime. Under such a regime, we prove that a process in this class comes down from infinity and has a moment dual which turns out to be a jump-diffusion that can be thought of as the evolution of the frequency of a trait or phenotype, and whose parameters have a classical interpretation in terms of population genetics. The moment dual is an important tool for characterizing the stationary distribution of branching processes with interactions whenever such a distribution exists; it is also an interesting object in its own right.
We present Lyapunov-type conditions for non-strong ergodicity of Markov processes. Some concrete models are discussed, including diffusion processes on Riemannian manifolds and Ornstein–Uhlenbeck processes driven by symmetric $\alpha$-stable processes. In particular, we show that any process of d-dimensional Ornstein–Uhlenbeck type driven by $\alpha$-stable noise is not strongly ergodic for every $\alpha\in (0,2]$.
Comparison results for Markov processes with respect to function-class-induced (integral) stochastic orders have a long history. The most general results so far for this problem have been obtained based on the theory of evolution systems on Banach spaces. In this paper we transfer the martingale comparison method, known for the comparison of semimartingales to Markovian semimartingales, to general Markov processes. The basic step of this martingale approach is the derivation of the supermartingale property of the linking process, giving a link between the processes to be compared. This property is achieved using the characterization of Markov processes by the associated martingale problem in an essential way. As a result, the martingale comparison method gives a comparison result for Markov processes under a general alternative but related set of regularity conditions compared to the evolution system approach.
A family $\{Q_{\beta}\}_{\beta \geq 0}$ of Markov chains is said to exhibit metastable mixing with modes $S_{\beta}^{(1)},\ldots,S_{\beta}^{(k)}$ if its spectral gap (or some other mixing property) is very close to the worst conductance $\min\!\big(\Phi_{\beta}\big(S_{\beta}^{(1)}\big), \ldots, \Phi_{\beta}\big(S_{\beta}^{(k)}\big)\big)$ of its modes for all large values of $\beta$. We give simple sufficient conditions for a family of Markov chains to exhibit metastability in this sense, and verify that these conditions hold for a prototypical Metropolis–Hastings chain targeting a mixture distribution. The existing metastability literature is large, and our present work is aimed at filling the following small gap: finding sufficient conditions for metastability that are easy to verify for typical examples from statistics using well-studied methods, while at the same time giving an asymptotically exact formula for the spectral gap (rather than a bound that can be very far from sharp). Our bounds from this paper are used in a companion paper (O. Mangoubi, N. S. Pillai, and A. Smith, arXiv:1808.03230) to compare the mixing times of the Hamiltonian Monte Carlo algorithm and a random walk algorithm for multimodal target distributions.
We consider a strictly substochastic matrix or a stochastic matrix with absorbing states. By using quasi-stationary distributions we show that there is an associated canonical Markov chain that is built from the resurrected chain, the absorbing states, and the hitting times, together with a random walk on the absorbing states, which is necessary for achieving time stationarity. Based upon the 2-stringing representation of the resurrected chain, we supply a stationary representation of the killed and the absorbed chains. The entropies of these representations have a clear meaning when one identifies the probability measure of natural factors. The balance between the entropies of these representations and the entropy of the canonical chain serves to check the correctness of the whole construction.
We study an ergodic singular control problem with constraint of a regular one-dimensional linear diffusion. The constraint allows the agent to control the diffusion only at the jump times of an independent Poisson process. Under relatively weak assumptions, we characterize the optimal solution as an impulse-type control policy, where it is optimal to exert the exact amount of control needed to push the process to a unique threshold. Moreover, we discuss the connection of the present problem to ergodic singular control problems, and illustrate the results with different well-known cost and diffusion structures.
We are interested in the property of coming down from infinity of continuous-state branching processes with competition in a Lévy environment. We first study the event of extinction for such a family of processes under Grey’s condition. Moreover, if we add an integrability condition on the competition mechanism, then the process comes down from infinity regardless of the long-time behaviour of the environment.