In this paper we study the mixing time of certain adaptive Markov chain Monte Carlo (MCMC) algorithms. Under some regularity conditions, we show that the convergence rate of importance resampling MCMC algorithms, measured in terms of the total variation distance, is O(n^{-1}). By means of an example, we establish that, in general, this algorithm does not converge at a faster rate. We also study the interacting tempering algorithm, a simplified version of the equi-energy sampler, and establish that its mixing time is of order O(n^{-1/2}).
We present the first class of perfect sampling (also known as exact simulation) algorithms for the steady-state distribution of non-Markovian loss systems. We use a variation of dominated coupling from the past. We first simulate a stationary infinite server system backwards in time and analyze the running time in heavy traffic. In particular, we are able to simulate stationary renewal marked point processes in unbounded regions. We then use the infinite server system as an upper bound process to simulate the loss system. The running time analysis of our perfect sampling algorithm for loss systems is performed in the quality-driven (QD) and the quality-and-efficiency-driven regimes. In both cases, we show that our algorithm achieves subexponential complexity as both the number of servers and the arrival rate increase. Moreover, in the QD regime, our algorithm achieves a nearly optimal rate of complexity.
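The paper's algorithm rests on dominated coupling from the past. As a much simpler illustration of the coupling-from-the-past mechanism (not the paper's method for non-Markovian loss systems), the sketch below runs monotone CFTP for a reflected birth-death chain on {0, …, m}: all trajectories are sandwiched between chains started from the top and bottom states, and coalescence by time 0 yields an exact stationary draw. The chain and its parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def step(x, u, m, p=0.4):
    """Monotone update: move up if u < p, else down, clamped to {0,...,m}."""
    return min(x + 1, m) if u < p else max(x - 1, 0)

def cftp(m=10):
    """Coupling from the past for a birth-death chain on {0,...,m}:
    returns an exact draw from the stationary distribution."""
    T = 1
    us = []
    while True:
        # extend the shared randomness further into the past (doubling trick)
        us = list(rng.uniform(size=T - len(us))) + us
        lo, hi = 0, m
        for u in us:
            lo, hi = step(lo, u, m), step(hi, u, m)
        if lo == hi:          # all starting states have coalesced by time 0
            return lo
        T *= 2

samples = [cftp() for _ in range(4000)]
```

By detailed balance the stationary law here is geometric, π_k ∝ (p/(1−p))^k, which the empirical frequencies reproduce.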
The calculation of multivariate normal probabilities is of great importance in many statistical and economic applications. In this paper we propose a spherical Monte Carlo method with both theoretical analysis and numerical simulation. We start by writing the multivariate normal probability via an inner radial integral and an outer spherical integral using the spherical transformation. For the outer spherical integral, we apply an integration rule by randomly rotating a predetermined set of well-located points. To find the desired set, we derive an upper bound for the variance of the Monte Carlo estimator and propose a set which is related to the kissing number problem in sphere packings. For the inner radial integral, we employ the idea of antithetic variates and identify certain conditions so that variance reduction is guaranteed. Extensive Monte Carlo simulations on some probabilities confirm these claims.
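The outer spherical integral above can be sketched in a few lines: fix a well-spread point set on the unit sphere and average the integrand over Haar-random rotations of it. The point set below (the ± standard basis vectors, a cross-polytope) and the quadratic integrand are illustrative assumptions, not the paper's kissing-number-based construction; for this particular integrand the ± basis set happens to integrate exactly, so every rotation returns the true spherical average 1/d.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(d):
    """Draw a rotation Haar-uniformly via QR of a Gaussian matrix."""
    A = rng.standard_normal((d, d))
    Q, R = np.linalg.qr(A)
    return Q * np.sign(np.diag(R))  # sign fix needed for Haar uniformity

def spherical_mc(f, points, n_rotations=200):
    """Estimate the average of f over the unit sphere by averaging f
    over randomly rotated copies of a fixed, well-spread point set."""
    total = 0.0
    for _ in range(n_rotations):
        Q = random_rotation(points.shape[1])
        total += f(points @ Q.T).mean()
    return total / n_rotations

d = 3
pts = np.vstack([np.eye(d), -np.eye(d)])     # +/- basis vectors on the sphere
est = spherical_mc(lambda X: X[:, 0] ** 2, pts)  # true spherical average is 1/d
```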
Collocation has become a standard tool for approximation of parameterized systems in the uncertainty quantification (UQ) community. Techniques for least-squares regularization, compressive sampling recovery, and interpolatory reconstruction are becoming standard tools used in a variety of applications. Selection of a collocation mesh is frequently a challenge, but methods that construct geometrically unstructured collocation meshes have shown great potential due to attractive theoretical properties and direct, simple generation and implementation. We investigate properties of these meshes, presenting stability and accuracy results that can be used as guides for generating stochastic collocation grids in multiple dimensions.
This paper is concerned with the solution of the optimal stopping problem associated with the value of American options driven by continuous-time Markov chains. The value function of an American option in this setting is characterised as the unique solution (in a distributional sense) of a system of variational inequalities. Furthermore, with continuous and smooth fit principles not applicable in this discrete state-space setting, a novel explicit characterisation is provided of the optimal stopping boundary in terms of the generator of the underlying Markov chain. Subsequently, an algorithm is presented for the valuation of American options under Markov chain models. By application to a suitably chosen sequence of Markov chains, the algorithm provides an approximate valuation of an American option under a class of Markov models that includes diffusion models, exponential Lévy models, and stochastic differential equations driven by Lévy processes. Numerical experiments for a range of different models suggest that the approximation algorithm is flexible and accurate. A proof of convergence is also provided.
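The paper works in continuous time with the chain's generator; purely to illustrate the underlying dynamic-programming structure, the toy below solves a discrete-time optimal stopping problem on a finite-state chain by value iteration, V = max(payoff, discount · P V). The transition matrix, price grid, and payoff are all hypothetical.

```python
import numpy as np

# Toy optimal stopping on a finite-state discrete-time Markov chain
P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])              # illustrative transition matrix
strike, prices, disc = 1.0, np.array([0.8, 1.0, 1.2]), 0.99
payoff = np.maximum(strike - prices, 0.0)    # American-put-style payoff

V = payoff.copy()
for _ in range(10_000):                      # contraction: iterate to fixed point
    V_new = np.maximum(payoff, disc * P @ V)
    if np.max(np.abs(V_new - V)) < 1e-12:
        break
    V = V_new

stop = payoff >= V - 1e-12                   # optimal stopping region
```

The stopping region is exactly where the value function meets the payoff; in the continuous-time setting of the paper this boundary is instead characterised via the generator.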
We consider the stochastic Allen-Cahn equation perturbed by smooth additive Gaussian noise in a spatial domain with smooth boundary in dimension d ≤ 3, and study the semidiscretization in time of the equation by an implicit Euler method. We show that the method converges pathwise with a rate O(Δt^γ) for any γ < ½. We also prove that the scheme converges uniformly in the strong L^p-sense but with no rate given.
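The paper analyses a fully implicit Euler step; to avoid a nonlinear solve, the sketch below uses the common linearly implicit variant (Laplacian implicit, the cubic nonlinearity u − u³ explicit) on a 1-D finite-difference grid with a single smooth noise mode. Grid size, step size, and noise model are illustrative assumptions, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D grid on (0, 1) with Dirichlet boundary conditions
m = 50                      # interior grid points (illustrative)
h = 1.0 / (m + 1)
x = np.linspace(h, 1 - h, m)
L = (np.diag(-2 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
     + np.diag(np.ones(m - 1), -1)) / h**2   # discrete Laplacian

dt, n_steps, sigma = 1e-3, 500, 0.1
A = np.eye(m) - dt * L      # backward-Euler operator for the linear part

u = np.sin(np.pi * x)       # initial condition
for _ in range(n_steps):
    # smooth additive noise: one random global mode (illustrative)
    dW = sigma * np.sqrt(dt) * rng.standard_normal() * np.sin(np.pi * x)
    u = np.linalg.solve(A, u + dt * (u - u**3) + dW)
```

Treating the stiff Laplacian implicitly keeps the scheme stable even though dt · ‖L‖ is large here.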
Consider the problem of drawing random variates (X1, …, Xn) from a distribution where the marginal of each Xi is specified, as well as the correlation between every pair Xi and Xj. For given marginals, the Fréchet-Hoeffding bounds put a lower and upper bound on the correlation between Xi and Xj. Any achievable correlation between Xi and Xj is a convex combination of these bounds. We call the value λ(Xi, Xj) ∈ [0, 1] of this convex combination the convexity parameter of (Xi, Xj), with λ(Xi, Xj) = 1 corresponding to the upper bound and maximal correlation. For given marginal distribution functions F1, …, Fn of (X1, …, Xn), we show that λ(Xi, Xj) = λij if and only if there exist symmetric Bernoulli random variables (B1, …, Bn) (that is, {0, 1} random variables with mean ½) such that λ(Bi, Bj) = λij. In addition, we completely characterize the set of convexity parameters for symmetric Bernoulli marginals in two, three, and four dimensions.
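A pair with a prescribed convexity parameter λ can be drawn by mixing the two Fréchet-Hoeffding extremes: with probability λ take the comonotone pair (F1⁻¹(U), F2⁻¹(U)), otherwise the antimonotone pair (F1⁻¹(U), F2⁻¹(1 − U)). Since correlation is linear in the joint law once the marginals are fixed, the resulting correlation is λ·ρmax + (1 − λ)·ρmin. The sketch below is an assumed construction illustrating this, using the symmetric Bernoulli marginals from the abstract (for which ρmax = 1 and ρmin = −1, so λ = 0.25 gives correlation 2λ − 1 = −0.5).

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_with_convexity(inv_f1, inv_f2, lam, n):
    """Draw (X, Y) with marginals F1, F2 whose correlation is the convex
    combination lam * rho_max + (1 - lam) * rho_min of the
    Frechet-Hoeffding extremes (mixture of comonotone/antimonotone pairs)."""
    u = rng.uniform(size=n)
    co = rng.uniform(size=n) < lam           # which pairs are comonotone
    x = inv_f1(u)
    y = np.where(co, inv_f2(u), inv_f2(1 - u))
    return x, y

# Symmetric Bernoulli marginals: the inverse CDF is the indicator {u > 1/2}
inv_b = lambda u: (u > 0.5).astype(float)
x, y = sample_with_convexity(inv_b, inv_b, 0.25, 200_000)
corr = np.corrcoef(x, y)[0, 1]               # close to 2 * 0.25 - 1 = -0.5
```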
The multi-level Monte Carlo method proposed by Giles (2008) approximates the expectation of some functionals applied to a stochastic process with optimal order of convergence for the mean-square error. In this paper a modified multi-level Monte Carlo estimator is proposed with significantly reduced computational costs. As the main result, it is proved that the modified estimator reduces the computational costs asymptotically by a factor (p/α)^2 if weak approximation methods of orders α and p are applied in the case of computational costs growing with the same order as the variances decay.
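The baseline multi-level construction of Giles (2008), which the paper modifies, telescopes E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}] over coupled coarse/fine discretizations. The sketch below (standard MLMC, not the paper's modified estimator) estimates E[S_T] for a geometric Brownian motion with Euler paths, coupling each coarse step to the sum of the fine Brownian increments; model parameters and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# GBM dS = mu*S dt + sig*S dW; E[S_T] = S0 * exp(mu*T)
S0, mu, sig, T = 1.0, 0.05, 0.2, 1.0

def level_estimator(level, n_paths, M=2):
    """One MLMC level: mean of P_fine - P_coarse over coupled Euler paths."""
    n_fine = M ** level
    dt_f = T / n_fine
    diffs = np.zeros(n_paths)
    for p in range(n_paths):
        dW = np.sqrt(dt_f) * rng.standard_normal(n_fine)
        sf = S0
        for w in dW:                        # fine Euler path
            sf += mu * sf * dt_f + sig * sf * w
        if level == 0:
            diffs[p] = sf
        else:
            sc, dt_c = S0, M * dt_f
            for k in range(0, n_fine, M):   # coarse path reuses summed increments
                sc += mu * sc * dt_c + sig * sc * dW[k:k + M].sum()
            diffs[p] = sf - sc
    return diffs.mean()

# Telescoping sum; more paths on cheap coarse levels, fewer on fine levels
estimate = sum(level_estimator(l, n_paths=20_000 // (2 ** l) + 100) for l in range(5))
```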
In this paper we derive the asymptotic error distributions of the Euler scheme for a stochastic differential equation driven by Itô semimartingales. Jacod (2004) studied this problem for stochastic differential equations driven by pure jump Lévy processes and obtained quite sharp results. We extend his results to a more general pure jump Itô semimartingale.
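The Euler discretization error for a jump-driven equation is easy to exhibit numerically. The toy below (a compound Poisson driver, far simpler than the general Itô semimartingales of the paper) solves dX = X_{t−} dZ, whose exact solution is X_T = ∏(1 + jump); the Euler scheme lumps all jumps in a grid cell into one increment, so its error comes from cells containing two or more jumps and shrinks as the grid refines. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def euler_error(n_steps, n_reps=3000, lam=5.0, T=1.0):
    """Mean |Euler - exact| at time T for dX = X_{t-} dZ_t with Z a
    compound Poisson process (a simple pure-jump driver)."""
    errs = np.zeros(n_reps)
    for r in range(n_reps):
        n_jumps = rng.poisson(lam * T)
        times = rng.uniform(0, T, n_jumps)
        jumps = rng.uniform(0.1, 0.5, n_jumps)
        exact = np.prod(1 + jumps)                 # X_T = prod(1 + jump)
        # Euler on a fixed grid: one factor (1 + increment of Z) per cell
        bins = np.floor(times / T * n_steps).astype(int)
        dz = np.bincount(bins, weights=jumps, minlength=n_steps)
        errs[r] = abs(np.prod(1 + dz) - exact)
    return errs.mean()

coarse, fine = euler_error(4), euler_error(64)     # error shrinks with refinement
```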
In this paper we apply the recently established Wiener-Hopf Monte Carlo simulation technique for Lévy processes from Kuznetsov et al. (2011) to path functionals; in particular, first passage times, overshoots, undershoots, and the last maximum before the passage time. Such functionals have many applications, for instance, in finance (the pricing of exotic options in a Lévy model) and insurance (ruin time, debt at ruin, and related quantities for a Lévy insurance risk process). The technique works for any Lévy process whose running infimum and supremum evaluated at an independent exponential time can be sampled from. This includes classic examples such as stable processes, subclasses of spectrally one-sided Lévy processes, and large new families such as meromorphic Lévy processes. Finally, we present some examples. A particular aspect that is illustrated is that the Wiener-Hopf Monte Carlo simulation technique (provided that it applies) performs much better at approximating first passage times than a ‘plain’ Monte Carlo simulation technique based on sampling increments of the Lévy process.
For a collection of objects such as socks, which can be matched according to a characteristic such as color, we study the innocent phrase ‘the distribution of the color of a matching pair’ by looking at two methods for selecting socks. One method is memoryless and effectively samples socks with replacement, while the other samples socks sequentially, with memory, until the same color has been seen twice. We prove that these two methods yield the same distribution on colors if and only if the initial distribution of colors is a uniform distribution. We conjecture a nontrivial maximum value for the total variation distance of these distributions in all other cases.
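Under one plausible reading of the two selection schemes (an assumption on our part), the memoryless method draws colors i.i.d. with replacement until some color repeats, while the sequential method draws individual socks without replacement until a color is seen twice. The simulation sketch below implements both and, for a uniform color distribution, recovers the abstract's claim that the two matching-pair color distributions agree (here both are uniform by symmetry).

```python
import numpy as np

rng = np.random.default_rng(5)

def pair_color_with_replacement(p):
    """Draw colors i.i.d. from p until one is seen twice; return it."""
    seen = set()
    while True:
        c = rng.choice(len(p), p=p)
        if c in seen:
            return c
        seen.add(c)

def pair_color_sequential(counts):
    """Draw individual socks without replacement until a color repeats."""
    drawer = np.repeat(np.arange(len(counts)), counts)
    rng.shuffle(drawer)
    seen = set()
    for c in drawer:
        if c in seen:
            return c
        seen.add(c)

p = np.ones(3) / 3              # uniform color distribution, 3 colors
reps = 20_000
a = np.bincount([pair_color_with_replacement(p) for _ in range(reps)],
                minlength=3) / reps
b = np.bincount([pair_color_sequential([4, 4, 4]) for _ in range(reps)],
                minlength=3) / reps
```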
A lumping of a Markov chain is a coordinatewise projection of the chain. We characterise the entropy rate preservation of a lumping of an aperiodic and irreducible Markov chain on a finite state space by the random growth rate of the cardinality of the realisable preimage of a finite-length trajectory of the lumped chain and by the information needed to reconstruct original trajectories from their lumped images. Both are purely combinatorial criteria, depending only on the transition graph of the Markov chain and the lumping function. A lumping is strongly k-lumpable if and only if the lumped process is a kth-order Markov chain for each starting distribution of the original Markov chain. We characterise strong k-lumpability via tightness of stationary entropic bounds. In the sparse setting, we give sufficient conditions on the lumping to both preserve the entropy rate and be strongly k-lumpable.
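For the k = 1 case, strong lumpability has a classical matrix test (Kemeny-Snell): within each block of the lumping, every row must assign the same total probability to each block. The sketch below checks this condition for an illustrative 4-state chain lumped into two blocks; the paper's entropic criteria generalize this to strong k-lumpability.

```python
import numpy as np

# A 4-state chain lumped by f: {0,1} -> a, {2,3} -> b (illustrative numbers)
P = np.array([[0.1, 0.3, 0.2, 0.4],
              [0.2, 0.2, 0.5, 0.1],
              [0.4, 0.1, 0.3, 0.2],
              [0.3, 0.2, 0.1, 0.4]])
blocks = [[0, 1], [2, 3]]

def strongly_lumpable(P, blocks):
    """Kemeny-Snell test for (1-)lumpability: within each block, every row
    must give the same total probability to each block."""
    for B in blocks:
        for C in blocks:
            row_sums = P[np.ix_(B, C)].sum(axis=1)
            if not np.allclose(row_sums, row_sums[0]):
                return False
    return True

ok = strongly_lumpable(P, blocks)   # True for this P
```

Perturbing a single row so its block sums differ from its block-mates' breaks the condition, and the test returns False.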
In this paper we establish the theory of weak convergence (toward a normal distribution) for both single-chain and population stochastic approximation Markov chain Monte Carlo (MCMC) algorithms (SAMCMC algorithms). Based on the theory, we give an explicit ratio of convergence rates for the population SAMCMC algorithm and the single-chain SAMCMC algorithm. Our results provide a theoretical guarantee that the population SAMCMC algorithms are asymptotically more efficient than the single-chain SAMCMC algorithms when the gain factor sequence decreases more slowly than O(1 / t), where t indexes the number of iterations. This is of interest for practical applications.
We introduce a new class of Monte Carlo methods, which we call exact estimation algorithms. Such algorithms provide unbiased estimators for equilibrium expectations associated with real-valued functionals defined on a Markov chain. We provide easily implemented algorithms for the class of positive Harris recurrent Markov chains, and for chains that are contracting on average. We further argue that exact estimation in the Markov chain setting provides a significant theoretical relaxation relative to exact simulation methods.
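The paper's constructions are its own, but the flavour of unbiased equilibrium estimation for a contracting chain can be conveyed by a well-known related device: a randomized telescoping sum over a coupled lagged copy of the chain (a Rhee-Glynn-style estimator). The AR(1) chain, the geometric truncation, and all parameters below are assumptions for illustration; the estimator is exactly unbiased for E[f(X_∞)].

```python
import numpy as np

rng = np.random.default_rng(8)

def unbiased_stationary(f, a=0.5, q=0.5, x0=0.0):
    """Unbiased estimator of E[f(X_inf)] for the contracting AR(1) chain
    X_{k+1} = a X_k + eps, via a randomized telescoping sum over a coupled
    copy Y that lags one step but shares innovations."""
    N = rng.geometric(1 - q) - 1        # so P(N >= k) = q**k
    x = y = x0
    z = f(x0)                           # k = 0 term, P(N >= 0) = 1
    for k in range(1, N + 1):
        eps = rng.standard_normal()
        x = a * x + eps                 # X_k
        if k >= 2:
            y = a * y + eps             # Y_{k-1}: same noise, one step behind
        z += (f(x) - f(y)) / q ** k     # Delta_k, importance-weighted
    return z

# Stationary law is N(0, 1/(1-a^2)), so E[X_inf^2] = 4/3 for a = 0.5
est = np.mean([unbiased_stationary(lambda v: v * v) for _ in range(60_000)])
```

Because the coupled copies contract together at geometric rate a, the telescoping differences decay fast enough for the geometric truncation to leave a finite-variance, unbiased estimator.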
This short note investigates convergence of adaptive Markov chain Monte Carlo algorithms, i.e. algorithms which modify the Markov chain update probabilities on the fly. We focus on the containment condition introduced by Roberts and Rosenthal (2007). We show that if the containment condition is not satisfied, then the algorithm will perform very poorly. Specifically, with positive probability, the adaptive algorithm will be asymptotically less efficient than any nonadaptive ergodic MCMC algorithm. We call such algorithms AdapFail, and conclude that they should not be used.
We derive the explicit formula for the joint Laplace transform of the Wishart process and its time integral, which extends the original approach of Bru (1991). We compare our methodology with the alternative results given by the variation-of-constants method, the linearization of the matrix Riccati ordinary differential equation, and the Runge-Kutta algorithm. The new formula turns out to be fast and accurate.
In this paper we discuss an exponential integrator scheme, based on spatial discretization and time discretization, for a class of stochastic partial differential equations. We show that the scheme has a unique stationary distribution whenever the step size is sufficiently small, and that the weak limit of the stationary distribution of the scheme as the step size tends to 0 is in fact the stationary distribution of the corresponding stochastic partial differential equations.
We consider Markov chain Monte Carlo algorithms which combine Gibbs updates with Metropolis-Hastings updates, resulting in a conditional Metropolis-Hastings sampler (CMH sampler). We develop conditions under which the CMH sampler will be geometrically or uniformly ergodic. We illustrate our results by analysing a CMH sampler used for drawing Bayesian inferences about the entire sample path of a diffusion process, based only upon discrete observations.
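The structure of such a conditional Metropolis-Hastings sampler is easy to sketch on a toy target (the paper's application is to diffusion sample paths; the bivariate normal below, its correlation, and the step size are illustrative assumptions): one coordinate gets an exact Gibbs draw from its full conditional, the other a random-walk MH update targeting its full conditional.

```python
import numpy as np

rng = np.random.default_rng(6)

rho = 0.5
def cmh_sampler(n, step=1.0):
    """Conditional Metropolis-Hastings sketch: exact Gibbs update for x,
    then a random-walk MH update targeting the conditional of y given x.
    Target: bivariate normal, unit variances, correlation rho."""
    x, y = 0.0, 0.0
    out = np.zeros((n, 2))
    s2 = 1 - rho**2
    for i in range(n):
        # Gibbs step: x | y ~ N(rho*y, 1 - rho^2)
        x = rho * y + np.sqrt(s2) * rng.standard_normal()
        # MH step on y, targeting y | x ~ N(rho*x, 1 - rho^2)
        prop = y + step * rng.standard_normal()
        log_ratio = ((y - rho * x) ** 2 - (prop - rho * x) ** 2) / (2 * s2)
        if np.log(rng.uniform()) < log_ratio:
            y = prop
        out[i] = x, y
    return out

chain = cmh_sampler(50_000)
```

Since the Gibbs step and the MH step each leave the joint target invariant, so does their composition; the empirical moments of the chain match the bivariate normal target.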
In this paper a method based on a Markov chain Monte Carlo (MCMC) algorithm is proposed to compute the probability of a rare event. The conditional distribution of the underlying process given that the rare event occurs has the probability of the rare event as its normalizing constant. Using the MCMC methodology, a Markov chain is simulated, with the aforementioned conditional distribution as its invariant distribution, and information about the normalizing constant is extracted from its trajectory. The algorithm is described in full generality and applied to the problem of computing the probability that a heavy-tailed random walk exceeds a high threshold. An unbiased estimator of the reciprocal probability is constructed whose normalized variance vanishes asymptotically. The algorithm is extended to random sums and its performance is illustrated numerically and compared to existing importance sampling algorithms.
Exact simulation approaches for a class of diffusion bridges have recently been proposed based on rejection sampling techniques. The existing rejection sampling methods may not be practical owing to small acceptance probabilities. In this paper we propose an adaptive approach that improves the existing methods significantly under certain scenarios. The idea of the new method is based on a layered process, which can be simulated from a layered Brownian motion with reweighted layer probabilities. We will show that the new exact simulation method is more efficient than existing methods theoretically and via simulation.