Consider the problem of drawing random variates (X1, …, Xn) from a distribution where the marginal of each Xi is specified, as well as the correlation between every pair Xi and Xj. For given marginals, the Fréchet-Hoeffding bounds put a lower and upper bound on the correlation between Xi and Xj. Any achievable correlation between Xi and Xj
is a convex combination of these bounds. We call the value λ(Xi, Xj) ∈ [0, 1] of this convex combination the convexity parameter of (Xi, Xj), with λ(Xi, Xj) = 1 corresponding to the upper bound and maximal correlation. For given marginal distribution functions F1, …, Fn of (X1, …, Xn), we show that λ(Xi, Xj) = λij
if and only if there exist symmetric Bernoulli random variables (B1, …, Bn) (that is {0, 1} random variables with mean ½) such that λ(Bi, Bj) = λij. In addition, we characterize completely the set of convexity parameters for symmetric Bernoulli marginals in two, three, and four dimensions.
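For intuition, the two-dimensional Fréchet-Hoeffding bounds and the resulting correlation decomposition can be written as follows (a standard formulation; ρ_min and ρ_max denote the correlations attained under the countermonotonic and comonotonic couplings, notation not used verbatim in the abstract):

```latex
\max\{F_i(x) + F_j(y) - 1,\, 0\} \;\le\; H_{ij}(x, y) \;\le\; \min\{F_i(x),\, F_j(y)\},
\qquad
\rho(X_i, X_j) \;=\; \lambda(X_i, X_j)\,\rho_{\max} \;+\; \bigl(1 - \lambda(X_i, X_j)\bigr)\,\rho_{\min}.
```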
The multi-level Monte Carlo method proposed by Giles (2008) approximates the expectation of some functionals applied to a stochastic process with the optimal order of convergence for the mean-square error. In this paper a modified multi-level Monte Carlo estimator is proposed with significantly reduced computational costs. As the main result, it is proved that the modified estimator reduces the computational costs asymptotically by a factor (p/α)² if weak approximation methods of orders α and p are applied and the computational costs grow at the same order as the variances decay.
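For reference, a minimal sketch of the plain Giles (2008) telescoping estimator, E[P_L] = E[P_0] + Σ_{l=1}^{L} E[P_l − P_{l−1}], with fine and coarse payoffs at each level coupled through shared Brownian increments. The geometric-Brownian-motion model, the call-style payoff, and all parameters are illustrative assumptions, not taken from the paper; in practice the sample size would also vary across levels.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, T, x0 = 0.05, 0.2, 1.0, 1.0

def euler_terminal(n_paths, n_steps, dW=None):
    """Terminal value of an Euler scheme for dX = mu*X dt + sigma*X dB."""
    dt = T / n_steps
    if dW is None:
        dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    x = np.full(n_paths, x0)
    for k in range(n_steps):
        x = x + mu * x * dt + sigma * x * dW[:, k]
    return x

def payoff(x):
    return np.maximum(x - 1.0, 0.0)      # illustrative call-style payoff

def mlmc_estimate(L, n_samples):
    est = payoff(euler_terminal(n_samples, 1)).mean()   # coarsest level
    for l in range(1, L + 1):
        nf = 2 ** l
        dW_f = rng.normal(0.0, np.sqrt(T / nf), size=(n_samples, nf))
        dW_c = dW_f[:, 0::2] + dW_f[:, 1::2]            # coupled coarse increments
        fine = payoff(euler_terminal(n_samples, nf, dW_f))
        coarse = payoff(euler_terminal(n_samples, nf // 2, dW_c))
        est += (fine - coarse).mean()                   # level-l correction
    return est

print(mlmc_estimate(L=5, n_samples=20_000))
```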
In this paper we derive the asymptotic error distributions of the Euler scheme for stochastic differential equations driven by Itô semimartingales. Jacod (2004) studied this problem for stochastic differential equations driven by pure jump Lévy processes and obtained quite sharp results. We extend his results to more general pure jump Itô semimartingales.
In this paper we apply the recently established Wiener-Hopf Monte Carlo simulation technique for Lévy processes from Kuznetsov et al. (2011) to path functionals; in particular, first passage times, overshoots, undershoots, and the last maximum before the passage time. Such functionals have many applications, for instance, in finance (the pricing of exotic options in a Lévy model) and insurance (ruin time, debt at ruin, and related quantities for a Lévy insurance risk process). The technique works for any Lévy process for which the running infimum and supremum, evaluated at an independent exponential time, can be sampled. This includes classic examples such as stable processes, subclasses of spectrally one-sided Lévy processes, and large new families such as meromorphic Lévy processes. Finally, we present some examples, illustrating in particular that the Wiener-Hopf Monte Carlo simulation technique (where it applies) approximates first passage times far more accurately than a ‘plain’ Monte Carlo simulation technique based on sampling increments of the Lévy process.
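A minimal sketch of the Wiener-Hopf Monte Carlo idea in the simplest tractable case, Brownian motion with drift μ and unit volatility (rather than the richer Lévy classes treated in the paper). At an independent Exp(q) time the running supremum and infimum are independent, with S ~ Exp(√(μ²+2q) − μ) and −I ~ Exp(√(μ²+2q) + μ); stacking n such pieces gives the position and running maximum at an Erlang(n, n/t) time concentrating around t. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
mu, t, n, reps, u = 0.5, 1.0, 200, 20_000, 1.0
q = n / t
lam_up = np.sqrt(mu**2 + 2*q) - mu       # rate of the supremum factor
lam_dn = np.sqrt(mu**2 + 2*q) + mu       # rate of the negated infimum factor

S = rng.exponential(1/lam_up, size=(reps, n))
I = -rng.exponential(1/lam_dn, size=(reps, n))
pos = np.cumsum(S + I, axis=1)           # position after each piece
# running maximum: best supremum reached inside any piece so far
cand = np.concatenate([S[:, :1], pos[:, :-1] + S[:, 1:]], axis=1)
run_max = cand.max(axis=1)

print("P(sup_{s<=t} X_s > u) approx:", (run_max > u).mean())
```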
For a collection of objects such as socks, which can be matched according to a characteristic such as color, we study the innocent phrase ‘the distribution of the color of a matching pair’ by looking at two methods for selecting socks. One method is memoryless and effectively samples socks with replacement, while the other samples socks sequentially, with memory, until the same color has been seen twice. We prove that these two methods yield the same distribution on colors if and only if the initial distribution of colors is uniform. We conjecture a nontrivial maximum value for the total variation distance between these distributions in all other cases.
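A minimal simulation sketch, under one plausible reading of the two schemes: colors are drawn i.i.d. from a distribution p. With a non-uniform p the two empirical laws should visibly differ; the color probabilities below are illustrative, not from the paper.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
p = np.array([0.5, 0.3, 0.2])       # non-uniform, so the two laws should differ
colors = np.arange(len(p))

def match_with_replacement():
    """Memoryless: draw independent pairs until the two socks match."""
    while True:
        a, b = rng.choice(colors, size=2, p=p)
        if a == b:
            return a

def match_sequential():
    """With memory: draw socks until some color is seen a second time."""
    seen = set()
    while True:
        c = rng.choice(colors, p=p)
        if c in seen:
            return c
        seen.add(c)

n = 100_000
for f in (match_with_replacement, match_sequential):
    counts = Counter(f() for _ in range(n))
    print(f.__name__, [counts[c] / n for c in colors])
```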
A lumping of a Markov chain is a coordinatewise projection of the chain. We characterise the entropy rate preservation of a lumping of an aperiodic and irreducible Markov chain on a finite state space by the random growth rate of the cardinality of the realisable preimage of a finite-length trajectory of the lumped chain and by the information needed to reconstruct original trajectories from their lumped images. Both are purely combinatorial criteria, depending only on the transition graph of the Markov chain and the lumping function. A lumping is strongly k-lumpable if and only if the lumped process is a kth-order Markov chain for each starting distribution of the original Markov chain. We characterise strong k-lumpability via tightness of stationary entropic bounds. In the sparse setting, we give sufficient conditions on the lumping to both preserve the entropy rate and be strongly k-lumpable.
In this paper we establish the theory of weak convergence (toward a normal distribution) for both single-chain and population stochastic approximation Markov chain Monte Carlo (MCMC) algorithms (SAMCMC algorithms). Based on the theory, we give an explicit ratio of convergence rates for the population SAMCMC algorithm and the single-chain SAMCMC algorithm. Our results provide a theoretical guarantee that the population SAMCMC algorithms are asymptotically more efficient than the single-chain SAMCMC algorithms when the gain factor sequence decreases more slowly than O(1 / t), where t indexes the number of iterations. This is of interest for practical applications.
We introduce a new class of Monte Carlo methods, which we call exact estimation algorithms. Such algorithms provide unbiased estimators for equilibrium expectations associated with real-valued functionals defined on a Markov chain. We provide easily implemented algorithms for the class of positive Harris recurrent Markov chains, and for chains that are contracting on average. We further argue that exact estimation in the Markov chain setting provides a significant theoretical relaxation relative to exact simulation methods.
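One standard route to such unbiasedness in this literature (in the spirit of randomized truncation, as in Rhee and Glynn, and not necessarily the paper's own construction) is: if increments Δk, typically built from coupled chains so that the sum converges fast enough, satisfy Σk E[Δk] = π(f), and N is an independent integer-valued random time, then

```latex
\mathbb{E}\left[\sum_{k=0}^{N} \frac{\Delta_k}{\mathbb{P}(N \ge k)}\right]
\;=\; \sum_{k=0}^{\infty} \mathbb{E}[\Delta_k] \;=\; \pi(f),
```

provided the sum converges absolutely; the coupling is what keeps the variance and the expected computational cost finite.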
This short note investigates convergence of adaptive Markov chain Monte Carlo algorithms, i.e. algorithms which modify the Markov chain update probabilities on the fly. We focus on the containment condition introduced by Roberts and Rosenthal (2007). We show that if the containment condition is not satisfied, then the algorithm will perform very poorly. Specifically, with positive probability, the adaptive algorithm will be asymptotically less efficient than any nonadaptive ergodic MCMC algorithm. We call such algorithms AdapFail, and conclude that they should not be used.
We derive the explicit formula for the joint Laplace transform of the Wishart process and its time integral, which extends the original approach of Bru (1991). We compare our methodology with the alternative results given by the variation-of-constants method, the linearization of the matrix Riccati ordinary differential equation, and the Runge-Kutta algorithm. The new formula turns out to be fast and accurate.
In this paper we discuss an exponential integrator scheme, based on spatial discretization and time discretization, for a class of stochastic partial differential equations. We show that the scheme has a unique stationary distribution whenever the step size is sufficiently small, and that the weak limit of the stationary distribution of the scheme as the step size tends to 0 is in fact the stationary distribution of the corresponding stochastic partial differential equations.
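After spatial discretization an SPDE of this kind becomes a (large) system of semilinear SDEs, so a scalar caricature shows the shape of the scheme. Below is a minimal sketch of one common exponential (Euler-type) integrator for dX = (−λX + f(X)) dt + σ dW, treating the linear part exactly; this is an illustrative variant, not necessarily the scheme analysed in the paper, and the nonlinearity and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
lam, sigma, h, T = 1.0, 0.5, 0.01, 200_000
f = lambda x: np.sin(x)                   # illustrative nonlinearity

e = np.exp(-lam * h)
noise_sd = sigma * np.sqrt((1 - e**2) / (2 * lam))  # exact OU noise std dev

x, acc = 0.0, 0.0
for _ in range(T):
    # exponential Euler step: exact linear flow + phi_1-weighted drift + noise
    x = e * x + (1 - e) / lam * f(x) + noise_sd * rng.normal()
    acc += x
print("long-run mean (stationary-distribution estimate):", acc / T)
```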
We consider Markov chain Monte Carlo algorithms which combine Gibbs updates with Metropolis-Hastings updates, resulting in a conditional Metropolis-Hastings sampler (CMH sampler). We develop conditions under which the CMH sampler will be geometrically or uniformly ergodic. We illustrate our results by analysing a CMH sampler used for drawing Bayesian inferences about the entire sample path of a diffusion process, based only upon discrete observations.
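A minimal sketch of a conditional Metropolis-Hastings sampler in the sense described above: one component gets a Gibbs update (an exact draw from its full conditional), the other a random-walk MH update targeting its full conditional. The bivariate normal target with correlation ρ is an illustrative choice, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
rho, step, T = 0.8, 1.0, 50_000

def log_cond2(y, x):
    """Log full conditional of component 2 given component 1 (up to a constant)."""
    return -0.5 * (y - rho * x) ** 2 / (1 - rho**2)

x, y = 0.0, 0.0
samples = np.empty((T, 2))
for t in range(T):
    # Gibbs update for x: x | y ~ N(rho * y, 1 - rho^2)
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    # random-walk MH update for y, targeting y | x
    y_prop = y + step * rng.normal()
    if np.log(rng.random()) < log_cond2(y_prop, x) - log_cond2(y, x):
        y = y_prop
    samples[t] = x, y

print("empirical correlation:", np.corrcoef(samples.T)[0, 1])
```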
In this paper a method based on a Markov chain Monte Carlo (MCMC) algorithm is proposed to compute the probability of a rare event. The conditional distribution of the underlying process given that the rare event occurs has the probability of the rare event as its normalizing constant. Using the MCMC methodology, a Markov chain is simulated, with the aforementioned conditional distribution as its invariant distribution, and information about the normalizing constant is extracted from its trajectory. The algorithm is described in full generality and applied to the problem of computing the probability that a heavy-tailed random walk exceeds a high threshold. An unbiased estimator of the reciprocal probability is constructed whose normalized variance vanishes asymptotically. The algorithm is extended to random sums and its performance is illustrated numerically and compared to existing importance sampling algorithms.
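A minimal sketch of the idea (not the paper's exact algorithm or estimator): run a Markov chain whose invariant law is the conditional distribution of a heavy-tailed random walk given {Sn > u}, then read off the normalizing constant p = P(Sn > u) from the visit frequency of a sub-event whose unconditional probability is known in closed form. The Pareto model and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, u, alpha = 5, 100.0, 1.5          # steps, threshold, Pareto tail index

def pareto_sample(size=None):
    """Pareto(alpha) on (0, inf): P(X > x) = (1 + x)**(-alpha)."""
    return (1.0 - rng.random(size)) ** (-1.0 / alpha) - 1.0

x = pareto_sample(n)
x[0] = u + pareto_sample()           # initialise inside the rare event

T, hits = 200_000, 0
for _ in range(T):
    i = rng.integers(n)
    y = x.copy()
    y[i] = pareto_sample()           # propose from the unconditional marginal
    if y.sum() > u:                  # Metropolis acceptance ratio is 1 on the event
        x = y
    hits += x[0] > u                 # sub-event B = {X_1 > u}, a subset of {S_n > u}

pi_B = hits / T                      # chain's estimate of P(B | S_n > u)
F_B = (1.0 + u) ** (-alpha)          # unconditional P(B) = P(X_1 > u)
print("estimated p =", F_B / pi_B)   # since P(B | A) = P(B) / P(A)
```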
Exact simulation approaches for a class of diffusion bridges have recently been proposed based on rejection sampling techniques. The existing rejection sampling methods may not be practical owing to small acceptance probabilities. In this paper we propose an adaptive approach that improves on the existing methods significantly in certain scenarios. The idea of the new method is based on a layered process, which can be simulated from a layered Brownian motion with reweighted layer probabilities. We show that the new exact simulation method is more efficient than existing methods, both theoretically and through simulation.
In this paper we develop a collection of results associated to the analysis of the sequential Monte Carlo (SMC) samplers algorithm, in the context of high-dimensional independent and identically distributed target probabilities. The SMC samplers algorithm can be designed to sample from a single probability distribution, using Monte Carlo to approximate expectations with respect to this law. Given a target density in d dimensions our results are concerned with d → ∞, while the number of Monte Carlo samples, N, remains fixed. We deduce an explicit bound on the Monte Carlo error for estimates derived using the SMC sampler and the exact asymptotic relative L2-error of the estimate of the normalising constant associated to the target. We also establish marginal propagation of chaos properties of the algorithm. These results are deduced when the cost of the algorithm is O(Nd²).
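A minimal sketch of an SMC sampler targeting a single d-dimensional i.i.d. product density, tempering from an N(0, 2I) reference to an N(0, I) target with multinomial resampling and one Metropolis move per step; an illustrative setup in the spirit of the setting above, not the paper's exact algorithm. The exact log normalising-constant ratio here is −(d/2) log 2, which the estimate can be checked against.

```python
import numpy as np

rng = np.random.default_rng(8)
d, N = 50, 1_000
betas = np.linspace(0.0, 1.0, 21)                    # tempering schedule

def log_ref(x): return -0.25 * (x**2).sum(axis=1)    # N(0, 2I), unnormalised
def log_tgt(x): return -0.5 * (x**2).sum(axis=1)     # N(0, I), unnormalised

x = rng.normal(0.0, np.sqrt(2.0), size=(N, d))
log_Z = 0.0
for b0, b1 in zip(betas[:-1], betas[1:]):
    # incremental importance weights between consecutive tempered targets
    lw = (b1 - b0) * (log_tgt(x) - log_ref(x))
    log_Z += lw.max() + np.log(np.mean(np.exp(lw - lw.max())))
    w = np.exp(lw - lw.max())
    w /= w.sum()
    x = x[rng.choice(N, size=N, p=w)]                # multinomial resampling
    # one random-walk Metropolis move per particle, targeting pi_{b1}
    prop = x + 0.5 * rng.normal(size=(N, d))
    log_pi = lambda y: (1 - b1) * log_ref(y) + b1 * log_tgt(y)
    accept = np.log(rng.random(N)) < log_pi(prop) - log_pi(x)
    x[accept] = prop[accept]

print("log Z estimate:", log_Z, "exact:", -0.5 * d * np.log(2.0))
```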
We consider the problem of efficient simulation estimation of the density function at the tails, and the probability of large deviations for a sum of independent, identically distributed (i.i.d.), light-tailed, and nonlattice random vectors. The latter problem, besides being of independent interest, also forms a building block for more complex rare event problems that arise, for instance, in queueing and financial credit risk modeling. It has been extensively studied in the literature where state-independent, exponential-twisting-based importance sampling has been shown to be asymptotically efficient and a more nuanced state-dependent exponential twisting has been shown to have a stronger bounded relative error property. We exploit the saddle-point-based representations that exist for these rare quantities, which rely on inverting the characteristic functions of the underlying random vectors. These representations reduce the rare event estimation problem to evaluating certain integrals, which may via importance sampling be represented as expectations. Furthermore, it is easy to identify and approximate the zero-variance importance sampling distribution to estimate these integrals. We identify such importance sampling measures and show that they possess the asymptotically vanishing relative error property that is stronger than the bounded relative error property. To illustrate the broader applicability of the proposed methodology, we extend it to develop an asymptotically vanishing relative error estimator for the practically important expected overshoot of sums of i.i.d. random variables.
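For context, a minimal sketch of the state-independent exponential-twisting baseline mentioned above, in the toy case P(Sn > na) with standard normal summands: the twisted density exp(θx − ψ(θ))f(x) is N(θ, 1), and the saddle-point choice E_θ[X] = a gives θ = a. All parameters are illustrative.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(4)
n, a, reps = 20, 1.0, 100_000
theta = a                                  # saddle point: psi'(theta) = a
psi = theta**2 / 2                         # cumulant generating function of N(0,1)

x = rng.normal(theta, 1.0, size=(reps, n))           # sample under the twist
s = x.sum(axis=1)
z = np.exp(-theta * s + n * psi) * (s > n * a)       # likelihood ratio * indicator
print("IS estimate:", z.mean(),
      "exact:", 0.5 * erfc(a * sqrt(n) / sqrt(2)))   # P(N(0, n) > n a)
```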
Consider a circle with perimeter N > 1 on which k < N segments of length 1 are sampled in an independent and identically distributed manner. In this paper we study the probability π(k, N) that these k segments do not overlap; the density φ(·) of the positions of the segments on the circle is arbitrary (that is, it is not necessarily assumed uniform). Two scaling regimes are considered. In the first we set k ≡ a√N, and it turns out that the probability of interest converges (as N → ∞) to an explicitly given positive constant that reflects the impact of the density φ(·). In the other regime k scales as aN, and the nonoverlap probability decays essentially exponentially; we give the associated decay rate as the solution to a variational problem. Several additional ramifications are presented.
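A minimal simulation sketch estimating the nonoverlap probability π(k, N) in the simplest illustrative case of i.i.d. uniform left endpoints (the abstract allows a general density φ). Sorted left endpoints do not overlap exactly when every circular spacing is at least the segment length 1.

```python
import numpy as np

rng = np.random.default_rng(9)
k, N, reps = 10, 100.0, 200_000

left = np.sort(rng.random((reps, k)) * N, axis=1)     # left endpoints
gaps = np.diff(left, axis=1, append=left[:, :1] + N)  # circular spacings
print("pi(k, N) approx:", (gaps >= 1.0).all(axis=1).mean())
```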
In the original article [LMS J. Comput. Math. 15 (2012) 71–83], the authors use a discrete form of the Itô formula, developed by Appleby, Berkolaiko and Rodkina [Stochastics 81 (2009) no. 2, 99–127], to show that the almost sure asymptotic stability of a particular two-dimensional test system is preserved when the discretisation step size is small. In this Corrigendum, we identify an implicit assumption in the original proof of the discrete Itô formula that, left unaddressed, would preclude its application to the test system of interest. We resolve this problem by reproving the relevant part of the discrete Itô formula in such a way that confirms its applicability to our test equation. Thus, we reaffirm the main results and conclusions of the original article.
The Asmussen–Kroese Monte Carlo estimators of P(Sn > u) and P(SN > u) are known to work well in rare event settings, where SN is the sum of independent, identically distributed heavy-tailed random variables X1,…,XN and N is a nonnegative, integer-valued random variable independent of the Xi. In this paper we show how to improve the Asmussen–Kroese estimators of both probabilities when the Xi are nonnegative. We also apply our ideas to estimate the quantity E[(SN-u)+].
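For context, a minimal sketch of the classical Asmussen–Kroese estimator that the paper improves upon, for fixed n and Pareto summands with tail P(X > x) = (1 + x)^(−α) (an illustrative model choice). The estimator conditions on the largest summand being the last one: Z = n F̄(max(M_{n−1}, u − S_{n−1})), where M_{n−1} and S_{n−1} are the maximum and the sum of n − 1 summands.

```python
import numpy as np

rng = np.random.default_rng(3)
n, u, alpha, reps = 10, 1_000.0, 1.5, 100_000

def fbar(x):
    """Pareto tail; clamping at 0 handles u - S < 0, where Fbar = 1."""
    return (1.0 + np.maximum(x, 0.0)) ** (-alpha)

x = (1.0 - rng.random((reps, n - 1))) ** (-1.0 / alpha) - 1.0  # n-1 summands
z = n * fbar(np.maximum(x.max(axis=1), u - x.sum(axis=1)))     # AK estimator
print("AK estimate:", z.mean(), "+/-", 1.96 * z.std() / np.sqrt(reps))
```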
We consider the question of an optimal transaction between two investors to minimize their risks. We define a dynamic entropic risk measure using backward stochastic differential equations related to a continuous-time single jump process. The inf-convolution of dynamic entropic risk measures is a key transformation in solving the optimization problem.
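For reference, under the normalisation ρ_γ(X) = γ log E[e^{−X/γ}], the static counterpart of this inf-convolution is explicit (the well-known semigroup property of entropic risk measures, due to Barrieu and El Karoui; the dynamic, jump-driven case is what the paper develops):

```latex
(\rho_{\gamma_1} \,\square\, \rho_{\gamma_2})(X)
\;=\; \inf_{F}\,\bigl\{\rho_{\gamma_1}(X - F) + \rho_{\gamma_2}(F)\bigr\}
\;=\; \rho_{\gamma_1 + \gamma_2}(X),
```

with the infimum attained at the proportional sharing F* = γ₂ X / (γ₁ + γ₂).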