Coupling-from-the-past (CFTP) methods have been used to generate perfect samples from finite Gibbs hard-sphere models, an important class of spatial point processes consisting of a set of spheres whose centers lie in a bounded region and are distributed as a homogeneous Poisson point process (PPP) conditioned so that the spheres do not overlap. We propose an alternative importance-sampling-based rejection methodology for the perfect sampling of these models. We analyze the asymptotic expected running time complexity of the proposed method when the intensity of the reference PPP increases to infinity while the (expected) sphere radius decreases to zero at varying rates. We further compare the performance of the proposed method, analytically and numerically, with that of a naive rejection algorithm and of popular dominated CFTP algorithms. Our analysis relies on identifying large-deviations decay rates of the non-overlapping probability of spheres whose centers are distributed as a homogeneous PPP.
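As a point of reference for the comparison above, a minimal sketch of the naive rejection baseline is given below (Python; the unit-square window, intensity, and radius are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

def naive_rejection_hard_spheres(lam, radius, rng, max_tries=10**6):
    """Naive rejection sampler for a hard-sphere (hard-core) model on [0,1]^2:
    draw a homogeneous PPP of intensity lam and accept only if no two centres are
    closer than 2*radius.  This is the baseline the importance-sampling-based
    rejection method is compared against; parameter values are illustrative."""
    for _ in range(max_tries):
        n = rng.poisson(lam)
        pts = rng.random((n, 2))
        if n < 2:
            return pts
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)          # ignore self-distances
        if d.min() >= 2 * radius:
            return pts
    raise RuntimeError("acceptance probability too small for naive rejection")

sample = naive_rejection_hard_spheres(lam=30, radius=0.02, rng=rng)
print(len(sample), "non-overlapping sphere centres accepted")
```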
There are two types of tempered stable (TS) based Ornstein–Uhlenbeck (OU) processes: (i) the OU-TS process, the OU process driven by a TS subordinator, and (ii) the TS-OU process, the OU process with TS marginal law. They have various applications in financial engineering and econometrics. In the literature, only the second type under the stationary assumption has an exact simulation algorithm. In this paper we develop a unified approach to exactly simulate both types without the stationary assumption. It is mainly based on the distributional decomposition of stochastic processes with the aid of an acceptance–rejection scheme. As the inverse Gaussian distribution is an important special case of TS distribution, we also provide tailored algorithms for the corresponding OU processes. Numerical experiments and tests are reported to demonstrate the accuracy and effectiveness of our algorithms, and some further extensions are also discussed.
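For intuition only, the following is a crude Euler-type discretization of an OU process driven by an inverse Gaussian (IG) subordinator; it is not the exact, discretization-free scheme developed in the paper, and all parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def ou_ig_euler(x0, lam, mu1, lam1, T, n_steps, rng):
    """Crude Euler-type discretization of dX_t = -lam * X_t dt + dL_t, where L is an
    inverse Gaussian subordinator with L_1 ~ IG(mu1, lam1).  For intuition only: this
    scheme has discretization bias, whereas the algorithms in the paper are exact."""
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        # IG Levy increments over dt: L_{t+dt} - L_t ~ IG(dt * mu1, dt^2 * lam1).
        dL = rng.wald(dt * mu1, dt * dt * lam1)
        x[n + 1] = x[n] - lam * x[n] * dt + dL
    return x

path = ou_ig_euler(x0=1.0, lam=2.0, mu1=1.0, lam1=3.0, T=1.0, n_steps=1000, rng=rng)
print("endpoint of one simulated path:", path[-1])
```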
We develop a continuous-time Markov chain (CTMC) approximation of one-dimensional diffusions with sticky boundary or interior points. Approximate solutions to the action of the Feynman–Kac operator associated with a sticky diffusion and first passage probabilities are obtained using matrix exponentials. We show how to compute matrix exponentials efficiently and prove that a carefully designed scheme achieves second-order convergence. We also propose a scheme based on CTMC approximation for the simulation of sticky diffusions, for which the Euler scheme may completely fail. The efficiency of our method and its advantages over alternative approaches are illustrated in the context of bond pricing in a sticky short-rate model for a low-interest environment and option pricing under a geometric Brownian motion price model with a sticky interior point.
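A minimal sketch of the basic ingredient, a CTMC approximation evaluated by a matrix exponential, is shown below for a Brownian motion with drift and no sticky point; the grid, payoff, and parameters are illustrative assumptions rather than the paper's carefully designed scheme:

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import norm

# CTMC approximation of a Brownian motion with drift on a uniform grid, with the
# Feynman-Kac expectation E_x[f(X_T)] evaluated by a matrix exponential.
# Simplified illustration: no sticky point, absorbing grid boundaries, assumed parameters.
mu, sigma, T = 0.1, 0.3, 1.0
x_min, x_max, m = -2.0, 2.0, 201
grid = np.linspace(x_min, x_max, m)
h = grid[1] - grid[0]

# Tridiagonal generator approximately matching the local drift and variance.
up = sigma**2 / (2 * h**2) + max(mu, 0.0) / h
down = sigma**2 / (2 * h**2) + max(-mu, 0.0) / h
Q = np.zeros((m, m))
for i in range(1, m - 1):
    Q[i, i - 1], Q[i, i + 1] = down, up
    Q[i, i] = -(up + down)

f = np.maximum(grid, 0.0)                 # payoff f(x) = max(x, 0)
value = expm(Q * T) @ f                   # (e^{TQ} f)(x_i) approximates E_{x_i}[f(X_T)]
i0 = m // 2                               # grid point at x = 0
exact = mu * T * norm.cdf(mu * np.sqrt(T) / sigma) + sigma * np.sqrt(T) * norm.pdf(mu * np.sqrt(T) / sigma)
print("CTMC approximation:", value[i0], " true value:", exact)
```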
Nowadays many financial derivatives, such as American or Bermudan options, are of early exercise type. Often the pricing of early exercise options gives rise to high-dimensional optimal stopping problems, since the dimension corresponds to the number of underlying assets. High-dimensional optimal stopping problems are, however, notoriously difficult to solve due to the well-known curse of dimensionality. In this work, we propose an algorithm for solving such problems, which is based on deep learning and computes, in the context of early exercise option pricing, both approximations of an optimal exercise strategy and the price of the considered option. The proposed algorithm can also be applied to optimal stopping problems that arise in other areas where the underlying stochastic process can be efficiently simulated. We present numerical results for a large number of example problems, which include the pricing of many high-dimensional American and Bermudan options, such as Bermudan max-call options in up to 5000 dimensions. Most of the obtained results are compared to reference values computed by exploiting the specific problem design or, where available, to reference values from the literature. These numerical results suggest that the proposed algorithm is highly effective in the case of many underlyings, in terms of both accuracy and speed.
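The following is a stripped-down PyTorch sketch of the deep-learning approach for a one-dimensional Bermudan put (the paper treats far higher dimensions); the network size, training schedule, and market parameters are assumptions, and the backward recursion over exercise dates is the point being illustrated:

```python
import torch

torch.manual_seed(0)
M, N = 20000, 10                      # Monte Carlo paths, exercise dates
S0, K, r, sigma, T = 100., 100., 0.05, 0.2, 1.0
dt = T / N

# Simulate geometric Brownian motion paths at the exercise dates.
Z = torch.randn(M, N)
logS = torch.cumsum((r - 0.5 * sigma**2) * dt + sigma * dt**0.5 * Z, dim=1)
S = S0 * torch.exp(torch.cat([torch.zeros(M, 1), logS], dim=1))        # shape (M, N+1)
payoff = torch.clamp(K - S, min=0.0) * torch.exp(-r * dt * torch.arange(N + 1))

def make_net():
    return torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.ReLU(),
                               torch.nn.Linear(32, 1), torch.nn.Sigmoid())

# Backward recursion: at each date train a small network that decides stop vs continue.
value = payoff[:, N].clone()          # discounted value if never stopped before maturity
for n in range(N - 1, 0, -1):
    net = make_net()
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    x = S[:, n:n + 1] / K - 1.0       # simple feature: moneyness
    for _ in range(200):
        p_stop = net(x).squeeze(1)
        loss = -(p_stop * payoff[:, n] + (1 - p_stop) * value).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():             # harden the decision and update path values
        stop = net(x).squeeze(1) > 0.5
        value = torch.where(stop, payoff[:, n], value)

# In-sample estimate of the price under the learned policy (a fresh set of paths
# would give an unbiased lower bound).
price = max(payoff[0, 0].item(), value.mean().item())
print("estimated Bermudan put price:", price)
```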
Self-exciting point processes have been proposed as models for the location of criminal events in space and time. Here we consider the case where the triggering function is isotropic and takes a non-parametric form that is determined from data. We pay special attention to normalisation issues and to the choice of spatial distance measure, thereby extending the current methodology. After validating these ideas on synthetic data, we perform inference and prediction tests on public domain burglary data from Chicago. We show that the algorithmic advances that we propose lead to improved predictive accuracy.
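As background, a purely temporal Hawkes process with a parametric exponential kernel can be simulated by Ogata's thinning algorithm, sketched below; the spatio-temporal, non-parametric setting of the paper is considerably richer, and the parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)

def intensity(t, events, mu, alpha, beta):
    """Conditional intensity mu + sum_{t_i < t} alpha * exp(-beta (t - t_i))."""
    if not events:
        return mu
    return mu + alpha * np.exp(-beta * (t - np.array(events))).sum()

def simulate_hawkes(mu, alpha, beta, T, rng):
    """Ogata thinning: between accepted events the intensity is non-increasing,
    so its current value is a valid upper bound for the next candidate point."""
    events, t = [], 0.0
    while True:
        lam_bar = intensity(t, events, mu, alpha, beta)
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            return np.array(events)
        if rng.random() < intensity(t, events, mu, alpha, beta) / lam_bar:
            events.append(t)

ev = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, T=200.0, rng=rng)
print(len(ev), "events; expected count is roughly", 0.5 * 200 / (1 - 0.8 / 1.2))
```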
A family $\{Q_{\beta}\}_{\beta \geq 0}$ of Markov chains is said to exhibit metastable mixing with modes $S_{\beta}^{(1)},\ldots,S_{\beta}^{(k)}$ if its spectral gap (or some other mixing property) is very close to the worst conductance $\min\!\big(\Phi_{\beta}\big(S_{\beta}^{(1)}\big), \ldots, \Phi_{\beta}\big(S_{\beta}^{(k)}\big)\big)$ of its modes for all large values of $\beta$. We give simple sufficient conditions for a family of Markov chains to exhibit metastability in this sense, and verify that these conditions hold for a prototypical Metropolis–Hastings chain targeting a mixture distribution. The existing metastability literature is large, and our present work is aimed at filling the following small gap: finding sufficient conditions for metastability that are easy to verify for typical examples from statistics using well-studied methods, while at the same time giving an asymptotically exact formula for the spectral gap (rather than a bound that can be very far from sharp). Our bounds from this paper are used in a companion paper (O. Mangoubi, N. S. Pillai, and A. Smith, arXiv:1808.03230) to compare the mixing times of the Hamiltonian Monte Carlo algorithm and a random walk algorithm for multimodal target distributions.
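A minimal illustration of the prototypical situation, a random-walk Metropolis chain targeting a two-component Gaussian mixture, is sketched below; the mixture and step size are assumptions chosen to make the metastable behaviour visible:

```python
import numpy as np

rng = np.random.default_rng(9)

def log_pi(x, m=4.0):
    """Two-component Gaussian mixture 0.5 N(-m,1) + 0.5 N(m,1) (illustrative target)."""
    return np.logaddexp(-0.5 * (x + m)**2, -0.5 * (x - m)**2)

# Random-walk Metropolis with a moderate step size: the chain mixes quickly inside a
# mode but crosses between modes only rarely, which is the metastability analysed above.
x, chain = -4.0, []
for _ in range(100000):
    prop = x + rng.normal(scale=1.0)
    if np.log(rng.random()) < log_pi(prop) - log_pi(x):
        x = prop
    chain.append(x)

chain = np.array(chain)
print("fraction of time spent in the right-hand mode:", np.mean(chain > 0))
```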
We provide the first generic exact simulation algorithm for multivariate diffusions. Current exact sampling algorithms for diffusions require the existence of a transformation which can be used to reduce the sampling problem to the case of a constant diffusion matrix and a drift which is the gradient of some function. Such a transformation, called the Lamperti transformation, can be applied in general only in one dimension. So, completely different ideas are required for the exact sampling of generic multivariate diffusions. The development of these ideas is the main contribution of this paper. Our strategy combines techniques borrowed from the theory of rough paths, on the one hand, and multilevel Monte Carlo on the other.
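For reference, in one dimension the Lamperti transform takes a scalar diffusion $\mathrm{d}X_t = b(X_t)\,\mathrm{d}t + \sigma(X_t)\,\mathrm{d}W_t$ with $\sigma>0$ to $Y_t = F(X_t)$, where $F(x) = \int^{x} \sigma(u)^{-1}\,\mathrm{d}u$; by Itô's formula, $\mathrm{d}Y_t = \big(b(X_t)/\sigma(X_t) - \tfrac{1}{2}\sigma'(X_t)\big)\,\mathrm{d}t + \mathrm{d}W_t$, which has unit diffusion coefficient. No such transformation is available for generic multivariate diffusions, which is why the different ideas developed in this paper are needed.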
In this article we prove new central limit theorems (CLTs) for several coupled particle filters (CPFs). CPFs are used for the sequential estimation of the difference of expectations with respect to filters which are in some sense close. Examples include the estimation of the filtering distribution associated to different parameters (finite difference estimation) and filters associated to partially observed discretized diffusion processes (PODDP) and the implementation of the multilevel Monte Carlo (MLMC) identity. We develop new theory for CPFs, and based upon several results, we propose a new CPF which approximates the maximal coupling (MCPF) of a pair of predictor distributions. In the context of ML estimation associated to PODDP with time-discretization $\Delta_l=2^{-l}$, $l\in\{0,1,\dots\}$, we show that the MCPF and the approach of Jasra, Ballesio, et al. (2018) have, under certain assumptions, an asymptotic variance that is bounded above by an expression that is of (almost) the order of $\Delta_l$ ($\mathcal{O}(\Delta_l)$), uniformly in time. The $\mathcal{O}(\Delta_l)$ bound preserves the so-called forward rate of the diffusion in some scenarios, which is not the case for the CPF in Jasra et al. (2017).
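To fix ideas, the object the MCPF approximates, a maximal coupling, can be sampled exactly for two discrete distributions as in the toy sketch below (the distributions p and q are illustrative assumptions):

```python
import numpy as np

def sample_maximal_coupling(p, q, rng):
    """Draw (X, Y) with X ~ p, Y ~ q and P(X = Y) = sum_i min(p_i, q_i), i.e. a maximal
    coupling of two discrete distributions.  Assumes p != q so the residuals are
    well defined; a toy illustration of the coupling targeted by the MCPF."""
    overlap = np.minimum(p, q)
    a = overlap.sum()
    if rng.random() < a:
        i = rng.choice(len(p), p=overlap / a)            # coupled draw: X = Y
        return i, i
    x = rng.choice(len(p), p=(p - overlap) / (1 - a))    # residual of p
    y = rng.choice(len(q), p=(q - overlap) / (1 - a))    # residual of q
    return x, y

rng = np.random.default_rng(5)
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
draws = [sample_maximal_coupling(p, q, rng) for _ in range(10000)]
print("empirical P(X = Y):", np.mean([x == y for x, y in draws]),
      " theoretical:", np.minimum(p, q).sum())
```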
Both sequential Monte Carlo (SMC) methods (a.k.a. ‘particle filters’) and sequential Markov chain Monte Carlo (sequential MCMC) methods constitute classes of algorithms which can be used to approximate expectations with respect to (a sequence of) probability distributions and their normalising constants. While SMC methods sample particles conditionally independently at each time step, sequential MCMC methods sample particles according to a Markov chain Monte Carlo (MCMC) kernel. Introduced over twenty years ago in [6], sequential MCMC methods have attracted renewed interest recently as they empirically outperform SMC methods in some applications. We establish an $\mathbb{L}_r$-inequality (which implies a strong law of large numbers) and a central limit theorem for sequential MCMC methods and provide conditions under which errors can be controlled uniformly in time. In the context of state-space models, we also provide conditions under which sequential MCMC methods can indeed outperform standard SMC methods in terms of asymptotic variance of the corresponding Monte Carlo estimators.
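For readers unfamiliar with the SMC baseline, a bootstrap particle filter for a toy linear-Gaussian state-space model is sketched below; the model and its parameters are assumptions, and the sequential MCMC methods analysed in the paper replace the conditionally independent sampling step by an MCMC kernel:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear-Gaussian state-space model (illustrative assumption):
#   X_t = 0.9 X_{t-1} + N(0,1),   Y_t = X_t + N(0,1).
T, N = 100, 1000
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal()
    y[t] = x_true[t] + rng.normal()

# Bootstrap particle filter: propagate, weight by the likelihood, resample.
particles = rng.normal(size=N)
filter_means = []
for t in range(1, T):
    particles = 0.9 * particles + rng.normal(size=N)     # propagate through the dynamics
    logw = -0.5 * (y[t] - particles) ** 2                # Gaussian observation weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filter_means.append(np.sum(w * particles))
    idx = rng.choice(N, size=N, p=w)                     # multinomial resampling
    particles = particles[idx]

print("last filtering mean estimate:", filter_means[-1], " true state:", x_true[-1])
```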
In this paper, a reflected stochastic differential equation (SDE) with jumps is studied for the case where the constraint acts on the law of the solution rather than on its paths. In the case without jumps, such reflected SDEs have been approximated by Briand et al. (2016) using a numerical scheme based on particle systems. The main contribution of this paper is to prove the existence and uniqueness of solutions to this kind of reflected SDE with jumps and to generalize the results obtained by Briand et al. (2016) to this context.
In the first part of this paper we use the thinning method to approximate trajectories of piecewise deterministic processes (PDPs) when the flow is not given explicitly. We also establish a strong error estimate for PDPs as well as a weak error expansion for piecewise deterministic Markov processes (PDMPs). These estimates are the building blocks of the multilevel Monte Carlo (MLMC) method, which we study in the second part. The coupling required by the MLMC is based on the thinning procedure. In the third part we apply these results to a two-dimensional Morris–Lecar model with stochastic ion channels. In the range of our simulations the MLMC estimator outperforms the classical Monte Carlo estimator.
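As a reminder of the MLMC structure these estimates feed into, here is a generic coupled-level sketch for a simple SDE with an Euler scheme; it is not the thinning-based coupling of the paper, and all parameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def coupled_level(l, M, T=1.0, mu=0.05, sigma=0.2, x0=1.0):
    """One MLMC level for E[X_T] under dX = mu X dt + sigma X dW (Euler scheme).
    Returns samples of X_T^fine - X_T^coarse driven by the SAME Brownian increments,
    which is what makes the level variances small."""
    nf = 2 ** l
    dtf = T / nf
    dW = rng.normal(scale=np.sqrt(dtf), size=(M, nf))
    xf = np.full(M, x0)
    for n in range(nf):
        xf = xf + mu * xf * dtf + sigma * xf * dW[:, n]
    if l == 0:
        return xf                                   # level 0: no coarse partner
    xc = np.full(M, x0)
    dtc = 2 * dtf
    for n in range(nf // 2):
        xc = xc + mu * xc * dtc + sigma * xc * (dW[:, 2 * n] + dW[:, 2 * n + 1])
    return xf - xc

L, M = 5, 20000
estimate = sum(coupled_level(l, M).mean() for l in range(L + 1))   # telescoping sum
print("MLMC estimate of E[X_T]:", estimate, " true value:", np.exp(0.05))
```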
Suppose X is a multidimensional diffusion process. Assume that at time zero the state of X is fully observed, but at time $T>0$ only linear combinations of its components are observed. That is, one only observes the vector $L X_T$ for a given matrix L. In this paper we show how samples from the conditioned process can be generated. The main contribution of this paper is to prove that guided proposals, introduced in [35], can be used in a unified way for both uniformly elliptic and hypo-elliptic diffusions, even when L is not the identity matrix. This is illustrated by excellent performance in two challenging cases: a partially observed twice-integrated diffusion with multiple wells and the partially observed FitzHugh–Nagumo model.
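The simplest instance of a guided proposal is Brownian motion conditioned to hit a point $v$ at time $T$, whose guiding drift is $(v - X_t)/(T - t)$; a minimal Euler sketch is given below (purely illustrative, not the paper's construction for hypo-elliptic diffusions observed through $L X_T$):

```python
import numpy as np

rng = np.random.default_rng(4)

def guided_bridge(x0, v, T, n_steps, rng):
    """Euler simulation of dX_t = (v - X_t)/(T - t) dt + dW_t, i.e. Brownian motion
    conditioned to hit v at time T (the simplest guided proposal)."""
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        t = n * dt
        x[n + 1] = x[n] + (v - x[n]) / (T - t) * dt + np.sqrt(dt) * rng.normal()
    x[-1] = v   # the guiding drift forces the endpoint; pin it exactly on the grid
    return x

path = guided_bridge(x0=0.0, v=2.0, T=1.0, n_steps=500, rng=rng)
print("start:", path[0], " end:", path[-1])
```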
It is well known that Monte Carlo integration with variance reduction by means of control variates can be implemented by the ordinary least squares estimator for the intercept in a multiple linear regression model. A central limit theorem is established for the integration error if the number of control variates tends to infinity. The integration error is scaled by the standard deviation of the error term in the regression model. If the linear span of the control variates is dense in a function space that contains the integrand, the integration error tends to zero at a rate which is faster than the square root of the number of Monte Carlo replicates. Depending on the situation, increasing the number of control variates may or may not be computationally more efficient than increasing the Monte Carlo sample size.
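A minimal sketch of this construction, estimating $\int_0^1 e^u\,\mathrm{d}u$ with centred monomial control variates and reading off the OLS intercept, is given below; the integrand and the choice of control variates are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

# Estimate I = E[f(U)] for U ~ Uniform(0,1), f(u) = exp(u), using the mean-zero
# control variates u^k - 1/(k+1), k = 1,...,K.  The OLS intercept is the estimator.
n, K = 10000, 5
u = rng.random(n)
y = np.exp(u)                                                          # integrand values
X = np.column_stack([u**k - 1.0 / (k + 1) for k in range(1, K + 1)])   # centred controls
design = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
print("plain MC:", y.mean(), " control-variate (OLS intercept):", coef[0],
      " exact:", np.e - 1)
```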
We exhibit an exact simulation algorithm for the supremum of a stable process over a finite time interval using dominated coupling from the past (DCFTP). We establish a novel perpetuity equation for the supremum (via the representation of the concave majorants of Lévy processes [27]) and use it to construct a Markov chain in the DCFTP algorithm. We prove that the number of steps taken backwards in time before the coalescence is detected is finite. We analyse the performance of the algorithm numerically (the code, written in Julia 1.0, is available on GitHub).
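For readers new to the technique, the basic Propp–Wilson coupling-from-the-past idea is sketched below for a toy monotone random walk on $\{0,\ldots,K\}$; this is not the paper's DCFTP algorithm for the stable supremum, merely the underlying principle:

```python
import random

def cftp_reflected_walk(K=10, p=0.5, seed=1):
    """Propp-Wilson CFTP for a reflected random walk on {0,...,K} (up with prob p).
    Returns an exact draw from its stationary distribution.  The chain is monotone,
    so it suffices to track the trajectories started from 0 and from K."""
    rng = random.Random(seed)
    U = []               # shared randomness: U[k] drives the step from time -(k+1) to -k
    T = 1
    while True:
        while len(U) < T:
            U.append(rng.random())          # extend further into the past, reuse the rest
        lo, hi = 0, K                       # bottom and top chains started at time -T
        for t in range(T, 0, -1):           # evolve from time -T up to time 0
            step = 1 if U[t - 1] < p else -1
            lo = min(max(lo + step, 0), K)
            hi = min(max(hi + step, 0), K)
        if lo == hi:                        # coalescence detected: exact sample at time 0
            return lo
        T *= 2                              # otherwise go further back in time

print(cftp_reflected_walk())
```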
Using a result of Blanchet and Wallwater (2015) for exactly simulating the maximum of a random walk with negative drift and independent and identically distributed (i.i.d.) increments, we extend it to a multi-dimensional setting and give a new algorithm for simulating exactly the stationary distribution of a first-in–first-out (FIFO) multi-server queue in which the arrival process is a general renewal process and the service times are i.i.d.: the FIFO GI/GI/c queue with $2 \leq c < \infty$. Our method utilizes dominated coupling from the past (DCFTP) as well as the random assignment (RA) discipline, and complements earlier work in which Poisson arrivals were assumed, such as the recent work of Connor and Kendall (2015). We also consider the models in continuous time, and show that, under mild further assumptions, the exact simulation of those stationary distributions can also be achieved. Using our FIFO algorithm, we also give a new exact simulation algorithm for the stationary distribution of the infinite-server case, the GI/GI/$\infty$ model. Finally, we show how to handle fork–join queues, in which each arriving customer brings c jobs, one for each server.
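For context, plain forward simulation of FIFO GI/GI/c waiting times via the Kiefer–Wolfowitz workload recursion is sketched below; unlike the algorithm of the paper it is not an exact (perfect) sampler of the stationary distribution, and the exponential interarrival and service distributions are assumptions:

```python
import numpy as np

def kiefer_wolfowitz_waits(c, n_customers, rng, lam=1.0, mu=0.4):
    """Forward simulation of FIFO GI/GI/c waiting times via the Kiefer-Wolfowitz
    recursion on the ordered workload vector W.  Illustrative only: stationarity is
    only reached approximately by running the recursion for a long time."""
    W = np.zeros(c)                       # ordered workloads seen by the arriving customer
    waits = []
    for _ in range(n_customers):
        waits.append(W[0])                # FIFO wait = workload of the least-loaded server
        S = rng.exponential(1.0 / mu)     # service time of this customer
        tau = rng.exponential(1.0 / lam)  # time until the next arrival
        W[0] += S                         # customer joins the least-loaded server
        W = np.sort(np.maximum(W - tau, 0.0))
    return np.array(waits)

rng = np.random.default_rng(0)
print("mean waiting time estimate:",
      kiefer_wolfowitz_waits(c=3, n_customers=10**5, rng=rng).mean())
```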
In this paper, we introduce a new large family of Lévy-driven point processes with (and without) contagion, by generalising the classical self-exciting Hawkes process and doubly stochastic Poisson processes with non-Gaussian Lévy-driven Ornstein–Uhlenbeck-type intensities. The resulting framework may possess many desirable features such as skewness, leptokurtosis, mean-reverting dynamics, and more importantly, the ‘contagion’ or feedback effects, which could be very useful for modelling event arrivals in finance, economics, insurance, and many other fields. We characterise the distributional properties of this new class of point processes and develop an efficient sampling method for generating sample paths exactly. Our simulation scheme is mainly based on the distributional decomposition of the point process and its intensity process. Extensive numerical implementations and tests are reported to demonstrate the accuracy and effectiveness of our scheme. Moreover, we use portfolio risk management as an example to show the applicability and flexibility of our algorithms.
We study a Markovian agent-based model (MABM) in this paper. Each agent is endowed with a local state that changes over time as the agent interacts with its neighbours. The neighbourhood structure is given by a graph. Recently, Simon, Taylor, and Kiss [40] used the automorphisms of the underlying graph to generate a lumpable partition of the joint state space, ensuring Markovianness of the lumped process for binary dynamics. However, many large random graphs tend to become asymmetric, rendering the automorphism-based lumping approach ineffective as a tool of model reduction. In order to mitigate this problem, we propose a lumping method based on a notion of local symmetry, which compares only local neighbourhoods of vertices. Since local symmetry only ensures approximate lumpability, we quantify the approximation error by means of the Kullback–Leibler divergence rate between the original Markov chain and a lifted Markov chain. We prove the approximation error decreases monotonically. The connections to fibrations of graphs are also discussed.
It is well known that traditional Markov chain Monte Carlo (MCMC) methods can fail to effectively explore the state space for multimodal problems. Parallel tempering is a well-established population approach for such target distributions involving a collection of particles indexed by temperature. However, this method can suffer dramatically from the curse of dimensionality. In this paper we introduce an improvement on parallel tempering called QuanTA. A comprehensive theoretical analysis quantifying the improved efficiency and scalability of the approach is given. Under weak regularity conditions, QuanTA gives accelerated mixing through the temperature space. Empirical evidence of the effectiveness of this new algorithm is illustrated on canonical examples.
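A vanilla parallel tempering sketch for a bimodal toy target is shown below to fix ideas; QuanTA modifies the temperature-transfer move, which is not implemented here, and the target and temperature ladder are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Bimodal toy target: equal mixture of N(-5,1) and N(5,1) (illustrative assumption).
    return np.logaddexp(-0.5 * (x + 5)**2, -0.5 * (x - 5)**2)

betas = np.array([1.0, 0.5, 0.25, 0.1])   # inverse temperatures; beta = 1 is the target
x = np.zeros(len(betas))                  # one particle per temperature
samples = []

for it in range(50000):
    # Within-temperature random-walk Metropolis moves on pi(x)^beta.
    for k, beta in enumerate(betas):
        prop = x[k] + rng.normal(scale=2.0)
        if np.log(rng.random()) < beta * (log_target(prop) - log_target(x[k])):
            x[k] = prop
    # Propose a state swap between a random adjacent pair of temperatures.
    k = rng.integers(len(betas) - 1)
    log_ratio = (betas[k] - betas[k + 1]) * (log_target(x[k + 1]) - log_target(x[k]))
    if np.log(rng.random()) < log_ratio:
        x[k], x[k + 1] = x[k + 1], x[k]
    samples.append(x[0])                  # keep the cold (target) chain

print("fraction of cold-chain samples in the right-hand mode:",
      np.mean(np.array(samples) > 0))
```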
We present the first algorithm that samples $\max_{n\geq 0}\{S_n - n^{\alpha}\}$, where $S_n$ is a mean-zero random walk and $n^{\alpha}$ with $\alpha \in ({1 \over 2},1)$ defines a nonlinear boundary. We show that our algorithm has finite expected running time. We also apply this algorithm to construct the first exact simulation method for the steady-state departure process of a GI/GI/$\infty$ queue where the service time distribution has infinite mean.
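A naive truncated approximation of $\max_{n\geq 0}\{S_n - n^{\alpha}\}$ is sketched below for comparison; truncation introduces a bias that the exact algorithm avoids, and the Gaussian increments and horizon are assumptions:

```python
import numpy as np

rng = np.random.default_rng(10)

def truncated_max(alpha=0.75, horizon=10**5, rng=rng):
    """Crude truncated approximation of M = max_{n>=0} {S_n - n^alpha} for a mean-zero
    random walk with standard normal increments.  The truncation at a finite horizon
    introduces a (small) downward bias that an exact algorithm avoids."""
    S = np.concatenate([[0.0], np.cumsum(rng.normal(size=horizon))])
    n = np.arange(horizon + 1)
    return np.max(S - n**alpha)

print("mean of 20 truncated replications:",
      np.mean([truncated_max() for _ in range(20)]))
```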