In their 1960 book on finite Markov chains, Kemeny and Snell established that a certain sum is invariant. The value of this sum has become known as Kemeny’s constant. Various proofs have been given over time, some more technical than others. We give here a very simple physical justification, which extends without a hitch to continuous-time Markov chains on a finite state space. For Markov chains with denumerably infinite state space, the constant may be infinite, and even if it is finite, there is no guarantee that the physical argument will hold. We show that the physical interpretation does go through for the special case of a birth-and-death process with a finite value of Kemeny’s constant.
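As a concrete illustration (not taken from the paper), the following short numerical sketch checks the invariance for a randomly generated finite chain: with the Kemeny–Snell fundamental matrix Z, the mean first-passage times satisfy m_ij = (z_jj − z_ij)/π_j, and the weighted sum Σ_j π_j m_ij is the same from every starting state, equal to trace(Z) − 1. The chain used below is arbitrary.

```python
import numpy as np

# Minimal sketch: verify numerically that K_i = sum_j pi_j * m_{ij}
# does not depend on the starting state i, using the fundamental
# matrix of Kemeny and Snell.
rng = np.random.default_rng(0)
n = 5
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)          # row-stochastic transition matrix

# stationary distribution: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))   # fundamental matrix
M = (np.diag(Z)[None, :] - Z) / pi[None, :]                   # mean first-passage times, m_ii = 0

K_by_state = M @ pi            # K_i = sum_j m_{ij} pi_j
print(K_by_state)              # identical entries ...
print(np.trace(Z) - 1)         # ... equal to trace(Z) - 1
```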
Partial differential equations are powerful tools for characterizing various physical systems. In practice, measurement errors are often present and probability models are employed to account for such uncertainties. In this paper we present a Monte Carlo scheme that yields unbiased estimators for expectations of random elliptic partial differential equations. This algorithm combines a multilevel Monte Carlo method (Giles (2008)) and a randomization scheme proposed by Rhee and Glynn (2012), (2013). Furthermore, to obtain an estimator with both finite variance and finite expected computational cost, we employ higher-order approximations.
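The sketch below illustrates the Rhee–Glynn single-term randomization idea on a toy problem (a scalar SDE approximated by coupled Milstein discretizations, not a random elliptic PDE): a level N is drawn at random and the coupled level difference is reweighted by 1/p_N. A higher-order (Milstein) scheme is used so that a level distribution with both finite variance and finite expected cost exists, echoing the role of higher-order approximations mentioned above. All numerical choices are illustrative.

```python
import numpy as np

# Toy single-term Rhee-Glynn estimator: Delta_N / p_N is unbiased for
# lim_n E[Y_n] when coarse and fine levels are coupled so that
# E[(Y_n - Y_{n-1})^2] decays fast enough.  Here Y_n is a Milstein
# approximation of E[X_T] for dX = r X dt + sigma X dW with 2^n steps.
rng = np.random.default_rng(1)
r, sigma, T, x0 = 0.05, 0.2, 1.0, 1.0

def milstein_pair(level):
    """Coupled (Y_level, Y_{level-1}) built from the same Brownian increments."""
    nf = 2 ** level
    dt = T / nf
    dW = rng.normal(0.0, np.sqrt(dt), nf)

    def milstein(x, incs, h):
        for w in incs:
            x += r * x * h + sigma * x * w + 0.5 * sigma ** 2 * x * (w ** 2 - h)
        return x

    yf = milstein(x0, dW, dt)
    if level == 0:
        return yf, 0.0
    yc = milstein(x0, dW.reshape(-1, 2).sum(axis=1), 2 * dt)   # coarse increments
    return yf, yc

def single_term_estimate(max_level=10, decay=1.5):
    # In the actual scheme the level distribution has unbounded support;
    # it is truncated here for simplicity (the residual bias is negligible).
    p = 2.0 ** (-decay * np.arange(max_level + 1))
    p /= p.sum()
    N = rng.choice(max_level + 1, p=p)
    yf, yc = milstein_pair(N)
    return (yf - yc) / p[N]

est = np.mean([single_term_estimate() for _ in range(20000)])
print(est, np.exp(r * T))   # estimate of E[X_T] vs exact value
```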
This study investigates the phenomenon of targeted energy transfer (TET) from a linear oscillator to a nonlinear attachment behaving as a nonlinear energy sink for both transient and stochastic excitations. First, the dynamics of the underlying Hamiltonian system under deterministic transient loading is studied. Assuming that the transient dynamics can be partitioned into slow and fast components, the governing equations of motion corresponding to the slow flow dynamics are derived and the behaviour of the system is analysed. Subsequently, the effect of noise on the slow flow dynamics of the system is investigated. The Itô stochastic differential equations for the noisy system are derived and the corresponding Fokker–Planck equations are solved numerically to gain insight into the behaviour of the system with respect to TET. The effects of the system parameters as well as the noise intensity on the optimal regime of TET are studied. The analysis reveals that the interaction of nonlinearities and noise enhances the optimal TET regime predicted by the deterministic analysis.
We consider the stationary solution Z of the Markov chain {Z_n}_{n∈ℕ} defined by Z_{n+1} = ψ_{n+1}(Z_n), where {ψ_n}_{n∈ℕ} is a sequence of independent and identically distributed random Lipschitz functions. We estimate the probability of the event {Z > x} when x is large, and develop a state-dependent importance sampling estimator under a set of assumptions on ψ_n such that, for large x, the event {Z > x} is governed by a single large jump. Under natural conditions, we show that our estimator is strongly efficient. Special attention is paid to a class of perpetuities with heavy tails.
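For illustration only, the sketch below runs crude Monte Carlo for the tail of a heavy-tailed perpetuity, a special case of the random Lipschitz recursion above. The distributions chosen are hypothetical and the paper's state-dependent importance sampling estimator is not reproduced; the point is that the relative error of the crude estimator grows as x increases, which is what such an estimator is designed to fix.

```python
import numpy as np

# Crude Monte Carlo for P(Z > x), where Z is (approximately) the stationary
# solution of Z_{n+1} = A_{n+1} Z_n + B_{n+1}, represented by the backward
# series Z = B_1 + A_1 B_2 + A_1 A_2 B_3 + ...  (truncated after n_terms).
rng = np.random.default_rng(2)

def perpetuity_sample(n_terms=200):
    A = rng.uniform(0.0, 1.0, n_terms)         # contraction factors (hypothetical)
    B = rng.pareto(1.5, n_terms) + 1.0          # heavy-tailed innovations (Pareto, index 1.5)
    discount = np.concatenate(([1.0], np.cumprod(A[:-1])))
    return np.sum(discount * B)

x = 50.0
samples = np.array([perpetuity_sample() for _ in range(100000)])
p_hat = np.mean(samples > x)
rel_err = np.sqrt(p_hat * (1 - p_hat) / len(samples)) / max(p_hat, 1e-12)
print(p_hat, rel_err)   # the relative error deteriorates as x grows, motivating IS
```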
We develop a forward-reverse expectation-maximization (FREM) algorithm for estimating parameters of a discrete-time Markov chain evolving through a certain measurable state space. For the construction of the FREM method, we develop forward-reverse representations for Markov chains conditioned on a certain terminal state. We prove almost sure convergence of our algorithm for a Markov chain model with curved exponential family structure. On the numerical side, we carry out a complexity analysis of the forward-reverse algorithm by deriving its expected cost. Two application examples are discussed.
We present the first exact simulation method for multidimensional reflected Brownian motion (RBM). Exact simulation in this setting is challenging because of the presence of correlated local-time-like terms in the definition of RBM. We apply recently developed so-called ε-strong simulation techniques (also known as tolerance-enforced simulation) which allow us to provide a piecewise linear approximation to RBM with ε (deterministic) error in uniform norm. A novel conditional acceptance–rejection step is then used to eliminate the error. In particular, we condition on a suitably designed information structure so that a feasible proposal distribution can be applied.
In this paper we consider the problem of simultaneously estimating rare-event probabilities for a class of Gaussian random fields. A conventional rare-event simulation method is usually tailored to a specific rare event and consequently would lose estimation efficiency for different events of interest, which often results in additional computational cost in such simultaneous estimation problems. To overcome this issue, we propose a uniformly efficient estimator for a general family of Hölder continuous Gaussian random fields. We establish the asymptotic and uniform efficiency of the proposed method and also conduct simulation studies to illustrate its effectiveness.
In this paper we obtain a recursive formula for the density of the double-barrier Parisian stopping time. We present a probabilistic proof of the formula for the first few steps of the recursion, and then a formal proof using explicit Laplace inversions. These results provide an efficient computational method for pricing double-barrier Parisian options.
A prevalent problem in general state-space models is the approximation of the smoothing distribution of a state conditional on the observations from the past, the present, and the future. The aim of this paper is to provide a rigorous analysis of such approximations of smoothed distributions provided by the two-filter algorithms. We extend the results available for the approximation of smoothing distributions to these two-filter approaches which combine a forward filter approximating the filtering distributions with a backward information filter approximating a quantity proportional to the posterior distribution of the state, given future observations.
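The structure being analysed can be seen in a toy discrete-state setting (assumed here, not the paper's particle approximation): a forward filter is combined with a backward information filter proportional to p(y_{t+1:T} | x_t), and their product, renormalized, recovers the smoothing distribution.

```python
import numpy as np

# Two-filter smoothing for a small, hypothetical hidden Markov model with
# K discrete states and Gaussian observations.
rng = np.random.default_rng(3)
K, T = 3, 50
P = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])  # transition matrix
means = np.array([-2.0, 0.0, 2.0])

states = np.zeros(T, dtype=int)
states[0] = rng.choice(K)
for t in range(1, T):
    states[t] = rng.choice(K, p=P[states[t - 1]])
y = means[states] + rng.normal(0.0, 1.0, T)
lik = np.exp(-0.5 * (y[:, None] - means[None, :]) ** 2)   # p(y_t | x_t = k), up to a constant

# forward filter: alpha_t(k) proportional to p(x_t = k | y_{1:t})
alpha = np.zeros((T, K))
alpha[0] = lik[0] / K
alpha[0] /= alpha[0].sum()
for t in range(1, T):
    alpha[t] = lik[t] * (alpha[t - 1] @ P)
    alpha[t] /= alpha[t].sum()

# backward information filter: beta_t(k) proportional to p(y_{t+1:T} | x_t = k)
beta = np.ones((T, K))
for t in range(T - 2, -1, -1):
    beta[t] = P @ (lik[t + 1] * beta[t + 1])
    beta[t] /= beta[t].sum()                               # normalize for stability

smooth = alpha * beta
smooth /= smooth.sum(axis=1, keepdims=True)                # p(x_t | y_{1:T})
print(smooth[:5])
```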
In this paper we consider the optimal scaling of high-dimensional random walk Metropolis algorithms for densities differentiable in the L^p mean but which may be irregular at some points (such as the Laplace density, for example) and/or supported on an interval. Our main result is the weak convergence of the Markov chain (appropriately rescaled in time and space) to a Langevin diffusion process as the dimension d goes to ∞. As the log-density might be nondifferentiable, the limiting diffusion could be singular. The scaling limit is established under assumptions which are much weaker than those used in the original derivation of Roberts et al. (1997). This result has important practical implications for the use of random walk Metropolis algorithms in Bayesian frameworks based on sparsity-inducing priors.
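For readers unfamiliar with the algorithm, here is a minimal random walk Metropolis sketch targeting the Laplace density mentioned above; the proposal step size and run length are illustrative and unrelated to the optimal scaling derived in the paper.

```python
import numpy as np

# Random walk Metropolis for a Laplace(0, 1) target, which is
# nondifferentiable at zero (one of the irregular densities above).
rng = np.random.default_rng(4)

def log_density(x):
    return -np.abs(x)          # Laplace(0, 1), up to an additive constant

def rwm(n_iter=50000, step=2.0):
    x = 0.0
    chain = np.empty(n_iter)
    accepted = 0
    for i in range(n_iter):
        prop = x + step * rng.normal()
        if np.log(rng.random()) < log_density(prop) - log_density(x):
            x = prop
            accepted += 1
        chain[i] = x
    return chain, accepted / n_iter

chain, acc = rwm()
print(acc, chain.mean(), chain.var())   # the variance of Laplace(0, 1) is 2
```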
Subsolutions to the Hamilton–Jacobi–Bellman equation associated with a moderate deviations approximation are used to design importance sampling changes of measure for stochastic recursive equations. Analogous to what has been done for large deviations subsolution-based importance sampling, these schemes are shown to be asymptotically optimal under the moderate deviations scaling. We present various implementations and numerical results to contrast their performance, and also discuss the circumstances under which a moderate deviation scaling might be appropriate.
A numerical comparison of the Monte Carlo (MC) simulation and the finite-difference method for pricing European options under a regime-switching framework is presented in this paper. We consider pricing options on stocks having two to four volatility regimes. Numerical results show that the MC simulation outperforms the Crank–Nicolson (CN) finite-difference method in both the low-frequency case and the high-frequency case. Even though both methods have linear growth, as the number of regimes increases, the computational time of CN grows much faster than that of MC. In addition, for the two-state case, we propose a much faster simulation algorithm whose computational time is almost independent of the switching frequency. We also investigate the performance of two variance-reduction techniques, antithetic variates and control variates, to further improve the efficiency of the simulation.
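A simple version of the crude MC approach (not the faster two-state algorithm proposed in the paper) can be sketched as follows: conditional on the simulated regime path, the log-price is Gaussian with variance equal to the integrated squared volatility, and antithetic variates are applied to the Gaussian draw. All parameter values below are hypothetical.

```python
import numpy as np

# Monte Carlo pricing of a European call under a two-regime switching
# Black-Scholes model, with antithetic variates on the Gaussian draw.
rng = np.random.default_rng(5)
r, T, S0, K = 0.05, 1.0, 100.0, 100.0
sigma = np.array([0.15, 0.35])          # volatility in regimes 0 and 1
q = np.array([[-1.0, 1.0],
              [ 2.0, -2.0]])            # generator of the regime chain

def integrated_variance(state=0):
    """Simulate the regime path on [0, T]; return the integral of sigma^2 dt."""
    t, v = 0.0, 0.0
    while t < T:
        hold = rng.exponential(1.0 / -q[state, state])
        dt = min(hold, T - t)
        v += sigma[state] ** 2 * dt
        t += dt
        state = 1 - state               # two-state chain: jump to the other regime
    return v

def price(n_pairs=50000):
    pair_means = np.empty(n_pairs)
    for i in range(n_pairs):
        V = integrated_variance()
        z = rng.normal()
        pay = [max(S0 * np.exp(r * T - 0.5 * V + np.sqrt(V) * zz) - K, 0.0)
               for zz in (z, -z)]       # antithetic pair
        pair_means[i] = np.exp(-r * T) * 0.5 * sum(pay)
    return pair_means.mean(), pair_means.std() / np.sqrt(n_pairs)

print(price())                          # (price estimate, standard error)
```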
This paper is devoted to numerical methods for mean-field stochastic differential equations (MSDEs). We first develop the mean-field Itô formula and the mean-field Itô–Taylor expansion. Then, based on the new formula and expansion, we propose Itô–Taylor schemes of strong order γ and weak order η for MSDEs, and we theoretically obtain the convergence rate γ of the strong Itô–Taylor scheme, which can be seen as an extension of the well-known fundamental strong convergence theorem to the mean-field SDE setting. Finally, some numerical examples are given to verify our theoretical results.
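As a point of reference (not one of the paper's higher-order schemes), the simplest member of this family is an Euler–Maruyama discretization of a mean-field SDE in which the law is replaced by the empirical measure of an interacting particle system. The linear drift used below is a hypothetical example.

```python
import numpy as np

# Particle Euler-Maruyama scheme for the mean-field SDE
#   dX_t = -(X_t - E[X_t]) dt + dW_t,
# with E[X_t] approximated by the empirical mean of N particles.
rng = np.random.default_rng(6)
N, n_steps, T = 5000, 200, 1.0
dt = T / n_steps

X = rng.normal(2.0, 1.0, N)          # initial particles, X_0 ~ N(2, 1)
for _ in range(n_steps):
    m = X.mean()                     # empirical approximation of E[X_t]
    X = X - (X - m) * dt + np.sqrt(dt) * rng.normal(size=N)

# For this linear example E[X_T] = 2, and Var(X_T) solves dV/dt = -2V + 1,
# so Var(X_1) = 0.5 + 0.5 * exp(-2) ~ 0.568.
print(X.mean(), X.var())
```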
Markov chain Monte Carlo (MCMC) methods provide an essential tool in statistics for sampling from complex probability distributions. While the standard approach to MCMC involves constructing discrete-time reversible Markov chains whose transition kernel is obtained via the Metropolis–Hastings algorithm, there has been recent interest in alternative schemes based on piecewise deterministic Markov processes (PDMPs). One such approach is based on the zig-zag process, introduced in Bierkens and Roberts (2016), which has been shown to provide a highly scalable sampling scheme in the big data regime; see Bierkens et al. (2016). In this paper we study the performance of the zig-zag sampler, focusing on the one-dimensional case. In particular, we identify conditions under which a central limit theorem holds and characterise the asymptotic variance. Moreover, we study the influence of the switching rate on the diffusivity of the zig-zag process by identifying a diffusion limit as the switching rate tends to ∞. Based on our results we compare the performance of the zig-zag sampler to existing Monte Carlo methods, both analytically and through simulations.
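A minimal sketch of the one-dimensional zig-zag process for a standard Gaussian target is given below; with switching rate λ(x, θ) = (θx)⁺ and no refreshment, the switching times can be sampled exactly by inverting the integrated rate, and time averages along the piecewise linear path estimate expectations under the target. The run length and starting point are illustrative.

```python
import numpy as np

# One-dimensional zig-zag sampler for a standard Gaussian target:
# switching rate lambda(x, theta) = max(0, theta * x), events drawn
# exactly by inverting the integrated rate along the linear trajectory.
rng = np.random.default_rng(7)

def zigzag(n_switches=200000, x0=0.0):
    x, theta, t = x0, 1.0, 0.0
    positions, times = [x], [t]
    for _ in range(n_switches):
        a = theta * x
        E = rng.exponential()
        tau = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * E)   # first event time
        x += theta * tau
        t += tau
        theta = -theta                                   # flip the velocity
        positions.append(x)
        times.append(t)
    return np.array(times), np.array(positions)

times, xs = zigzag()
# time-average of x^2 along the piecewise linear path should be close to 1
dt = np.diff(times)
seg_mean_sq = (xs[:-1] ** 2 + xs[:-1] * xs[1:] + xs[1:] ** 2) / 3.0  # exact average of x^2 on each segment
print(np.sum(seg_mean_sq * dt) / times[-1])
```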
In error estimates of various numerical approaches for solving decoupled forward backward stochastic differential equations (FBSDEs), the rate of convergence for one variable is usually less than for the other. Under slightly strengthened smoothness assumptions, we show that the fully discrete Euler scheme admits a first-order rate of convergence for both variables.
This paper presents comprehensive studies of two closely related problems: a high-speed collisionless gaseous jet from a circular exit, and such a jet impinging on an inclined rectangular flat plate, where the plate surface can be diffusely or specularly reflective. Gaskinetic theories are adopted to study the problems, and several crucial geometry-location and velocity-direction relations are used. The final complete results include flowfield properties such as density, velocity components, temperature, and pressure, and impingement surface properties such as coefficients of pressure, shear stress, and heat flux. Also included are the averaged coefficients for pressure, friction, heat flux, and moment over the whole plate, and the averaged distance from the moment center to the plate center. The final results include complex but accurate integrations involving the geometry, the specific speed ratios, the inclination angle, and the temperature ratio. Several numerical simulations with the direct simulation Monte Carlo method validate these analytical results, and the two sets of results are essentially identical. Exponential, trigonometric, and error functions are embedded in the solutions. The results illustrate that the past simple cosine-function approach is rather crude and should be used cautiously. The gaskinetic method and processes are heuristic and can be used to investigate other external high-Knudsen-number impingement flow problems, including the flowfield and surface properties for a high-Knudsen-number jet from an exit and a flat plate of arbitrary shapes. The results are expected to find many engineering applications.
We present an analysis of convergence of a quasi-regression Monte Carlo method proposed by Glasserman and Yu (2004). We show that the method surely converges to the true price of an American option, even with multiple underlyings, via polynomial chaos expansion and under weaker conditions than those used in Glasserman and Yu (2004). Further, we show that the number of simulation paths required for convergence grows exponentially in the number of basis functions when implementing the method. Finally, we propose a rate of convergence that takes into account the regularity of the value functions.
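For orientation, the sketch below implements a closely related regression-based estimator (in the Longstaff–Schwartz style) for a Bermudan put, rather than the quasi-regression method of Glasserman and Yu; the payoff, polynomial basis, and parameters are illustrative.

```python
import numpy as np

# Regression Monte Carlo (Longstaff-Schwartz style) for a Bermudan put
# under geometric Brownian motion, with a cubic polynomial basis.
rng = np.random.default_rng(8)
S0, K, r, sigma, T, n_ex, n_paths = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 100000
dt = T / n_ex
disc = np.exp(-r * dt)

# simulate GBM paths at the exercise dates dt, 2*dt, ..., T
Z = rng.normal(size=(n_paths, n_ex))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * Z, axis=1))

payoff = np.maximum(K - S, 0.0)
value = payoff[:, -1].copy()                      # value at maturity
for t in range(n_ex - 2, -1, -1):
    value *= disc                                 # discount one step back
    itm = payoff[:, t] > 0                        # regress only on in-the-money paths
    if itm.sum() > 10:
        basis = np.vander(S[itm, t], 4)           # cubic polynomial basis
        coef, *_ = np.linalg.lstsq(basis, value[itm], rcond=None)
        continuation = basis @ coef
        exercise = payoff[itm, t] > continuation
        idx = np.where(itm)[0][exercise]
        value[idx] = payoff[idx, t]
print(disc * value.mean())                        # Bermudan put price estimate
```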
Importance sampling has become an important tool for the computation of extreme quantiles and tail-based risk measures. For estimation of such nonlinear functionals of the underlying distribution, the standard efficiency analysis is not necessarily applicable. In this paper we therefore study importance sampling algorithms by considering moderate deviations of the associated weighted empirical processes. Using a delta method for large deviations, combined with classical large deviation techniques, the moderate deviation principle is obtained for importance sampling estimators of two of the most common risk measures: value at risk and expected shortfall.
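The flavour of such estimators can be seen in a minimal exponential-tilting sketch for a standard normal loss (a toy stand-in, not the setting of the paper): samples are drawn from a tilted distribution and reweighted by the likelihood ratio to estimate value at risk and expected shortfall at a high level α. The tilt and sample size are illustrative.

```python
import numpy as np

# Importance sampling for VaR and expected shortfall of X ~ N(0, 1):
# sample from N(theta, 1) and reweight by the likelihood ratio dP/dQ.
rng = np.random.default_rng(9)
alpha, theta, n = 0.999, 3.0, 100000

x = rng.normal(theta, 1.0, n)
w = np.exp(-theta * x + 0.5 * theta ** 2)          # likelihood ratio dP/dQ

order = np.argsort(-x)                             # sort losses from largest down
x_desc, w_desc = x[order], w[order]
tail = np.cumsum(w_desc) / n                       # IS estimate of P(X >= x_desc[k])
k = np.searchsorted(tail, 1.0 - alpha)             # first index where the tail reaches 1 - alpha
var_hat = x_desc[k]                                # estimated value at risk
es_hat = np.sum(w * x * (x >= var_hat)) / n / (1.0 - alpha)   # estimated expected shortfall

# exact values for N(0, 1): VaR_0.999 ~ 3.090, ES_0.999 ~ 3.367
print(var_hat, es_hat)
```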