We describe the ergodic properties of some Metropolis–Hastings algorithms for heavy-tailed target distributions. Such algorithms are usually analysed within a subgeometric ergodicity framework, but we prove that the mixed preconditioned Crank–Nicolson (MpCN) algorithm is geometrically ergodic even for heavy-tailed target distributions. This useful property stems from the fact that, under a suitable transformation, the MpCN algorithm becomes a random-walk Metropolis algorithm.
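As an illustration of the random-walk Metropolis building block referred to above, here is a minimal 1-D sketch targeting a heavy-tailed (standard Cauchy) density; it is an illustrative sketch, not the MpCN algorithm itself, and all names are assumptions of this example.

```python
import math
import random

def rwm(log_density, x0, step, n_iter, seed=0):
    """Random-walk Metropolis with Gaussian proposals (1-D sketch)."""
    rng = random.Random(seed)
    x, logp = x0, log_density(x0)
    samples = []
    for _ in range(n_iter):
        y = x + step * rng.gauss(0.0, 1.0)        # symmetric proposal
        logq = log_density(y)
        if math.log(rng.random()) < logq - logp:  # Metropolis accept/reject
            x, logp = y, logq
        samples.append(x)
    return samples

# Heavy-tailed target: standard Cauchy, log pi(x) = -log(1 + x^2) + const
draws = rwm(lambda x: -math.log1p(x * x), x0=0.0, step=2.0, n_iter=20000)
```

The chain is symmetric about zero, so its sample median should hover near the Cauchy median 0 even though the target has no mean.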
Model and parameter uncertainties are common whenever a parametric model is selected to value a derivative instrument. Combining the Monte Carlo method with the Smolyak interpolation algorithm, we propose an accurate and efficient numerical procedure to quantify the uncertainty embedded in complex derivatives. Apart from requiring that the value function be sufficiently smooth with respect to the model parameters, the procedure places no restrictions on the payoff or the candidate models. Numerical tests quantify the uncertainty of Bermudan put options and down-and-out put options under the Heston model, with each model parameter specified within an interval.
Motivated by the numerical study of spin-boson dynamics in quantum open systems, we present a convergence analysis of the closure approximation for a class of stochastic differential equations. Through variance analysis and numerical experiments, we show that naive Monte Carlo simulation of the system by direct temporal discretization is not feasible. We also show that the Wiener chaos expansion exhibits very slow convergence and high computational cost. The moment closure approach, though efficient and accurate, has so far lacked a rigorous justification. We rigorously prove that the low moments in the moment closure approximation of the considered model converge exponentially to the exact result. The analysis is further extended to more general nonlinear problems and applied to the original spin-boson model, which has a similar structure.
We propose a class of numerical methods for solving nonlinear random differential equations with piecewise constant argument, called gPCRK methods as they combine generalised polynomial chaos with Runge-Kutta methods. An error analysis is presented involving the error arising from a finite-dimensional noise assumption, the projection error, the aliasing error and the discretisation error. A numerical example is given to illustrate the effectiveness of this approach.
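The gPCRK construction itself requires a Galerkin projection; as a hedged sketch of the underlying polynomial-chaos idea, the snippet below instead uses the closely related stochastic collocation approach on the toy random ODE u'(t) = -ξu(t), ξ ~ Uniform(-1, 1), combining Gauss–Legendre nodes with a classical Runge–Kutta integrator. The problem and all names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# xi ~ Uniform(-1, 1); solve u'(t) = -xi * u(t), u(0) = 1 at each
# Gauss-Legendre node with classical RK4, then average with the weights.
nodes, weights = np.polynomial.legendre.leggauss(8)
weights = weights / 2.0                       # weights of Uniform(-1, 1)

def rk4(xi, T=1.0, n=100):
    """Classical fourth-order Runge-Kutta for u' = -xi * u."""
    u, h = 1.0, T / n
    f = lambda v: -xi * v
    for _ in range(n):
        k1 = f(u)
        k2 = f(u + h * k1 / 2)
        k3 = f(u + h * k2 / 2)
        k4 = f(u + h * k3)
        u += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return u

mean_u = sum(w * rk4(xi) for xi, w in zip(nodes, weights))
# exact mean: E[exp(-xi * 1)] = sinh(1) for xi ~ Uniform(-1, 1)
```

With only 8 collocation nodes the quadrature of the smooth map ξ ↦ u(1; ξ) is essentially exact, illustrating the fast convergence in the stochastic dimension.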
The deferred correction (DC) method is a classical method for solving ordinary differential equations; one of its key features is to iteratively use lower-order numerical methods so that a high-order numerical scheme can be obtained. The main advantages of the DC approach are its simplicity and robustness. In this paper, the DC idea is adopted to solve forward backward stochastic differential equations (FBSDEs), which have practical importance in many applications. Note that it is difficult to design high-order and relatively “clean” numerical schemes for FBSDEs, due to the involvement of randomness and the coupling of the forward and backward equations. This paper describes how to use the simplest Euler method in each DC step – keeping the computational complexity low – to achieve a high-order rate of convergence.
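The Euler-based DC iteration can be sketched on a deterministic toy problem (u' = -u, not the FBSDE scheme of the paper; all names here are illustrative): a forward-Euler pass produces a provisional solution, and a single Euler correction sweep driven by a trapezoidal residual lifts the accuracy from first to second order.

```python
import math

def f(u):               # test problem u' = -u, exact solution exp(-t)
    return -u

def dc_euler(u0, T, n):
    """Forward Euler plus one deferred-correction sweep (toy sketch)."""
    h = T / n
    # provisional forward-Euler solution on the uniform grid
    eta = [u0]
    for i in range(n):
        eta.append(eta[-1] + h * f(eta[-1]))
    # one Euler sweep for the error equation, driven by the residual of
    # the provisional solution (integral approximated by the trapezoid rule)
    e, corrected = 0.0, [u0]
    for i in range(n):
        quad = 0.5 * h * (f(eta[i]) + f(eta[i + 1]))
        e = e + h * (f(eta[i] + e) - f(eta[i])) + quad - (eta[i + 1] - eta[i])
        corrected.append(eta[i + 1] + e)
    return eta[-1], corrected[-1]

euler_end, dc_end = dc_euler(1.0, 1.0, 100)
err_euler = abs(euler_end - math.exp(-1.0))
err_dc = abs(dc_end - math.exp(-1.0))
```

On this linear problem the single correction sweep reduces the endpoint error by roughly three orders of magnitude at h = 0.01, consistent with moving from first to second order.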
We propose a simple, computationally efficient scheme for an X-ray spectrum simulator. The theoretical models describing the physical processes involved are employed in our Monte Carlo software in a coherent way, paving the way for straightforward future improvements. Our results compare satisfactorily with experimental results from the literature and with results from dedicated simulation software. The simplicity, excellent statistical errors, and short execution time of our code recommend it for intensive use in X-ray generation simulations.
In order to study the local refinement issue of the horizontal resolution for a global model with Spherical Centroidal Voronoi Tessellations (SCVTs), the SCVTs are set to 10242 cells and 40962 cells respectively using the density function. The ratio between the grid resolutions in the high- and low-resolution regions (hereafter RHL) is set to 1:2, 1:3 and 1:4 for 10242 cells and 40962 cells, and the width of the grid transition zone (for simplicity, WTZ) is set to 18° and 9° to investigate their impacts on the model simulation. The ideal test cases, i.e. the cosine bell and global steady-state nonlinear zonal geostrophic flow, are carried out with the above settings. Simulation results show that the larger the RHL, the larger the resulting error; the 1:4 ratio gives rise to much larger errors than the 1:2 or 1:3 ratio, and the errors resulting from the WTZ are much smaller than those from the RHL. No significant wave distortion or reflected waves are found when the fluctuation passes through the refinement region, and the error is small within the refinement region. Therefore, when designing a local refinement scheme in a global model with SCVTs, the RHL should be less than 1:4; i.e., the error is acceptable when the RHL is 1:2 or 1:3.
The discrepancy function measures the deviation of the empirical distribution of a point set in $[0,1]^{d}$ from the uniform distribution. In this paper, we study the classical discrepancy function with respect to the bounded mean oscillation and exponential Orlicz norms, as well as Sobolev, Besov and Triebel–Lizorkin norms with dominating mixed smoothness. We give sharp bounds for the discrepancy function under such norms with respect to infinite sequences.
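In dimension one the star discrepancy admits a simple closed form for a sorted point set, which makes the quantity concrete; the following sketch (an illustration, not from the paper) evaluates it and checks that centred equidistant points attain the optimal value 1/(2n).

```python
def star_discrepancy_1d(points):
    """Exact star discrepancy of a finite point set in [0, 1), d = 1.

    For sorted points x_0 <= ... <= x_{n-1}, the supremum over anchored
    boxes [0, t) is attained at the points themselves, giving the closed
    form max_i max(x_i - i/n, (i+1)/n - x_i)."""
    xs = sorted(points)
    n = len(xs)
    return max(max(x - i / n, (i + 1) / n - x) for i, x in enumerate(xs))

# Centred equidistant points (2i+1)/(2n) attain the optimal value 1/(2n).
n = 64
pts = [(2 * i + 1) / (2 * n) for i in range(n)]
d_star = star_discrepancy_1d(pts)
```

For these points both defects x_i - i/n and (i+1)/n - x_i equal 1/(2n), so d_star = 1/128 here.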
We study the construction of symplectic Runge-Kutta methods for stochastic Hamiltonian systems (SHS). Three types of systems, SHS with multiplicative noise, special separable Hamiltonians and multiple additive noise, respectively, are considered in this paper. Stochastic Runge-Kutta (SRK) methods for these systems are investigated, and the corresponding conditions for SRK methods to preserve the symplectic property are given. Based on the weak/strong order and symplectic conditions, some effective schemes are derived. In particular, using algebraic computation, we obtain two classes of high weak order symplectic Runge-Kutta methods for SHS with a single multiplicative noise, and two classes of high strong order symplectic Runge-Kutta methods for SHS with multiple multiplicative and additive noise, respectively. The numerical case studies confirm that the symplectic methods are efficient computational tools for long-term simulations.
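As a concrete instance of a one-stage symplectic SRK method, the implicit midpoint rule applied to the Kubo oscillator (a standard SHS test problem with a single multiplicative Stratonovich noise; the example is illustrative, not one of the schemes derived in the paper) conserves the quadratic invariant X² + Y² exactly, since the implicit step can be solved in closed form:

```python
import math
import random

def kubo_midpoint(x, y, a, sigma, h, dW):
    """One step of the implicit (stochastic) midpoint rule for the Kubo
    oscillator dX = -Y (a dt + sigma ∘ dW), dY = X (a dt + sigma ∘ dW).
    The 2x2 linear system of the implicit step is solved in closed form."""
    m = 0.5 * (a * h + sigma * dW)
    d = 1.0 + m * m
    return (((1 - m * m) * x - 2 * m * y) / d,
            (2 * m * x + (1 - m * m) * y) / d)

rng = random.Random(1)
x, y, h = 1.0, 0.0, 1e-2
for _ in range(1000):
    dW = rng.gauss(0.0, math.sqrt(h))
    x, y = kubo_midpoint(x, y, a=1.0, sigma=0.5, h=h, dW=dW)
energy = x * x + y * y    # conserved exactly by the midpoint rule
```

Per step the map is a rotation-like orthogonal transformation, so X² + Y² is preserved to machine precision over arbitrarily long simulations.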
In this paper we uniformly approximate the trajectories of the Cox–Ingersoll–Ross (CIR) process. At a sequence of random times the approximate trajectories are exact; in between, the approximation remains uniformly close to the exact trajectory. From a conceptual point of view, the proposed method gives a better quality of approximation in a path-wise sense than standard, or even exact, simulation of the CIR dynamics on a deterministic time grid.
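For comparison, a standard baseline is the full-truncation Euler scheme on a deterministic grid; the sketch below (illustrative parameter values, not the paper's exact-at-random-times construction) checks it against the known closed-form mean of the CIR process.

```python
import math
import random

def cir_full_truncation(x0, kappa, theta, sigma, T, n, rng):
    """Full-truncation Euler for dX = kappa*(theta - X) dt + sigma*sqrt(X) dW
    (a baseline scheme on a deterministic grid, not the paper's method).
    Negative excursions are truncated inside the coefficients only."""
    h, x = T / n, x0
    for _ in range(n):
        dW = rng.gauss(0.0, math.sqrt(h))
        xp = max(x, 0.0)                        # truncate negative excursions
        x = x + kappa * (theta - xp) * h + sigma * math.sqrt(xp) * dW
    return max(x, 0.0)

rng = random.Random(42)
paths = [cir_full_truncation(0.5, 1.0, 1.0, 0.3, 1.0, 100, rng)
         for _ in range(20000)]
mc_mean = sum(paths) / len(paths)
exact_mean = 1.0 + (0.5 - 1.0) * math.exp(-1.0)  # E[X_T] = theta + (x0-theta)e^{-kT}
```

The Monte Carlo mean agrees with the closed-form expectation up to discretization bias and sampling noise, while the scheme keeps the returned state non-negative.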
It is well known that the traditional fully integrated quadrilateral element fails to provide accurate results for the Helmholtz equation at large wave numbers, due to the “pollution error” caused by numerical dispersion. To overcome this deficiency, this paper proposes an element decomposition method (EDM) for analyzing 2D acoustic problems using quadrilateral elements. In the present EDM, the quadrilateral element is first subdivided into four sub-triangles, and the local acoustic gradient in each sub-triangle is obtained using a linear interpolation function. The acoustic gradient field of the whole quadrilateral is then formulated through a weighted averaging operation, which means only one integration point is adopted to construct the system matrix. To cure the numerical instability of one-point integration, a variation gradient term constructed from the variance of the local gradients is added. The discretized system equations are derived using the generalized Galerkin weak form. Numerical examples demonstrate that the EDM achieves better accuracy and higher computational efficiency. Moreover, as no mapping or coordinate transformation is involved, restrictions on element shape can easily be removed, so the EDM works well even for severely distorted meshes.
In this paper, we investigate the mean-square convergence of the split-step θ-scheme for nonlinear stochastic differential equations with jumps. Under some standard assumptions, we rigorously prove that the strong convergence rate of the split-step θ-scheme is one half. Some numerical experiments are carried out to confirm this theoretical result.
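For the linear test equation dX = -λX dt + μX dW (the jump part is omitted in this sketch, and all parameter values are illustrative), the implicit drift step of the split-step θ-scheme can be solved in closed form:

```python
import math
import random

def split_step_theta(x0, lam, mu, theta, T, n, rng):
    """Split-step theta-scheme for dX = -lam*X dt + mu*X dW.
    Drift step: x* = x + h*(theta*f(x*) + (1-theta)*f(x)); for the linear
    drift f(x) = -lam*x the implicit equation has a closed-form solution.
    Diffusion step: x_{k+1} = x* + mu*x* dW."""
    h, x = T / n, x0
    for _ in range(n):
        x_star = x * (1.0 - (1.0 - theta) * h * lam) / (1.0 + theta * h * lam)
        x = x_star + mu * x_star * rng.gauss(0.0, math.sqrt(h))
    return x

rng = random.Random(7)
est = sum(split_step_theta(1.0, 1.0, 0.2, 1.0, 1.0, 50, rng)
          for _ in range(20000)) / 20000
exact_mean = math.exp(-1.0)    # E[X_1] = x0 * exp(-lam*T) for this equation
```

With θ = 1 (split-step backward Euler) the per-step mean is damped by 1/(1 + λh), so the Monte Carlo mean lands close to the exact value up to an O(h) bias.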
We discuss modelling and simulation of volumetric rainfall in a catchment of the Murray–Darling Basin – an important food production region in Australia that was seriously affected by a recent prolonged drought. Consequently, there has been sustained interest in development of improved water management policies. In order to model accumulated volumetric catchment rainfall over a fixed time period, it is necessary to sum weighted rainfall depths at representative sites within each sub-catchment. Since sub-catchment rainfall may be highly correlated, the use of a Gamma distribution to model rainfall at each site means that catchment rainfall is expressed as a sum of correlated Gamma random variables. We compare four different models and conclude that a joint probability distribution for catchment rainfall constructed by using a copula of maximum entropy is the most effective.
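A simple way to generate correlated Gamma marginals, shown below for illustration, is the common-shock (trivariate reduction) construction; note this is an alternative device, not the maximum-entropy copula studied in the paper, and all parameter values are assumptions of this example.

```python
import math
import random

def correlated_gammas(k_shared, k1, k2, scale, n, rng):
    """Common-shock construction: X = G0 + G1, Y = G0 + G2 with independent
    Gamma summands sharing G0, giving
    corr(X, Y) = k_shared / sqrt((k_shared + k1) * (k_shared + k2))."""
    xs, ys = [], []
    for _ in range(n):
        g0 = rng.gammavariate(k_shared, scale)
        xs.append(g0 + rng.gammavariate(k1, scale))
        ys.append(g0 + rng.gammavariate(k2, scale))
    return xs, ys

rng = random.Random(3)
xs, ys = correlated_gammas(2.0, 1.0, 1.0, 1.0, 40000, rng)
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / len(xs)
sx = math.sqrt(sum((a - mx) ** 2 for a in xs) / len(xs))
sy = math.sqrt(sum((b - my) ** 2 for b in ys) / len(ys))
corr = cov / (sx * sy)    # theoretical value 2/3 for these shapes
```

Both marginals are Gamma(3, 1) here, and the shared Gamma(2, 1) shock induces the target correlation 2/3.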
In this paper we estimate quantile sensitivities for dependent sequences via infinitesimal perturbation analysis, and prove asymptotic unbiasedness, weak consistency, and a central limit theorem for the estimators under some mild conditions. Two common cases, the regenerative setting and ϕ-mixing, are analyzed further, and a new batched estimator is constructed based on regenerative cycles for regenerative processes. Two numerical examples, the G/G/1 queue and the Ornstein–Uhlenbeck process, are given to show the effectiveness of the estimator.
The cross-entropy method is a well-known adaptive importance sampling technique that requires estimating an optimal importance sampling distribution within a parametric class. In this paper we analyze an alternative version of the cross-entropy method in which the importance sampling distribution is instead selected from a general semiparametric class of distributions. We show that the semiparametric cross-entropy method delivers efficient estimators in a wide variety of rare-event problems. We illustrate the favourable performance of the method with numerical experiments.
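A textbook parametric instance of the cross-entropy method (tilting within the normal family N(μ, 1) to estimate a Gaussian tail probability; illustrative only, not the semiparametric class analysed in the paper) can be sketched as:

```python
import math
import random

def cross_entropy_normal(gamma, n=2000, rho=0.1, seed=5):
    """Multilevel cross-entropy for p = P(X > gamma), X ~ N(0, 1), with the
    importance density restricted to the parametric family N(mu, 1)."""
    rng = random.Random(seed)
    mu = 0.0
    while True:
        xs = sorted(rng.gauss(mu, 1.0) for _ in range(n))
        level = xs[int((1.0 - rho) * n)]          # (1 - rho) sample quantile
        elites = [x for x in xs if x >= level]
        # CE update: likelihood-ratio-weighted mean of the elite samples
        ws = [math.exp(-mu * x + 0.5 * mu * mu) for x in elites]
        mu = sum(w * x for w, x in zip(ws, elites)) / sum(ws)
        if level >= gamma:
            break
    # final importance-sampling estimate under N(mu, 1)
    m, total = 20000, 0.0
    for _ in range(m):
        x = rng.gauss(mu, 1.0)
        if x > gamma:
            total += math.exp(-mu * x + 0.5 * mu * mu)  # likelihood ratio
    return total / m

p_hat = cross_entropy_normal(4.0)
p_true = 0.5 * math.erfc(4.0 / math.sqrt(2.0))   # exact tail probability
```

Crude Monte Carlo would need on the order of 10⁷ samples to see this event at all, whereas the adaptively tilted estimator recovers p ≈ 3.2e-5 with a few percent relative error from 20000 final samples.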
The modified ghost fluid method (MGFM), due to its reasonable treatment of the ghost fluid state, has been shown to be robust and efficient when applied to compressible multi-medium flows. Other feasible definitions of the ghost fluid state, however, have yet to be systematically presented. By analyzing all possible wave structures and relations for a multi-medium Riemann problem, we derive all the conditions to define the ghost fluid state. Under these conditions, the solution in the real fluid region can be obtained exactly, regardless of the wave pattern in the ghost fluid region. According to the analysis herein, a practical ghost fluid method (PGFM) is proposed to simulate compressible multi-medium flows. In contrast with the MGFM, where three degrees of freedom at the interface are required to define the ghost fluid state, only one degree of freedom is required in this treatment. However, when these methods, though provably correct in theory, are used in computations for the multi-medium Riemann problem, numerical errors at the material interface may be inevitable. We show that these errors are mainly induced by the single-medium numerical scheme itself, rather than by the ghost fluid method. Equipped with some density-correction techniques, the PGFM is found to suppress these unphysical solutions dramatically.
We study exciton diffusion in organic semiconductors from a macroscopic viewpoint. In a unified way, we conduct an equivalence analysis between the Monte Carlo method and the diffusion equation model for photoluminescence quenching and photocurrent spectrum measurements, both in the presence and in the absence of the Förster energy transfer effect. Connections of these two models to the Stern-Volmer method and the exciton-exciton annihilation method are also specified for the photoluminescence quenching measurement.
By introducing a new Gaussian process and a new compensated Poisson random measure, we propose an explicit prediction-correction scheme for solving decoupled forward backward stochastic differential equations with jumps (FBSDEJs). For this scheme, we first theoretically obtain a general error estimate result, which implies that the scheme is stable. Then using this result, we rigorously prove that the accuracy of the explicit scheme can be of second order. Finally, we carry out some numerical experiments to verify our theoretical results.
Let (X1,...,Xn) be multivariate normal, with mean vector 𝛍 and covariance matrix 𝚺, and let Sn=eX1+⋯+eXn. The Laplace transform ℒ(θ)=𝔼e-θSn∝∫exp{-hθ(𝒙)}d𝒙 is represented as ℒ̃(θ)I(θ), where ℒ̃(θ) is given in closed form and I(θ) is the error factor (≈1). We obtain ℒ̃(θ) by replacing hθ(𝒙) with a second-order Taylor expansion around its minimiser 𝒙*. An algorithm for calculating the asymptotic expansion of 𝒙* is presented, and it is shown that I(θ)→ 1 as θ→∞. A variety of numerical methods for evaluating I(θ) is discussed, including Monte Carlo with importance sampling and quasi-Monte Carlo. Numerical examples (including Laplace-transform inversion for the density of Sn) are also given.
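The n = 1 case reduces to ℒ(θ) = 𝔼e^{-θe^X} with X ~ N(0,1), for which hθ(x) = θe^x + x²/2; the sketch below (an illustrative toy instance, with the minimiser found by Newton's method rather than the paper's asymptotic expansion) compares the resulting closed-form Laplace approximation ℒ̃(θ) with a quadrature reference, so the ratio plays the role of the error factor I(θ) ≈ 1.

```python
import math

def laplace_Ltilde(theta, iters=50):
    """Laplace approximation of L(theta) = E[exp(-theta * e^X)], X ~ N(0,1).
    h(x) = theta*e^x + x^2/2 is minimised by Newton's method; the Gaussian
    integral of the second-order Taylor expansion gives exp(-h*)/sqrt(h'')."""
    x = 0.0
    for _ in range(iters):               # Newton for h'(x) = theta*e^x + x = 0
        hp = theta * math.exp(x) + x
        hpp = theta * math.exp(x) + 1.0
        x -= hp / hpp
    h = theta * math.exp(x) + 0.5 * x * x
    hpp = theta * math.exp(x) + 1.0
    return math.exp(-h) / math.sqrt(hpp)

def L_quadrature(theta, lo=-12.0, hi=6.0, n=20000):
    """Reference value by trapezoidal quadrature of exp(-theta*e^x) * phi(x)."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-theta * math.exp(x)) * math.exp(-0.5 * x * x)
    return total * h / math.sqrt(2.0 * math.pi)

ratio = laplace_Ltilde(50.0) / L_quadrature(50.0)  # error factor I(theta)
```

Already at θ = 50 the error factor sits within about one percent of 1, consistent with I(θ) → 1 as θ → ∞.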