In a repulsive point process, points act as if they are repelling one another, leading to underdispersed configurations when compared to a standard Poisson point process. Such models are useful when competition for resources exists, as in the locations of towns and trees. Bertil Matérn introduced three models for repulsive point processes, referred to as types I, II, and III. Matérn used types I and II, and regarded type III as intractable. In this paper an algorithm is developed that allows for arbitrarily accurate approximation of the likelihood for data modeled by the Matérn type-III process. This method relies on a perfect simulation method that is shown to be fast in practice, generating samples in time that grows nearly linearly in the intensity parameter of the model, while the running times for more naive methods grow exponentially.
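The thinning mechanism behind the type-III model can be sketched in a few lines. The following is a minimal, naive sampler on the unit square and is not the perfect-simulation algorithm developed in the paper: points of a Poisson process receive independent uniform birth times, and a point survives only if no earlier surviving point lies within an assumed hard-core radius r.

```python
import math
import random


def poisson_count(lam, rng):
    """Draw a Poisson(lam) count by accumulating Exp(1) interarrival times."""
    n, total = 0, rng.expovariate(1.0)
    while total < lam:
        n += 1
        total += rng.expovariate(1.0)
    return n


def matern_iii_naive(lam, r, rng=None):
    """Naive Matern type-III thinning on the unit square (illustration only).

    Each point of a Poisson(lam) process gets an independent uniform birth
    time; scanning points in birth order, a point is kept only if no
    already-kept point lies within the hard-core distance r.
    """
    rng = rng or random.Random()
    pts = [(rng.random(), rng.random(), rng.random())   # (birth time, x, y)
           for _ in range(poisson_count(lam, rng))]
    pts.sort()                                          # earliest births first
    kept = []
    for _, x, y in pts:
        if all(math.hypot(x - kx, y - ky) >= r for kx, ky in kept):
            kept.append((x, y))
    return kept


if __name__ == "__main__":
    sample = matern_iii_naive(lam=100.0, r=0.05, rng=random.Random(1))
    print(len(sample), "points kept")
```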
The waste-recycling Monte Carlo (WRMC) algorithm introduced by physicists is a modification of the (multi-proposal) Metropolis–Hastings algorithm, which makes use of all the proposals in the empirical mean, whereas the standard (multi-proposal) Metropolis–Hastings algorithm uses only the accepted proposals. In this paper we extend the WRMC algorithm to a general control variate technique and exhibit the optimal choice of the control variate in terms of the asymptotic variance. We also give an example showing that, contrary to the intuition of physicists, the WRMC algorithm can have an asymptotic variance larger than that of the Metropolis–Hastings algorithm. However, in the particular case of the Metropolis–Hastings algorithm called the Boltzmann algorithm, we prove that the WRMC algorithm is asymptotically better than the Metropolis–Hastings algorithm. This property also holds for the multi-proposal Metropolis–Hastings algorithm. In this framework we consider a linear parametric generalization of WRMC and propose an estimator of the explicit optimal parameter using the proposals.
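For intuition, here is a minimal single-proposal sketch of waste recycling alongside a plain random-walk Metropolis–Hastings chain; the standard normal target, proposal scale, and test function are illustrative assumptions, not choices made in the paper.

```python
import math
import random


def waste_recycling_mh(f, log_target, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings returning both the usual ergodic
    average of f and a waste-recycling average that also uses rejected
    proposals, weighted by their acceptance probability (sketch only)."""
    rng = random.Random(seed)
    x = x0
    std_sum = wr_sum = 0.0
    for _ in range(n_steps):
        y = x + step * rng.gauss(0.0, 1.0)              # symmetric proposal
        log_ratio = log_target(y) - log_target(x)
        a = 1.0 if log_ratio >= 0 else math.exp(log_ratio)
        # Waste recycling: both the current state and the proposal contribute.
        wr_sum += (1.0 - a) * f(x) + a * f(y)
        if rng.random() < a:
            x = y                                       # accept
        std_sum += f(x)                                 # standard ergodic average
    return std_sum / n_steps, wr_sum / n_steps


if __name__ == "__main__":
    log_target = lambda x: -0.5 * x * x                 # standard normal target
    est_std, est_wr = waste_recycling_mh(lambda x: x * x, log_target,
                                         x0=0.0, n_steps=100_000)
    print("standard MH:", est_std, " waste-recycled:", est_wr)
```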
In this paper we study efficient simulation algorithms for estimating P(X > x), where X is the total time of a job with ideal time T that needs to be restarted after a failure. The main tool is importance sampling, where a good importance distribution is identified via an asymptotic description of the conditional distribution of T given X > x. If T ≡ t is constant, the problem reduces to the efficient simulation of geometric sums, and a standard algorithm involving a Cramér-type root, γ(t), is available. However, we also discuss an algorithm that avoids finding the root. If T is random, particular attention is given to T having either a gamma-like tail or a regularly varying tail, and to failures at Poisson times. Different types of conditional limit occur, in particular exponentially tilted Gumbel distributions and Pareto distributions. The algorithms based upon importance distributions for T using these asymptotic descriptions have bounded relative error as x→∞ when combined with the ideas used for a fixed t. Nevertheless, we give examples of algorithms carefully designed to enjoy bounded relative error that may provide little or no asymptotic improvement over crude Monte Carlo simulation when the computational effort is taken into account. To resolve this problem, an alternative algorithm using two-sided Lundberg bounds is suggested.
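As a point of reference, the restart model itself is straightforward to simulate for a fixed ideal time t and failures at Poisson times; the crude Monte Carlo estimator below is the naive baseline that the importance-sampling algorithms are designed to improve upon (the parameter values are assumptions for illustration).

```python
import random


def restart_total_time(t, failure_rate, rng):
    """Simulate the restart model with fixed ideal time t and failures at
    Poisson(failure_rate) times: the job restarts from scratch at every
    failure and completes once a failure-free period of length t occurs."""
    total = 0.0
    while True:
        u = rng.expovariate(failure_rate)   # time to the next failure
        if u >= t:
            return total + t                # the job finishes before failing
        total += u                          # wasted work, then restart


def crude_tail_estimate(x, t, failure_rate, n, seed=0):
    """Crude Monte Carlo estimate of P(X > x)."""
    rng = random.Random(seed)
    hits = sum(restart_total_time(t, failure_rate, rng) > x for _ in range(n))
    return hits / n


if __name__ == "__main__":
    print(crude_tail_estimate(x=30.0, t=1.0, failure_rate=1.0, n=100_000))
```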
The standard Markov chain Monte Carlo method of estimating an expected value is to generate a Markov chain which converges to the target distribution and then compute correlated sample averages. In many applications the quantity of interest θ is represented as a product of expected values, θ = µ1 ⋯ µk, and a natural estimator is a product of averages. To increase the confidence level, we can compute a median of independent runs. The goal of this paper is to analyze such an estimator, i.e. an estimator which is a ‘median of products of averages’ (MPA). Sufficient conditions are given for the MPA estimator θ̂ to have fixed relative precision ε at a given level of confidence 1 − α, that is, to satisfy P(|θ̂ − θ| ≤ εθ) ≥ 1 − α. Our main tool is a new bound on the mean-square error, valid also for nonreversible Markov chains on a finite state space.
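A minimal sketch of the MPA construction, with i.i.d. samplers standing in for the Markov chains of the paper: each factor µi is estimated by an independent sample average, the averages are multiplied, and the median over independent repetitions is returned. The toy target below is an assumption for illustration.

```python
import math
import random
import statistics


def product_of_averages(samplers, funcs, n, rng):
    """One 'product of averages': an independent sample average per factor."""
    prod = 1.0
    for sample, f in zip(samplers, funcs):
        prod *= sum(f(sample(rng)) for _ in range(n)) / n
    return prod


def mpa_estimator(samplers, funcs, n, repeats, seed=0):
    """Median of products of averages (MPA) for theta = mu_1 * ... * mu_k."""
    rng = random.Random(seed)
    return statistics.median(product_of_averages(samplers, funcs, n, rng)
                             for _ in range(repeats))


if __name__ == "__main__":
    # Toy target: mu_1 = E[U] = 0.5 for U uniform on (0, 1), and
    # mu_2 = E[exp(Z)] = exp(1/2) for Z standard normal, so theta ~ 0.824.
    samplers = [lambda r: r.random(), lambda r: r.gauss(0.0, 1.0)]
    funcs = [lambda u: u, math.exp]
    print(mpa_estimator(samplers, funcs, n=10_000, repeats=11))
```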
We study the discrete-time approximation of doubly reflected backward stochastic differential equations (BSDEs) in a multidimensional setting. As in Ma and Zhang (2005) or Bouchard and Chassagneux (2008), we introduce the discretely reflected counterpart of these equations. We then provide representation formulae which allow us to obtain new regularity results. We also propose an Euler scheme type approximation and give new convergence results for both discretely and continuously reflected BSDEs.
We consider Monte Carlo methods for the classical nonlinear filtering problem. The first method is based on a backward pathwise filtering equation and the second is related to a backward linear stochastic partial differential equation. We study the convergence of the proposed numerical algorithms. The methods considered offer such advantages as the capability, in principle, to solve filtering problems of large dimensionality, reliable error control, and recursiveness. Their efficiency is achieved through numerical procedures that use effective numerical schemes and variance reduction techniques. The results obtained are supported by numerical experiments.
Given a set of points in the plane, we consider the existence of a best least absolute deviations line and the problem of finding it. The most important properties are stated and proved, and two efficient methods for finding the best least absolute deviations line are proposed. The proposed methods are shown to be considerably more efficient than other known methods.
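For concreteness, a brute-force baseline (not one of the efficient methods proposed in the paper) can exploit the classical fact that, in non-degenerate cases, some optimal LAD line passes through at least two of the data points:

```python
def lad_line_bruteforce(points):
    """Best least absolute deviations (LAD) line by exhaustive search over
    lines through pairs of data points; an O(n^3) correctness baseline."""
    best = None
    for i, (x1, y1) in enumerate(points):
        for x2, y2 in points[i + 1:]:
            if x1 == x2:
                continue                       # skip vertical candidate lines
            a = (y2 - y1) / (x2 - x1)          # slope
            b = y1 - a * x1                    # intercept
            err = sum(abs(y - (a * x + b)) for x, y in points)
            if best is None or err < best[0]:
                best = (err, a, b)
    return best                                # (total deviation, slope, intercept)


if __name__ == "__main__":
    pts = [(0.0, 0.1), (1.0, 0.9), (2.0, 2.2), (3.0, 2.8), (4.0, 10.0)]  # one outlier
    print(lad_line_bruteforce(pts))
```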
A weighted graph G is a pair (V, ℰ) containing vertex set V and edge set ℰ, where each edge e ∈ ℰ is associated with a weight We. A subgraph of G is a forest if it has no cycles. All forests on the graph G form a probability space, where the probability of each forest is proportional to the product of the weights of its edges. This paper aims to simulate forests exactly from the target distribution. Methods based on coupling from the past (CFTP) and rejection sampling are presented. Comparisons of these methods are given theoretically and via simulation.
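One simple way to sample exactly from this distribution is rejection sampling with an independent-edge proposal; the particular proposal below is an illustrative choice, not necessarily the one analysed in the paper. Including each edge independently with probability We/(1 + We) gives every edge subset a probability proportional to the product of its weights, so rejecting all non-forests leaves exactly the target law.

```python
import random


def find(parent, v):
    """Union-find root lookup with path halving."""
    while parent[v] != v:
        parent[v] = parent[parent[v]]
        v = parent[v]
    return v


def sample_weighted_forest(n_vertices, weighted_edges, seed=None):
    """Rejection sampling of a forest F with P(F) proportional to the product
    of the weights of its edges (sketch with an independent-edge proposal)."""
    rng = random.Random(seed)
    while True:
        proposal = [(u, v) for (u, v, w) in weighted_edges
                    if rng.random() < w / (1.0 + w)]
        parent = list(range(n_vertices))       # union-find cycle check
        ok = True
        for u, v in proposal:
            ru, rv = find(parent, u), find(parent, v)
            if ru == rv:
                ok = False                     # the edge closes a cycle: reject
                break
            parent[ru] = rv
        if ok:
            return proposal


if __name__ == "__main__":
    edges = [(0, 1, 2.0), (1, 2, 0.5), (2, 0, 1.0), (2, 3, 3.0)]
    print(sample_weighted_forest(4, edges, seed=7))
```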
While the convergence properties of many sampling selection methods can be proven, one particular sampling selection method introduced in Baker (1987), closely related to ‘systematic sampling’ in statistics, has so far been treated only empirically. The main motivation of this paper is to begin a formal study of its convergence properties, since in practice it is by far the fastest selection method available. We show that convergence results for the systematic sampling selection method are related to the properties of some peculiar Markov chains.
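For reference, the systematic sampling selection step itself can be written in a few lines; the version below is the commonly implemented form (a single uniform offset with evenly spaced pointers) and is assumed here for illustration.

```python
import random


def systematic_resample(weights, rng=None):
    """Systematic sampling selection of len(weights) indices with probabilities
    proportional to the weights: one uniform offset and evenly spaced pointers
    over the cumulative weights, so the number of copies of index i differs
    from its expected value by less than one."""
    rng = rng or random.Random()
    n = len(weights)
    total = sum(weights)
    u = rng.random()                              # a single uniform draw
    pointers = [(i + u) / n * total for i in range(n)]
    indices, cum, j = [], weights[0], 0
    for p in pointers:
        while p > cum:                            # advance to the bin containing p
            j += 1
            cum += weights[j]
        indices.append(j)
    return indices


if __name__ == "__main__":
    print(systematic_resample([0.1, 0.4, 0.2, 0.3], random.Random(42)))
```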
We extend a result due to Zazanis (1992) on the analyticity of the expectation of suitable functionals of homogeneous Poisson processes with respect to the intensity of the process. As our main result, we provide Monte Carlo estimators for the derivatives. We apply our results to stochastic models which are of interest in stochastic geometry and insurance.
The paper deals with the asymptotic behavior of the bridge of a Gaussian process conditioned to pass through n fixed points at n fixed past instants. In particular, functional large deviation results are stated for small time. Several examples are considered: fractional Brownian motion (integrated or not) and m-fold integrated Brownian motion. As an application, the asymptotic behavior of the exit probability is studied and used for the numerical computation, via Monte Carlo methods, of the hitting probability of the unpinned process up to a given time.
We investigate the problem of using a Riemann sum with random subintervals to approximate the iterated Itô integral ∫w dw - or, equivalently, solving the corresponding stochastic differential equation by Euler's method with variable step sizes. In the past this task has been used as a counterexample to illustrate that variable step sizes must be used with extreme caution in stochastic numerical analysis. This article establishes a class of variable step size schemes which do work.
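The object under study can be seen concretely on a fixed uniform grid: the left-point Riemann sum below approximates the Itô integral ∫w dw on [0, 1], whose exact value is (W₁² − 1)/2 by Itô's formula; the paper's concern is what happens when the step sizes are instead chosen adaptively or randomly.

```python
import math
import random


def ito_riemann_sum(n_steps, rng):
    """Left-point Riemann sum for the Ito integral of W dW on [0, 1] over a
    fixed uniform grid, together with the exact value (W_1^2 - 1)/2."""
    dt = 1.0 / n_steps
    w, approx = 0.0, 0.0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        approx += w * dw                 # left-endpoint evaluation (Ito)
        w += dw
    exact = 0.5 * (w * w - 1.0)
    return approx, exact


if __name__ == "__main__":
    rng = random.Random(3)
    for n in (10, 100, 1000, 10000):
        a, e = ito_riemann_sum(n, rng)
        print(n, "steps:", a, "vs exact", e)
```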
The problem of finding the probability distribution of the first hitting time of a double integral process (DIP) such as the integrated Wiener process (IWP) has been an important and difficult endeavor in stochastic calculus. It has applications in many fields of physics (first exit time of a particle in a noisy force field) and in biology and neuroscience (spike time distribution of an integrate-and-fire neuron with exponentially decaying synaptic current). The only results available are an approximation of the stationary mean crossing time and the distribution of the first hitting time of the IWP to a constant boundary. We generalize these results and find an analytical formula for the first hitting time of the IWP to a continuous piecewise-cubic boundary. We use this formula to approximate the law of the first hitting time of a general DIP to a smooth curved boundary, and we provide an estimation of the convergence of this method. The accuracy of the approximation is computed in the general case for the IWP and the effective calculation of the crossing probability can be carried out through a Monte Carlo method.
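A crude Monte Carlo baseline (not the analytical formula derived in the paper) simulates the integrated Wiener process by an Euler scheme and records the first crossing of a constant boundary; the boundary level, horizon, and step size below are assumptions for illustration.

```python
import math
import random


def iwp_first_hitting(boundary, horizon, dt, rng):
    """Euler simulation of the integrated Wiener process X_t = integral of W_s ds,
    returning the first time X crosses `boundary`, or None if no crossing
    occurs before `horizon`."""
    x, v, t = 0.0, 0.0, 0.0          # X starts at 0, W starts at 0
    while t < horizon:
        v += rng.gauss(0.0, math.sqrt(dt))   # increment of W
        x += v * dt                          # X integrates W
        t += dt
        if x >= boundary:
            return t
    return None


if __name__ == "__main__":
    rng = random.Random(11)
    times = [iwp_first_hitting(boundary=1.0, horizon=10.0, dt=0.01, rng=rng)
             for _ in range(2000)]
    hit = [t for t in times if t is not None]
    print("estimated P(tau <= 10) ~", len(hit) / len(times))
```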
In the framework of patterns in random texts, Markov chain embedding techniques consist of turning the occurrences of a pattern in an order-m Markov sequence into those of a subset of states of an order-1 Markov chain. In this paper we use the theory of languages and automata to provide a space-optimal Markov chain embedding using the new notion of pattern Markov chains (PMCs), and we give explicit constructive algorithms to build the PMC associated with any given pattern problem. The interest of PMCs is then illustrated through the exact computation of P-values, whose complexity is discussed and compared with that of classical asymptotic approximations. Finally, we consider two illustrative examples of highly degenerate pattern problems (structured motifs and PROSITE signatures), which further illustrate the usefulness of our approach.
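A toy order-0 version of this embedding, assumed here for illustration, tracks the longest suffix of the text read so far that is a prefix of the pattern; the exact distribution of the occurrence count, and hence exact P-values, then follows by dynamic programming over the automaton states. The paper itself treats order-m texts and far more degenerate patterns.

```python
def build_automaton(pattern, alphabet):
    """States 0..m record the length of the longest suffix of the text that is
    a prefix of `pattern`; entering state m signals one (possibly overlapping)
    occurrence."""
    m = len(pattern)
    delta = {}
    for q in range(m + 1):
        for c in alphabet:
            s = pattern[:q] + c
            k = min(len(s), m)
            while k > 0 and s[-k:] != pattern[:k]:
                k -= 1
            delta[(q, c)] = k
    return delta


def count_distribution(pattern, probs, n):
    """Exact distribution of the number of occurrences of `pattern` in a random
    text of length n with i.i.d. letters (probabilities `probs`)."""
    m = len(pattern)
    delta = build_automaton(pattern, list(probs))
    dist = {(0, 0): 1.0}                      # (automaton state, count) -> prob
    for _ in range(n):
        new = {}
        for (q, cnt), p in dist.items():
            for c, pc in probs.items():
                q2 = delta[(q, c)]
                key = (q2, cnt + (q2 == m))
                new[key] = new.get(key, 0.0) + p * pc
        dist = new
    counts = {}
    for (_, cnt), p in dist.items():
        counts[cnt] = counts.get(cnt, 0.0) + p
    return counts


if __name__ == "__main__":
    d = count_distribution("aba", {"a": 0.5, "b": 0.5}, n=10)
    # P-value of observing at least 2 occurrences of 'aba' in a text of length 10:
    print(sum(p for k, p in d.items() if k >= 2))
```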
The target measure μ is the distribution of a random vector in a box ℬ, a Cartesian product of bounded intervals. The Gibbs sampler is a Markov chain with invariant measure μ. A ‘coupling from the past’ construction of the Gibbs sampler is used to show ergodicity of the dynamics and to perfectly simulate μ. An algorithm to sample vectors with multinormal distribution truncated to ℬ is then implemented.
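The forward dynamics alone look as follows for a standard bivariate normal with correlation ρ truncated to a box (an illustrative target chosen here); the paper wraps such a Gibbs sweep in a coupling-from-the-past construction to obtain perfect samples.

```python
import random
from statistics import NormalDist


def truncated_normal(mean, std, lo, hi, rng):
    """Inverse-CDF sampling from N(mean, std^2) truncated to [lo, hi]."""
    z = NormalDist()
    u_lo, u_hi = z.cdf((lo - mean) / std), z.cdf((hi - mean) / std)
    u = u_lo + (u_hi - u_lo) * rng.random()
    return mean + std * z.inv_cdf(u)


def gibbs_truncated_bivariate_normal(rho, box, n_sweeps, seed=0):
    """Plain Gibbs sampler for a standard bivariate normal with correlation rho
    truncated to the box ((a1, b1), (a2, b2)); forward dynamics only, without
    the coupling-from-the-past wrapper."""
    rng = random.Random(seed)
    (a1, b1), (a2, b2) = box
    std = (1.0 - rho * rho) ** 0.5
    x1, x2 = 0.5 * (a1 + b1), 0.5 * (a2 + b2)   # start in the middle of the box
    samples = []
    for _ in range(n_sweeps):
        x1 = truncated_normal(rho * x2, std, a1, b1, rng)  # draw x1 | x2
        x2 = truncated_normal(rho * x1, std, a2, b2, rng)  # draw x2 | x1
        samples.append((x1, x2))
    return samples


if __name__ == "__main__":
    out = gibbs_truncated_bivariate_normal(rho=0.8, box=((0.0, 1.0), (0.0, 2.0)),
                                           n_sweeps=10_000)
    print(out[-1])
```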
We develop an integration by parts technique for point processes, with application to the computation of sensitivities via Monte Carlo simulations in stochastic models with jumps. The method is applied to density estimation with respect to the Lebesgue measure via a modified kernel estimator which is less sensitive to variations of the bandwidth parameter than standard kernel estimators. This applies to random variables whose densities are not analytically known, and requires knowledge of the point process jump times.
In this paper we are interested in a nonlinear parabolic evolution equation occurring in rheology. We give a probabilistic interpretation to this equation by associating a nonlinear martingale problem with it. We prove the existence of a unique solution, P, to this martingale problem. For any t, the time marginal of P at time t admits a density ρ(t,x) with respect to the Lebesgue measure, where the function ρ is the unique weak solution to the evolution equation in a well-chosen energy space. Next we introduce a simulable system of n interacting particles and prove that the empirical measure of this system converges to P as n tends to ∞. This propagation-of-chaos result ensures that the solution to the equation of interest can be approximated using a Monte Carlo method. Finally, we illustrate the convergence in some numerical experiments.
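The particle approximation can be illustrated with a generic McKean–Vlasov toy model in which each particle is attracted to the empirical mean of the system; this drift is an assumption for illustration only and is not the one appearing in the rheology equation studied in the paper.

```python
import math
import random


def interacting_particles(n_particles, n_steps, dt, sigma, seed=0):
    """Euler scheme for a toy system of n interacting particles,

        dX^i = (mean_j X^j - X^i) dt + sigma dW^i,

    illustrating the particle / propagation-of-chaos approach: as n grows, the
    empirical measure approximates the law of the limiting nonlinear process."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    noise = sigma * math.sqrt(dt)
    for _ in range(n_steps):
        m = sum(x) / n_particles                       # empirical mean
        x = [xi + (m - xi) * dt + noise * rng.gauss(0.0, 1.0) for xi in x]
    return x


if __name__ == "__main__":
    cloud = interacting_particles(n_particles=1000, n_steps=200, dt=0.01, sigma=1.0)
    print("empirical mean:", sum(cloud) / len(cloud))
```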
We consider basic ergodicity properties of adaptive Markov chain Monte Carlo algorithms under minimal assumptions, using coupling constructions. We prove convergence in distribution and a weak law of large numbers. We also give counterexamples to demonstrate that the assumptions we make are not redundant.
We present bounds on the decay parameter for absorbing birth–death processes adapted from results of Chen (2000), (2001). We address numerical issues associated with computing these bounds, and assess their accuracy for several models, including the stochastic logistic model, for which estimates of the decay parameter have been obtained previously by Nåsell (2001).