In this paper, we consider the random-scan symmetric random walk Metropolis algorithm (RSM) on ℝ^d. This algorithm performs a Metropolis step on just one coordinate at a time (as opposed to the full-dimensional symmetric random walk Metropolis algorithm, which proposes a transition on all coordinates at once). We present various sufficient conditions implying V-uniform ergodicity of the RSM when the target density decreases either subexponentially or exponentially in the tails.
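The random-scan scheme described above can be illustrated with a short sketch (not code from the paper): one coordinate is chosen uniformly at random, perturbed by a symmetric proposal, and accepted by the usual Metropolis ratio. The standard Gaussian target and the proposal scale below are illustrative choices.

```python
import numpy as np

def rsm_step(x, log_target, rng, scale=1.0):
    """One random-scan Metropolis step: perturb a single random coordinate."""
    i = rng.integers(len(x))           # pick one coordinate uniformly
    prop = x.copy()
    prop[i] += rng.normal(0.0, scale)  # symmetric random-walk proposal
    # symmetric proposal => acceptance ratio reduces to the target ratio
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        return prop
    return x

rng = np.random.default_rng(0)
log_target = lambda x: -0.5 * np.sum(x**2)  # standard Gaussian on R^d
x = np.zeros(3)
chain = []
for _ in range(20000):
    x = rsm_step(x, log_target, rng)
    chain.append(x.copy())
samples = np.array(chain[5000:])  # discard burn-in
```

After burn-in the empirical moments should match the standard Gaussian target.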
An algorithm is developed for exact simulation from distributions that are defined as fixed points of maps between spaces of probability measures. The fixed points of the class of maps under consideration include examples of limit distributions of random variables studied in the probabilistic analysis of algorithms. Approximating sequences for the densities of the fixed points with explicit error bounds are constructed. The sampling algorithm relies on a modified rejection method.
Simulated annealing is a popular and much studied method for maximizing functions on finite or compact spaces. For noncompact state spaces, the method is still sound, but convergence results are scarce. We show here how to prove convergence in such cases, for Markov chains satisfying suitable drift and minorization conditions.
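A minimal simulated annealing sketch, for orientation only (the paper's drift and minorization conditions are not modelled here): Metropolis acceptance at a decreasing temperature, with a geometric cooling schedule and a one-dimensional objective chosen purely for illustration.

```python
import numpy as np

def anneal(f, x0, rng, n_iter=5000, t0=1.0, cooling=0.999):
    """Simulated annealing with Metropolis acceptance and geometric cooling."""
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(n_iter):
        y = x + rng.normal(0.0, 1.0)  # random-walk proposal
        fy = f(y)
        # always accept downhill moves; uphill with prob exp(-(fy - fx)/t)
        if fy < fx or rng.uniform() < np.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # lower the temperature
    return best, fbest

rng = np.random.default_rng(1)
best, fbest = anneal(lambda x: (x - 2.0) ** 2, x0=10.0, rng=rng)
```

Here the state space ℝ is noncompact, the setting the abstract addresses, but the quadratic objective makes convergence easy to check numerically.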
A new procedure that generates the transient solution of the first moment of the state of a Markovian queueing network with state-dependent arrivals, services, and routeing is developed. The procedure involves defining a partial differential equation that relates an approximate multivariate cumulant generating function to the intensity functions of the network. The partial differential equation then yields a set of ordinary differential equations which are numerically solved to obtain the first moment.
We consider a continuous-time Markov additive process (Jt,St) with (Jt) an irreducible Markov chain on E = {1,…,s}; it is known that (St/t) satisfies the large deviations principle as t → ∞. In this paper we present a variational formula H for the rate function κ∗; in a certain sense, this expresses a composition of two large deviations principles. Moreover, under suitable hypotheses, we can consider two other continuous-time Markov additive processes derived from (Jt,St): the averaged parameters model (Jt,St(A)) and the fluid model (Jt,St(F)). Some convergence results are then presented, and the variational formula H can be employed to show that, in a certain sense, the convergences for (Jt,St(A)) and (Jt,St(F)) are faster than the corresponding convergences for (Jt,St).
The coupon subset collection problem is a generalization of the classical coupon collecting problem, in that rather than collecting individual coupons we obtain, at each time point, a random subset of coupons. The problem of interest is to determine the expected number of subsets needed until each coupon is contained in at least one of these subsets. We provide bounds on this number, give efficient simulation procedures for estimating it, and then apply our results to a reliability problem.
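A simple simulation estimator for the expected number of subsets, in the spirit of (but not taken from) the paper: each draw includes each coupon independently with probability p (an illustrative subset distribution), and we average the number of draws until all coupons have appeared.

```python
import numpy as np

def draws_until_complete(n_coupons, p, rng):
    """Number of random subsets (each coupon included w.p. p) until all seen."""
    seen = np.zeros(n_coupons, dtype=bool)
    draws = 0
    while not seen.all():
        seen |= rng.uniform(size=n_coupons) < p  # this draw's random subset
        draws += 1
    return draws

rng = np.random.default_rng(2)
est = np.mean([draws_until_complete(5, 0.5, rng) for _ in range(4000)])
```

For this independent-inclusion model the answer is the mean of the maximum of n geometric(p) variables, so the estimator can be checked against Σ_k [1 − (1 − (1−p)^k)^n].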
Let X = (X(t) : t ≥ 0) be a Lévy process and X∊ the compensated sum of jumps not exceeding ∊ in absolute value, with σ²(∊) = var(X∊(1)). In simulation, X - X∊ is easily generated as the sum of a Brownian term and a compound Poisson one, and we investigate here when X∊/σ(∊) can be approximated by another Brownian term. A necessary and sufficient condition in terms of σ(∊) is given, and it is shown that when the condition fails, the behaviour of X∊/σ(∊) can be quite intricate. This condition is also related to the decay of terms in series expansions. We further discuss error rates in terms of Berry-Esseen bounds and Edgeworth approximations.
Wang and Pötzelberger (1997) derived an explicit formula for the probability that a Brownian motion crosses a one-sided piecewise linear boundary and used this formula to approximate the boundary crossing probability for general nonlinear boundaries. The present paper gives a sharper asymptotic upper bound of the approximation error for the formula, and generalizes the results to two-sided boundaries. Numerical computations are easily carried out using the Monte Carlo simulation method. A rule is proposed for choosing optimal nodes for the approximating piecewise linear boundaries, so that the corresponding approximation errors of boundary crossing probabilities converge to zero at a rate of O(1/n²).
The paper reviews the formulation of the linked stress release model for large scale seismicity together with aspects of its application. Using data from Taiwan for illustrative purposes, models can be selected and verified using tools that include Akaike's information criterion (AIC), numerical analysis, residual point processes and Monte Carlo simulation.
It is proved that the strong Doeblin condition (i.e., p^s(x,y) ≥ a_s π(y) for all x, y in the state space) implies convergence in the relative supremum norm for a general Markov chain. The convergence is geometric with rate (1 - a_s)^{1/s}. If the detailed balance condition and a weak continuity condition are satisfied, then the strong Doeblin condition is equivalent to convergence in the relative supremum norm. Convergence in other norms under weaker assumptions is proved. The results give qualitative understanding of the convergence.
Two related individuals are identical by descent at a genetic locus if they share the same gene copy at that locus due to inheritance from a recent common ancestor. We consider idealized continuous identity by descent (IBD) data in which IBD status is known continuously along chromosomes. IBD data contains information about the relationship between the two individuals, and about the underlying crossover processes. We present a Monte Carlo method for calculating probabilities for IBD data. The method is not restricted to Haldane's Poisson process model of crossing-over but may be used with other models including the chi-square, Kosambi renewal and Sturt models. Results of a simulation study demonstrate that IBD data can be used to distinguish between alternative models for the crossover process.
This paper proposes and analyzes discrete-time approximations to a class of diffusions, with an emphasis on preserving certain important features of the continuous-time processes in the approximations. We start with multivariate diffusions having three features in particular: they are martingales, each of their components evolves within the unit interval, and the components are almost surely ordered. In the models of the term structure of interest rates that motivate our investigation, these properties have the important implications that the model is arbitrage-free and that interest rates remain positive. In practice, numerical work with such models often requires Monte Carlo simulation and thus entails replacing the original continuous-time model with a discrete-time approximation. It is desirable that the approximating processes preserve the three features of the original model just noted, though standard discretization methods do not. We introduce new discretizations based on first applying nonlinear transformations from the unit interval to the real line (in particular, the inverse normal and inverse logit), then using an Euler discretization, and finally applying a small adjustment to the drift in the Euler scheme. We verify that these methods enforce important features in the discretization with no loss in the order of convergence (weak or strong). Numerical results suggest that these methods can also yield a better approximation to the law of the continuous-time process than does a more standard discretization.
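The transform-then-discretize idea can be sketched as follows. This is an illustration only, not the paper's scheme: it uses the inverse-logit map with the plain Itô drift from the change of variables (the paper additionally applies a small drift adjustment), and the illustrative SDE dX = σX(1−X)dW, a martingale on the unit interval, is our own choice.

```python
import numpy as np

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))

def logit_euler_paths(x0, sigma, T, n_steps, n_paths, rng):
    """Euler scheme for the (0,1)-valued martingale dX = sigma*X*(1-X)*dW,
    applied after the logit transform Y = log(X/(1-X)). By Ito's formula
    dY = 0.5*sigma^2*(2X - 1) dt + sigma dW, and X = sigmoid(Y) is
    automatically confined to (0,1)."""
    h = T / n_steps
    y = np.full(n_paths, np.log(x0 / (1.0 - x0)))
    for _ in range(n_steps):
        x = sigmoid(y)
        y += (0.5 * sigma**2 * (2.0 * x - 1.0) * h
              + sigma * np.sqrt(h) * rng.normal(size=n_paths))
    return sigmoid(y)

rng = np.random.default_rng(5)
xT = logit_euler_paths(x0=0.3, sigma=0.5, T=1.0, n_steps=100,
                       n_paths=20000, rng=rng)
```

Unlike a naive Euler step on X, which can leave the unit interval, every discretized path here stays strictly inside (0,1), and the martingale property E[X_T] ≈ X_0 is preserved up to discretization and Monte Carlo error.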
We consider adaptive importance sampling for a Markov chain with scoring. It is shown that convergence to the zero-variance importance sampling chain for the mean total score occurs exponentially fast under general conditions. These results extend previous work in Kollman (1993) and in Kollman et al. (1999) for finite state spaces.
This article describes new estimates for the second largest eigenvalue in absolute value of reversible and ergodic Markov chains on finite state spaces. These estimates apply when the stationary distribution assigns a probability higher than 0.702 to some given state of the chain. Geometric tools are used. The bounds mainly involve the isoperimetric constant of the chain, and hence generalize famous results obtained for the second eigenvalue. Comparison estimates are also established, using the isoperimetric constant of a reference chain. These results apply to the Metropolis-Hastings algorithm in order to solve minimization problems, when the probability of obtaining the solution from the algorithm can be chosen beforehand. For these dynamics, robust bounds are obtained at moderate levels of concentration.
A parametric family of completely random measures, which includes gamma random measures, positive stable random measures as well as inverse Gaussian measures, is defined. In order to develop models for clustered point patterns with dependencies between points, the family is used in a shot-noise construction as intensity measures for Cox processes. The resulting Cox processes are of Poisson cluster process type and include Poisson processes and ordinary Neyman-Scott processes.
We show characteristics of the completely random measures, illustrated by simulations, and derive moment and mixing properties for the shot-noise random measures. Finally, statistical inference for shot-noise Cox processes is considered, and some results on nearest-neighbour Markov properties are given.
In this paper, we give necessary and sufficient conditions to ensure the validity of confidence intervals, based on the central limit theorem, in simulations of highly reliable Markovian systems. We resort to simulations because of the frequently huge state space in practical systems. So far the literature has focused on the property of bounded relative error. In this paper we focus on ‘bounded normal approximation’ which asserts that the approximation of the normal law, suggested by the central limit theorem, does not deteriorate as the reliability of the system increases. Here we see that the set of systems with bounded normal approximation is (strictly) included in the set of systems with bounded relative error.
We study a class of simulated annealing type algorithms for global minimization with general acceptance probabilities. This paper presents simple conditions, easy to verify in practice, which ensure the convergence of the algorithm to the global minimum with probability 1.
The filtering problem concerns the estimation of a stochastic process X from its noisy partial information Y. With the notable exception of the linear-Gaussian situation, general optimal filters have no finitely recursive solution. The aim of this work is the design of a Monte Carlo particle system approach to solve discrete time and nonlinear filtering problems. The main result is a uniform convergence theorem. We introduce a concept of regularity and we give a simple ergodic condition on the signal semigroup for the Monte Carlo particle filter to converge in law and uniformly with respect to time to the optimal filter, yielding what seems to be the first uniform convergence result for a particle approximation of the nonlinear filtering equation.
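A minimal bootstrap particle filter illustrates the Monte Carlo particle system idea (propagate, weight by the likelihood, resample); the linear-Gaussian random-walk model and all noise scales below are illustrative assumptions, not the paper's setting, and this sketch does not address the uniform-in-time convergence that is the paper's contribution.

```python
import numpy as np

def bootstrap_filter(ys, n_particles, rng, sig_x=0.1, sig_y=0.5):
    """Bootstrap particle filter for X_t = X_{t-1} + N(0, sig_x^2),
    Y_t = X_t + N(0, sig_y^2). Returns the filtering means E[X_t | Y_1..t]."""
    x = rng.normal(0.0, 1.0, n_particles)  # particles from a diffuse prior
    means = []
    for y in ys:
        x = x + rng.normal(0.0, sig_x, n_particles)  # propagate (mutation)
        logw = -0.5 * ((y - x) / sig_y) ** 2          # observation log-weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))                   # weighted filtering mean
        x = rng.choice(x, size=n_particles, p=w)      # multinomial resampling
    return np.array(means)

rng = np.random.default_rng(4)
true_x = 1.5
ys = true_x + rng.normal(0.0, 0.5, 50)  # noisy observations of a fixed state
means = bootstrap_filter(ys, n_particles=2000, rng=rng)
```

With a near-constant signal the filtering means should settle near the true state once the effect of the diffuse prior has washed out.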
An explicit formula for the probability that a Brownian motion crosses a piecewise linear boundary in a finite time interval is derived. This formula is used to obtain approximations to the crossing probabilities for general boundaries which are the uniform limits of piecewise linear functions. The rules for assessing the accuracies of the approximations are given. The calculations of the crossing probabilities are easily carried out through Monte Carlo methods. Some numerical examples are provided.
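The Monte Carlo approach can be sketched as follows (an illustration, not the paper's code): simulate the Brownian motion at the boundary's nodes, then account for crossings between nodes with the classical Brownian-bridge identity — given endpoint values a, b both below a linear boundary running from c0 to c1 over an interval of length Δt, the bridge crosses it with probability exp(−2(c0−a)(c1−b)/Δt). The constant boundary below is chosen so the answer is known in closed form.

```python
import numpy as np

def crossing_prob(boundary_nodes, t_nodes, n_paths, rng):
    """MC estimate of P(W crosses the piecewise linear boundary on [0, T]).

    Simulates W at the nodes; where the path is below the boundary at both
    ends of a subinterval, the exact bridge crossing probability
    exp(-2(c0-a)(c1-b)/dt) is used, and these combine by the Markov property.
    """
    dt = np.diff(t_nodes)
    crossed = 0.0
    for _ in range(n_paths):
        w = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt)))])
        if np.any(w >= boundary_nodes):
            crossed += 1.0  # crossed at a node
            continue
        a, b = w[:-1], w[1:]
        c0, c1 = boundary_nodes[:-1], boundary_nodes[1:]
        p_miss = np.prod(1.0 - np.exp(-2.0 * (c0 - a) * (c1 - b) / dt))
        crossed += 1.0 - p_miss  # probability of a crossing between nodes
    return crossed / n_paths

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 6)
c = np.ones(6)  # constant boundary c(t) = 1 as a sanity check
est = crossing_prob(c, t, 20000, rng)
```

For the constant boundary c = 1 on [0, 1], the reflection principle gives the exact crossing probability 2(1 − Φ(1)) ≈ 0.317, against which the estimate can be checked.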
A rectangular tessellation is a covering of the plane by non-overlapping rectangles. A basic theory for general homogeneous random rectangular tessellations is developed, and it is shown that many first-order mean values may be expressed in terms of just three basic quantities. Corresponding values for independent superpositions of two or more such tessellations are derived. The most interesting homogeneous rectangular tessellations are those with only T-vertices (i.e. no X-vertices). Gilbert's (1967) isotropic model adapted to this two-orthogonal-orientations case, although simply specified, appears theoretically intractable, due to a complex ‘blocking’ effect. However, the approximating penetration model, also introduced by Gilbert, is found to be both tractable and informative about the true model. A multi-stage method for simulating the model is developed, and the distributions of important characteristics are estimated.