In this paper, statistical properties of estimators of drift parameters for diffusion processes are studied by means of modern numerical methods for stochastic differential equations. The approach is particularly useful for discrete-time samples, where estimators can be constructed by making discrete-time approximations to the stochastic integrals appearing in the maximum likelihood estimators for continuously observed diffusions. A review is given of the necessary theory for parameter estimation for diffusion processes and for simulation of diffusion processes. Three examples are studied.
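As an illustration of the approach described above (not the paper's own example), the following Python sketch simulates an Ornstein–Uhlenbeck process dX_t = −θX_t dt + σ dW_t by the Euler–Maruyama scheme and forms the discretized maximum likelihood estimator of the drift parameter θ, with the stochastic integrals in the continuous-observation likelihood replaced by Riemann–Itô sums. All parameter values are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: Euler-Maruyama simulation of an Ornstein-Uhlenbeck process
#   dX_t = -theta * X_t dt + sigma dW_t,
# followed by the discrete-time approximation of the continuous-observation
# MLE for the drift parameter:
#   theta_hat = - sum_i X_i (X_{i+1} - X_i) / (dt * sum_i X_i^2),
# i.e. the stochastic integrals replaced by their Riemann-Ito sums.

rng = np.random.default_rng(0)
theta, sigma = 1.5, 0.5          # true drift and diffusion parameters (assumed)
dt, n = 0.01, 200_000            # step size and number of steps

x = np.empty(n + 1)
x[0] = 0.0
dW = rng.normal(0.0, np.sqrt(dt), size=n)
for i in range(n):               # Euler-Maruyama recursion
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * dW[i]

dx = np.diff(x)
theta_hat = -np.sum(x[:-1] * dx) / (dt * np.sum(x[:-1] ** 2))
print(theta_hat)
```

With this step size the discretization bias is small relative to the Monte Carlo error, so the estimate lands close to the true θ = 1.5.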
We establish a necessary condition for any importance sampling scheme to give bounded relative error when estimating a performance measure of a highly reliable Markovian system. Also, a class of importance sampling methods is defined for which we prove a necessary and sufficient condition for bounded relative error for the performance measure estimator. This class of probability measures includes all of the currently existing failure biasing methods in the literature. Similar conditions for derivative estimators are established.
In this paper, we develop mathematical machinery for verifying that a broad class of general state space Markov chains reacts smoothly to certain types of perturbations in the underlying transition structure. Our main result provides conditions under which the stationary probability measure of an ergodic Harris-recurrent Markov chain is differentiable in a certain strong sense. The approach is based on likelihood ratio ‘change-of-measure' arguments, and leads directly to a ‘likelihood ratio gradient estimator' that can be computed numerically.
Let ψ(u) be the ruin probability in a risk process with initial reserve u, Poisson arrival rate β, claim size distribution B and premium rate p(x) at level x of the reserve. Let γ(x) be the non-zero solution of the local Lundberg equation β(B̂[γ(x)] − 1) = γ(x)p(x), where B̂[s] = ∫e^{sy}B(dy). It is shown that I(u) = ∫₀ᵘ γ(x) dx is non-decreasing and that log ψ(u) ≈ –I(u) in a slow Markov walk limit. Though the results and conditions are of large deviations type, the proofs are elementary and utilize piecewise comparisons with standard risk processes with constant premium rate p. Simulation via importance sampling, using the local exponential change of measure defined in terms of γ(x), is also discussed and some numerical results are presented.
We suggest a new universal method of stochastic simulation, allowing us to generate rather efficiently random vectors with arbitrary densities in a connected open region or on its boundary. Our method belongs to the class of dynamic Monte Carlo procedures and is based on a special construction of a Markov chain on the boundary of the region. Its remarkable feature is that this chain admits a simple simulation, based on a universal (depending only on the dimensionality of the space) stochastic driver.
We propose an AR(1) model that can be used to generate logistic processes. The proposed model has a simple probability and correlation structure that can accommodate the full range of attainable correlation. The correlation structure and the joint distribution of the proposed model are given, together with its conditional mean and variance.
Likelihood ratios are used in computer simulation to estimate expectations with respect to one law from simulation of another. This importance sampling technique can be implemented with either the likelihood ratio at the end of the simulated time horizon or with a sequence of likelihood ratios at intermediate times. Since a likelihood ratio process is a martingale, the intermediate values are conditional expectations of the final value and their use introduces no bias.
We provide conditions under which using conditional expectations in this way brings guaranteed variance reduction. We use stochastic orderings to get positive dependence between a process and its likelihood ratio, from which variance reduction follows. Our analysis supports the following rough statement: for increasing functionals of associated processes with monotone likelihood ratio, conditioning helps. Examples are drawn from recursively defined processes, Markov chains in discrete and continuous time, and processes with Poisson input.
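The basic likelihood-ratio technique of the two abstracts above can be sketched in a toy example that is not taken from either paper: a rare-event probability for a Gaussian random walk is estimated under an exponentially tilted law and reweighted by the terminal likelihood ratio. The parameter choices are illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt

# Hedged illustration: estimate p = P(S_n >= a) for a Gaussian random walk
# S_n = X_1 + ... + X_n, X_i ~ N(0, 1), by simulating under the tilted law
# N(mu, 1) and reweighting with the terminal likelihood ratio
#   L_n = prod_i phi(X_i) / phi_mu(X_i) = exp(-mu * S_n + n * mu^2 / 2).
# The partial products L_1, L_2, ... form the martingale of intermediate
# likelihood ratios discussed above.

rng = np.random.default_rng(1)
n, a, reps = 20, 15.0, 100_000
mu = a / n                                  # tilt that makes the event typical

xs = rng.normal(mu, 1.0, size=(reps, n))    # sample under the tilted law
s = xs.sum(axis=1)
lr = np.exp(-mu * s + n * mu * mu / 2.0)    # final-horizon likelihood ratio
est = np.mean((s >= a) * lr)                # unbiased for p under the tilt

exact = 0.5 * (1.0 - erf(a / sqrt(2.0 * n)))  # p = P(N(0, n) >= a), for comparison
print(est, exact)
```

Plain Monte Carlo would need on the order of millions of replications to see this event at all; the tilted estimator recovers it to a few percent with 10^5 replications.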
Given a parametric family of regenerative processes on a common probability space, we investigate when the derivatives (with respect to the parameter) are regenerative. We primarily consider sequences satisfying explicit, Lipschitz recursions, such as the waiting times in many queueing systems, and show that derivatives regenerate together with the original sequence under reasonable monotonicity or continuity assumptions. The inputs to our recursions are i.i.d. or, more generally, governed by a Harris-ergodic Markov chain. For i.i.d. input we identify explicit regeneration points; otherwise, we use coupling arguments. We give conditions for the expected steady-state derivative to be the derivative of the steady-state mean of the original sequence. Under these conditions, the derivative of the steady-state mean has a cycle-formula representation.
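A minimal concrete instance of the Lipschitz recursions above (chosen for illustration, not taken from the paper) is the Lindley waiting-time recursion with service times scaled by the parameter. Differentiating the recursion pathwise shows the derivative sequence resetting exactly at the regeneration (empty-queue) epochs of the original sequence. The M/M/1 parameter values and reference formulas below are assumptions used only to check the sketch.

```python
import numpy as np

# Hedged illustration: derivatives of a Lipschitz queueing recursion.
# Lindley recursion for waiting times with service times V_n = theta * E_n
# (E_n ~ Exp(1)) and interarrival times A_n ~ Exp(1):
#   W_{n+1} = max(W_n + V_n - A_n, 0).
# Differentiating pathwise in theta gives
#   D_{n+1} = D_n + E_n   if W_n + V_n - A_n > 0,   else 0,
# so the derivative regenerates together with W at empty-queue epochs.

rng = np.random.default_rng(2)
theta, n = 0.5, 200_000          # illustrative service-scale parameter
E = rng.exponential(1.0, n)
A = rng.exponential(1.0, n)

w = d = 0.0
w_sum = d_sum = 0.0
for i in range(n):
    z = w + theta * E[i] - A[i]
    if z > 0.0:
        w, d = z, d + E[i]
    else:
        w, d = 0.0, 0.0          # regeneration: both sequences reset together
    w_sum += w
    d_sum += d

# For this M/M/1 case the steady-state mean is theta**2 / (1 - theta) = 0.5
# and its derivative in theta is 3.0, so the cycle averages should be close.
print(w_sum / n, d_sum / n)
```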
We study a class of simulated annealing algorithms for global minimization of a continuous function defined on a subset of ℝ^d. We consider the case where the selection Markov kernel is absolutely continuous and has a density which is uniformly bounded away from 0. This class includes certain simulated annealing algorithms recently introduced by various authors. We show that, under mild conditions, the sequence of states generated by these algorithms converges in probability to the global minimum of the function. Unlike most previous studies, where the cooling schedule is deterministic, our cooling schedule is allowed to be adaptive. We also address the issue of almost sure convergence versus convergence in probability.
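A member of the class described above can be sketched as follows: the selection kernel is a uniform density on the domain (hence bounded away from zero), with Metropolis acceptance and a logarithmic cooling schedule. The test function and all tuning values are illustrative assumptions, not from the paper.

```python
import math, random

# Hedged sketch of one algorithm in the class above: candidates are drawn
# from a density bounded away from zero on the domain (here, uniform on
# [-3, 3]) and accepted by the Metropolis rule at temperature T_k.

def anneal(f, lo=-3.0, hi=3.0, n_iter=20_000, seed=3):
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    fx = f(x)
    best, fbest = x, fx
    for k in range(1, n_iter + 1):
        t = 1.0 / math.log(k + 1)          # logarithmic cooling schedule
        y = rng.uniform(lo, hi)            # uniform selection kernel
        fy = f(y)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy                  # accept the candidate
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest

# illustrative multimodal objective; its global minimum lies near x = 2.2
f = lambda x: (x - 2.0) ** 2 + 2.0 * math.sin(5.0 * x) + 2.0
xbest, fbest = anneal(f)
print(xbest, fbest)
```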
The work is concerned with the first-order linear autoregressive process which has a rectangular stationary marginal distribution. A derivation is given of the result that the time-reversed version is deterministic, with a first-order recursion function of the type used in multiplicative congruential random number generators, scaled to the unit interval. The uniformly distributed sequence generated is chaotic, giving an instance of a chaotic process which when reversed has a linear causal and non-chaotic structure. An mk-valued discrete process is then introduced which resembles a first-order linear autoregressive model and uses k-adic arithmetic. It is a particular form of moving-average process, and when reversed approximates in m a non-linear discrete-valued process which has the congruential generator function as its deterministic part, plus a discrete-valued noise component. The process is illustrated by scatter plots of adjacent values, time series plots and directed scatter plots (phase diagrams). The behaviour very much depends on the adic number, with k = 2 being very distinctly non-linear and k = 10 being virtually indistinguishable from independence.
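The forward/backward relationship described above admits a compact numerical check. In the sketch below (an illustration consistent with the abstract, with k = 10 assumed), the AR(1) recursion with k-adic digit innovations has a uniform marginal, and reversing time yields the deterministic k-adic shift, the scaled-to-[0, 1) analogue of a multiplicative congruential generator.

```python
import numpy as np

# Hedged sketch of the uniform AR(1) and its deterministic reversal:
# with digits eps_t i.i.d. uniform on {0, ..., k-1}, the recursion
#   X_t = (X_{t-1} + eps_t) / k
# has a Uniform(0, 1) stationary marginal, and running time backwards
# gives the deterministic k-adic shift X_{t-1} = frac(k * X_t), a
# congruential-generator-type map scaled to the unit interval.

rng = np.random.default_rng(4)
k, n = 10, 10_000
eps = rng.integers(0, k, size=n)

x = np.empty(n + 1)
x[0] = rng.random()
for t in range(n):
    x[t + 1] = (x[t] + eps[t]) / k

# the time-reversed transition is deterministic: frac(k * X_t) = X_{t-1}
back = (k * x[1:]) % 1.0
err = np.max(np.abs(back - x[:-1]))
print(err)
```

The reversal error is pure floating-point rounding, confirming that the backward dynamics carry no randomness.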
Let X1, X2, · ·· be independent and identically distributed random variables such that EX1 < 0 and P(X1 > 0) > 0. Fix M ≥ 0 and let T = inf {n: X1 + X2 + · ·· + Xn ≥ M} (T = +∞ if X1 + X2 + · ·· + Xn < M for every n = 1, 2, ···). In this paper we consider the estimation of the level-crossing probability P(T < ∞) and related quantities by Monte Carlo simulation, and especially by importance sampling techniques. When using importance sampling, the precision and efficiency of the estimation depend crucially on the choice of the simulation distribution. For this choice we introduce a new criterion of large deviations type; consequently, basic large deviations theory is the main mathematical tool of this paper. We allow a wide class of possible simulation distributions and, considering the limit M → ∞, we prove asymptotic optimality results for the simulation of the probability P(T < ∞) and related quantities. The paper ends with an example.
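The standard exponential change of measure for this problem can be sketched in a case with a closed-form answer (an illustrative choice, not the paper's example): increments X_i = B_i − A_i with B_i ~ Exp(1) and A_i ~ Exp(1/2)-scaled interarrivals, for which the Lundberg root and the exact level-crossing probability are known.

```python
import numpy as np

# Hedged sketch of importance sampling for P(T < infty): increments
# X_i = B_i - A_i with B_i ~ Exp(rate 1) and A_i ~ Exp(rate 1/2), so
# E[X_1] < 0. The root gamma of E[exp(gamma * X_1)] = 1 is gamma = 1/2.
# Simulating under the conjugate (tilted) law B ~ Exp(rate 1/2),
# A ~ Exp(rate 1) makes the drift positive, so T < infty a.s., and each
# replication contributes the likelihood ratio exp(-gamma * S_T).

rng = np.random.default_rng(5)
gamma, M, reps = 0.5, 20.0, 20_000

def one_path():
    s = 0.0
    while s < M:
        # tilted increments; numpy's exponential() takes the scale (= mean)
        s += rng.exponential(2.0) - rng.exponential(1.0)
    return np.exp(-gamma * s)      # likelihood-ratio weight at the crossing

est = np.mean([one_path() for _ in range(reps)])
exact = 0.5 * np.exp(-0.5 * M)     # known M/M/1-type value for this case
print(est, exact)
```

Here exact ≈ 2.3 × 10⁻⁵, far below what plain simulation could resolve with 20 000 replications, while the tilted estimator exhibits the bounded-relative-error behaviour that the optimality results above formalize.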
We derive conditions under which the increments of a vector process are associated — i.e. under which all pairs of increasing functions of the increments are positively correlated. The process itself is associated if it is generated by a family of associated and monotone kernels. We show that the increments are associated if the kernels are associated and, in a suitable sense, convex. In the Markov case, we note a connection between associated increments and temporal stochastic convexity.
Our analysis is motivated by a question in variance reduction: assuming that a normalized process and its normalized compensator converge to the same value, which is the better estimator of that limit? Under some additional hypotheses we show that, for processes with conditionally associated increments, the compensator has smaller asymptotic variance.
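The variance comparison motivating the two paragraphs above has a simple special case (an illustrative toy, not from the paper): for a Cox process with random rate Λ, both N(t)/t and the compensator-based quantity Λ are unbiased for E[Λ], but the process-based estimator carries the extra Poisson noise E[Λ]/t.

```python
import numpy as np

# Hedged toy illustration: for a doubly stochastic Poisson process with
# random rate Lambda, Var(N(t)/t) = Var(Lambda) + E[Lambda]/t >= Var(Lambda),
# so the estimator built from the (normalized) compensator has smaller
# variance than the one built from the process itself.

rng = np.random.default_rng(6)
t, reps = 10.0, 200_000
lam = rng.uniform(0.5, 1.5, size=reps)     # random intensity Lambda
counts = rng.poisson(lam * t)              # N(t) given Lambda

est_process = counts / t                   # estimator based on the process
est_compensator = lam                      # estimator based on its compensator

print(est_process.var(), est_compensator.var())
```

Both sample variances match the identity above: the gap between them is close to E[Λ]/t = 0.1.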
This paper studies computer simulation methods for estimating the sensitivities (gradient, Hessian etc.) of the expected steady-state performance of a queueing model with respect to the vector of parameters of the underlying distribution (an example is the gradient of the expected steady-state waiting time of a customer at a particular node in a queueing network with respect to its service rate). It is shown that such a sensitivity can be represented as the covariance between two processes, the standard output process (say the waiting time process) and what we call the score function process which is based on the score function. Simulation procedures based upon such representations are discussed, and in particular a control variate method is presented. The estimators and the score function process are then studied under heavy traffic conditions. The score function process, when properly normalized, is shown to have a heavy traffic limit involving a certain variant of two-dimensional Brownian motion for which we describe the stationary distribution. From this, heavy traffic (diffusion) approximations for the variance constants in the large sample theory can be computed and are used as a basis for comparing different simulation estimators. Finally, the theory is supported by numerical results.
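The score-function representation underlying the paper can be sketched in a deliberately simplified one-sample setting (the queueing network of the paper is replaced by a toy expectation, an assumption made for brevity), including the control-variate step the abstract mentions.

```python
import numpy as np

# Hedged sketch of the score-function method: for X ~ Exp(rate theta),
#   d/dtheta E_theta[X] = E_theta[ X * d/dtheta log f_theta(X) ],
# with score d/dtheta log f_theta(x) = 1/theta - x. The zero-mean score
# is then reused as a control variate, echoing the control-variate
# procedure discussed above. Exact answer: d/dtheta (1/theta) = -0.25
# at theta = 2.

rng = np.random.default_rng(7)
theta, reps = 2.0, 500_000
x = rng.exponential(1.0 / theta, size=reps)
score = 1.0 / theta - x                      # score function (one observation)

raw = x * score                              # plain score-function estimator
b = np.cov(raw, score)[0, 1] / score.var()   # estimated control coefficient
adj = raw - b * score                        # control-variate-adjusted version

print(raw.mean(), adj.mean(), raw.var(), adj.var())
```

Both estimators are unbiased for the derivative −0.25; the control-variate version has markedly smaller variance, which is the kind of comparison the paper carries out under heavy traffic.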