Some consequences of restarting stochastic search algorithms are studied. It is shown, under reasonable conditions, that restarting whenever certain patterns occur makes the probability that the goal state has not been found by the nth epoch converge to zero at least geometrically fast in n. These conditions are shown to hold for restarted simulated annealing employing a local generation matrix, a cooling schedule Tn ∼ c/n and restarting after a fixed number r + 1 of duplications of energy levels of states when r is sufficiently large. For simulated annealing with logarithmic cooling these probabilities cannot decrease to zero this fast. Numerical comparisons between restarted simulated annealing and several modern variations on simulated annealing are also presented, and in all cases the former performs better.
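As an illustration of the restart mechanism described above, the following is a minimal sketch of restarted simulated annealing with a cooling schedule Tn = c/n, restarting after r + 1 observed duplications of the current energy level. The objective function, neighbourhood proposal, duplication-counting rule, and all parameter values are illustrative assumptions, not the specific setting analysed in the paper.

```python
import math
import random

def restarted_sa(energy, neighbour, x0, c=1.0, r=20, max_epochs=10_000, seed=0):
    """Simulated annealing with cooling T_n = c/n, restarted from x0 whenever
    the current energy level has been duplicated r + 1 times in a row
    (an illustrative reading of the restart rule described in the abstract)."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    dup = 0
    for n in range(1, max_epochs + 1):
        T = c / n                          # cooling schedule T_n ~ c/n
        y = neighbour(x, rng)              # local generation (neighbourhood) proposal
        ey = energy(y)
        if ey <= e or rng.random() < math.exp(-(ey - e) / T):
            dup = dup + 1 if ey == e else 0
            x, e = y, ey
        else:
            dup += 1                       # move rejected: energy level duplicated
        if e < best_e:
            best_x, best_e = x, e
        if dup >= r + 1:                   # r + 1 duplications observed: restart
            x, e, dup = x0, energy(x0), 0
    return best_x, best_e

# Illustrative use on a one-dimensional multimodal energy landscape.
if __name__ == "__main__":
    f = lambda x: (x - 3.0) ** 2 + 2.0 * math.sin(5.0 * x)
    step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
    print(restarted_sa(f, step, x0=0.0))
```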
A given number of bullets will be fired sequentially in an attempt to destroy as many targets as possible from a fixed number of targets. The probability of destroying a target at each shot is known. After each shot, there is a report on the state of the target: destroyed or intact. The reports are subject to the usual two types of error, and the probabilities of making these errors are also known. This paper shows that the myopic decision strategy, which picks the next target to be the one with the highest posterior probability of being intact, is the optimal strategy.
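A small sketch of the myopic rule is given below: posterior intact probabilities are updated by Bayes' rule from each noisy report, and the next shot is aimed at the target with the largest posterior. The error-probability names (p_false_destroyed for reporting an intact target as destroyed, p_false_intact for the reverse) and the simulation harness are illustrative assumptions, not the paper's notation.

```python
import random

def myopic_shooting(n_targets, n_bullets, p_kill, p_false_destroyed, p_false_intact, seed=0):
    """Fire bullets one at a time, always aiming at the target whose posterior
    probability of still being intact is largest, updating that posterior by
    Bayes' rule from the (error-prone) damage report after each shot."""
    rng = random.Random(seed)
    intact = [True] * n_targets          # true (hidden) target states
    post = [1.0] * n_targets             # P(target i intact | reports so far)
    for _ in range(n_bullets):
        i = max(range(n_targets), key=lambda k: post[k])   # myopic choice
        prior = post[i]                  # intact probability just before the shot
        # Simulate the shot and the noisy report on target i.
        if intact[i] and rng.random() < p_kill:
            intact[i] = False
        if intact[i]:
            report_destroyed = rng.random() < p_false_destroyed
        else:
            report_destroyed = not (rng.random() < p_false_intact)
        # Bayes update: probability of being intact after the shot, then
        # condition on the content of the report.
        p_intact = prior * (1 - p_kill)
        if report_destroyed:
            num = p_intact * p_false_destroyed
            den = num + (1 - p_intact) * (1 - p_false_intact)
        else:
            num = p_intact * (1 - p_false_destroyed)
            den = num + (1 - p_intact) * p_false_intact
        post[i] = num / den if den > 0 else 0.0
    return sum(not s for s in intact), post

# Example run with illustrative parameters.
if __name__ == "__main__":
    print(myopic_shooting(5, 12, p_kill=0.6, p_false_destroyed=0.1, p_false_intact=0.2))
```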
In generalization of the well-known formulae for quermass densities of stationary and isotropic Boolean models, we prove corresponding results for densities of mixed volumes in the stationary situation and show how they can be used to determine the intensity of non-isotropic Boolean models Z in d-dimensional space for d = 2, 3, 4. We then consider non-stationary Boolean models and extend results of Fallert on quermass densities to densities of mixed volumes. In particular, we present explicit formulae for a planar inhomogeneous Boolean model with circular grains.
The (unoriented or oriented) mean normal measure of a stationary process of convex particles carries information on the mean shape of the particles and may, in particular, be useful for describing and detecting anisotropy of the particle process. This paper investigates the mean normal measure under the aspect of its determination from intersections, especially with lines or hyperplanes.
Wang and Pötzelberger (1997) derived an explicit formula for the probability that a Brownian motion crosses a one-sided piecewise linear boundary and used this formula to approximate the boundary crossing probability for general nonlinear boundaries. The present paper gives a sharper asymptotic upper bound of the approximation error for the formula, and generalizes the results to two-sided boundaries. Numerical computations are easily carried out using the Monte Carlo simulation method. A rule is proposed for choosing optimal nodes for the approximating piecewise linear boundaries, so that the corresponding approximation errors of boundary crossing probabilities converge to zero at a rate of O(1/n²).
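The following sketch shows the kind of Monte Carlo computation involved: a nonlinear boundary is replaced by its linear interpolant at chosen nodes, Brownian skeleton values are simulated at the nodes, and the exact probability that a Brownian bridge stays below each straight-line piece is multiplied in. The node placement and the example boundary here are arbitrary illustrations, not the optimal rule proposed in the paper.

```python
import numpy as np

def non_crossing_prob(boundary, nodes, n_paths=100_000, seed=0):
    """Monte Carlo estimate of P(W(t) < b(t) for all t in [0, T]), where b is
    approximated by linear interpolation between the given time nodes.  For
    each simulated skeleton W(t_1), ..., W(t_n) the exact probability that a
    Brownian bridge between consecutive nodes stays below the straight-line
    boundary piece is multiplied in (the standard bridge-crossing factor)."""
    rng = np.random.default_rng(seed)
    t = np.asarray(nodes, dtype=float)           # 0 = t_0 < t_1 < ... < t_n = T
    b = boundary(t)                              # boundary values at the nodes
    dt = np.diff(t)
    dW = rng.standard_normal((n_paths, len(dt))) * np.sqrt(dt)
    W = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)])
    below = np.all(W < b, axis=1)                # the skeleton itself must stay below
    gap_prev = b[:-1] - W[:, :-1]
    gap_next = b[1:] - W[:, 1:]
    expo = -2.0 * gap_prev * gap_next / dt
    bridge = 1.0 - np.exp(np.minimum(expo, 0.0))
    return np.where(below, np.prod(bridge, axis=1), 0.0).mean()

# Example: a square-root boundary approximated on an equally spaced grid.
if __name__ == "__main__":
    print(non_crossing_prob(lambda t: 1.0 + np.sqrt(t), np.linspace(0.0, 1.0, 33)))
```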
We consider a real-valued random walk which drifts to -∞ and is such that the step distribution is heavy tailed, say, subexponential. We investigate the asymptotic tail behaviour of the distribution of the upwards first passage times. As an application, we obtain the exact rate of convergence for the ruin probability in finite time. Our result supplements similar theorems in risk theory.
The assumption of linear costs of observation usually leads to optimal stopping boundaries which are straight lines. For non-linear costs of observation, the question arises of how the shape of cost functions influences the shape of optimal stopping boundaries. In Irle (1987), (1988) it was shown that, under suitable assumptions on c, for the problem of optimal stopping of (W(t) + x)⁺ − c(s + t), t ∈ [0, ∞), the optimal stopping boundary h(t) can be inscribed between k1/c'(t) and k2/c'(t) for some constants k1, k2. In this paper we find the exact asymptotic expansion h(t) = (1/(4c'(t)))(1 + o(1)).
Multivariate subordinators are multivariate Lévy processes that are increasing in each component. Various examples of multivariate subordinators, of interest for applications, are given. Subordination of Lévy processes with independent components by multivariate subordinators is defined. Multiparameter Lévy processes and their subordination are introduced so that the subordinated processes are multivariate Lévy processes. The relations between the characteristic triplets involved are established. It is shown that operator self-decomposability and the operator version of the class Lm property are inherited from the multivariate subordinator to the subordinated process under the condition of operator stability of the subordinand.
In this paper, we study a maintenance model with general repair and two types of replacement: failure and preventive replacement. When the system fails, a decision is made whether to replace or repair it. The repair degree that affects the virtual age of the system is assumed to be a random function of the repair cost and the virtual age at failure time. The system can be preventively replaced at any time before failure. The objective is to find the repair/replacement policy minimizing the long-run expected average cost per unit time. It is shown that a generalized repair-cost-limit policy is optimal and that the preventive replacement time depends on the virtual age of the system and on the length of the operating time since the last repair. Computational procedures for finding the optimal repair-cost limit and the optimal average cost are developed. This model includes many well-known models as special cases, and the approach provides a unified treatment of a wide class of maintenance models.
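To make the policy structure concrete, here is a rough simulation sketch of a virtual-age system operated under a repair-cost-limit policy with a preventive-replacement age. The Weibull failure law, the exponential repair-cost distribution, the particular repair-degree rule, and all cost constants are assumptions chosen purely for illustration; the paper's computational procedures for the optimal limit are not reproduced here.

```python
import math
import random

def average_cost(c_limit, t_prev, horizon=200_000.0, seed=0,
                 beta=2.0, eta=100.0, c_prev=50.0, c_fail=80.0, c_setup=5.0):
    """Estimate the long-run average cost per unit time of a virtual-age system
    run under a repair-cost-limit policy (all modelling choices illustrative):
      - failures follow a Weibull hazard acting on the virtual age v;
      - at failure a random repair cost is drawn; if it exceeds c_limit the
        system is replaced (failure replacement), otherwise it is generally
        repaired and the virtual age is reduced (illustrative repair-degree rule);
      - the system is preventively replaced once its virtual age reaches t_prev."""
    rng = random.Random(seed)
    total_cost, time, v = 0.0, 0.0, 0.0
    while time < horizon:
        # Conditional Weibull sampling: time to next failure given virtual age v.
        u = rng.random()
        x = eta * ((v / eta) ** beta - math.log(u)) ** (1.0 / beta) - v
        if v + x >= t_prev:                    # preventive replacement comes first
            time += t_prev - v
            total_cost += c_prev
            v = 0.0
            continue
        time += x
        v += x
        cost = rng.expovariate(1.0 / 20.0)     # random repair cost at failure
        if cost > c_limit:                     # repair too expensive: replace
            total_cost += c_fail
            v = 0.0
        else:                                  # general repair of random degree
            total_cost += cost + c_setup
            v *= cost / (cost + c_limit)       # illustrative virtual-age reduction
    return total_cost / time

# Example: compare two repair-cost limits under the same preventive-replacement age.
if __name__ == "__main__":
    print(average_cost(c_limit=15.0, t_prev=120.0), average_cost(c_limit=40.0, t_prev=120.0))
```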
Let {W(t), t ≥ 0} be a standard Brownian motion. For a positive integer m, define a Gaussian process Xm(·). Watanabe and Lachal gave some asymptotic properties of the process Xm(·), m ≥ 1. In this paper, we study the bounds of its moduli of continuity and large increments by establishing large deviation results.
The paper considers the superposition of modified Omori functions as a conditional intensity function for a point process model used in the exploratory analysis of earthquake clusters. For the examples discussed, the maximum likelihood estimates converge well starting from appropriate initial values even though the number of parameters estimated can be large (though never larger than the number of observations). Three datasets are subjected to different analyses, showing the use of the model to discover and study individual clustering features.
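For concreteness, a sketch of the two ingredients involved is given below: a conditional intensity built as a baseline plus a superposition of modified Omori terms K_j/(t − τ_j + c_j)^{p_j}, and the corresponding point-process negative log-likelihood that would be handed to a numerical optimizer started from suitable initial values. The parameterisation and the cluster onset times τ_j are placeholders, not the datasets or fits of the paper.

```python
import numpy as np

def omori_intensity(t, mu, onsets, K, c, p):
    """Conditional intensity lambda(t) = mu + sum_j K_j / (t - tau_j + c_j)^{p_j},
    a superposition of modified Omori terms, each active for t > tau_j."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    lam = np.full(t.shape, float(mu))
    for tau_j, K_j, c_j, p_j in zip(onsets, K, c, p):
        active = t > tau_j
        lam[active] += K_j / (t[active] - tau_j + c_j) ** p_j
    return lam

def neg_log_likelihood(times, T, mu, onsets, K, c, p):
    """Point-process negative log-likelihood on [0, T]:
       -( sum_i log lambda(t_i) - integral_0^T lambda(t) dt )."""
    lam = omori_intensity(times, mu, onsets, K, c, p)
    integral = mu * T
    for tau_j, K_j, c_j, p_j in zip(onsets, K, c, p):
        span = max(T - tau_j, 0.0)
        if p_j == 1.0:
            integral += K_j * np.log((span + c_j) / c_j)
        else:
            integral += K_j * ((span + c_j) ** (1.0 - p_j) - c_j ** (1.0 - p_j)) / (1.0 - p_j)
    return integral - np.sum(np.log(lam))
```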
This paper uses the epidemic-type aftershock sequence (ETAS) point process model to study certain seismicity features of the Jiashi earthquake swarm, investigating in particular whether there is relative quiescence prior to the larger events within the Jiashi sequence. The seven earthquake sequences studied occurred in the region of Jiashi, south of Tianshan Mountain, Xinjiang, China. The particular ETAS model that is developed is consistent with the reality of seismic activity. The various features of Jiashi swarm activity can be described by focusing on its different stages. There is obvious precursory quiescence prior to most big events with Ms ≥ 6.0 within the Jiashi swarm. Thus, checking for relative quiescence can be used for earthquake prediction.
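The ETAS conditional intensity used in such studies has the standard form λ(t) = μ + Σ_{t_i < t} K e^{α(M_i − m_c)} / (t − t_i + c)^p, and relative quiescence is usually diagnosed through the residual (time-changed) process. The sketch below implements these two standard pieces with placeholder parameter names; it is not the specific model fitted to the Jiashi data.

```python
import numpy as np

def etas_intensity(t, times, mags, mu, K, alpha, c, p, m_c):
    """Standard ETAS conditional intensity at time t given past events:
       lambda(t) = mu + sum_{t_i < t} K exp(alpha (M_i - m_c)) / (t - t_i + c)^p."""
    times = np.asarray(times, dtype=float)
    mags = np.asarray(mags, dtype=float)
    past = times < t
    terms = K * np.exp(alpha * (mags[past] - m_c)) / (t - times[past] + c) ** p
    return mu + terms.sum()

def transformed_times(times, mags, mu, K, alpha, c, p, m_c):
    """Time change tau_i = integral_0^{t_i} lambda(s) ds used in residual
    analysis (assumes p != 1).  Under a well-fitting model the tau_i form a
    unit-rate Poisson process; a deficit of events on this transformed scale
    before a large shock is the signature of relative (precursory) quiescence."""
    times = np.asarray(times, dtype=float)
    mags = np.asarray(mags, dtype=float)
    taus = []
    for t in times:
        past = times < t
        a = K * np.exp(alpha * (mags[past] - m_c)) / (1.0 - p)
        integral = mu * t + np.sum(a * ((t - times[past] + c) ** (1.0 - p) - c ** (1.0 - p)))
        taus.append(integral)
    return np.array(taus)
```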
The paper shows that the use of both types of random noise, white noise and Poisson noise, can be justified when using an innovations approach. The historical background for this is sketched, and then several methods of whitening dependent time series are outlined, including a mixture of Gaussian white noise and a compound Poisson process: this appears as a natural extension of the Gaussian white noise model for the prediction errors of a non-Gaussian time series. A statistical method for the identification of non-linear time series models with noise made up of a mixture of Gaussian white noise and a compound Poisson noise is presented. The method is applied to financial time series data (dollar-yen exchange rate data), and illustrated via six models.
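As a small illustration of the noise model, the following sketch simulates an innovation sequence given by Gaussian white noise plus a compound Poisson component; the jump rate and jump distribution are arbitrary illustrative choices, not those identified for the exchange-rate data.

```python
import numpy as np

def mixed_innovations(n, sigma=1.0, rate=0.05, jump_scale=3.0, seed=0):
    """Simulate n innovations consisting of Gaussian white noise N(0, sigma^2)
    plus a compound Poisson component: in each step a Poisson(rate) number of
    jumps occurs, each jump drawn from N(0, jump_scale^2)."""
    rng = np.random.default_rng(seed)
    gauss = rng.normal(0.0, sigma, size=n)
    counts = rng.poisson(rate, size=n)
    jumps = np.array([rng.normal(0.0, jump_scale, size=k).sum() for k in counts])
    return gauss + jumps

# Example: most steps are purely Gaussian; occasional steps carry an extra jump.
if __name__ == "__main__":
    print(mixed_innovations(10))
```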
A point process procedure can be used to study reservoir-induced seismicity (RIS), in which the intensity function representing earthquake hazard is a combination of three terms: a constant background term, an ETAS (epidemic-type aftershock sequence) term for aftershocks, and a time function derived from observation of water levels of a reservoir. This paper presents the results of such a study of the seismicity in the vicinity of the Tarbela reservoir in Pakistan. Making allowance for changes in detection capability and the background seismicity related to tectonic activity, earthquakes of magnitude ≥ 2.0, occurring between May 1978 and January 1982 and whose epicentres were within 100 km of the reservoir, were used in this analysis. Several different intensities were compared via their Akaike information criterion (AIC) values relative to those of a Poisson process. The results demonstrate that the seismicity within 20 km of the reservoir correlates with water levels of the reservoir, namely, active periods occur about 250 days after the appearance of low water levels. This suggests that unloading the reservoir activates the seismicity beneath it. Seasonal variations of the seismicity in an area up to 100 km from the reservoir were also found, but these could not be adequately interpreted by an appropriate RIS mechanism.
The paper considers one of the standard processes for modeling returns in finance, the stochastic volatility process with regularly varying innovations. The aim of the paper is to show how point process techniques can be used to derive the asymptotic behavior of the sample autocorrelation function of this process with heavy-tailed marginal distributions. Unlike other non-linear models used in finance, such as GARCH and bilinear models, sample autocorrelations of a stochastic volatility process have attractive asymptotic properties. Specifically, in the infinite variance case, the sample autocorrelation function converges to zero in probability at a rate that is faster the heavier the tails of the marginal distribution. This behavior is analogous to the asymptotic behavior of the sample autocorrelations of independent identically distributed random variables.
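The sketch below simulates a stochastic volatility process under common modelling assumptions (Gaussian AR(1) log-volatility and symmetrised Pareto innovations with tail index below 2, so the marginal variance is infinite) and computes its sample autocorrelation function. The specification and parameter values are illustrative rather than those of the paper.

```python
import numpy as np

def sv_sample_acf(n=10_000, phi=0.95, tail_index=1.8, max_lag=20, seed=0):
    """Simulate a stochastic volatility process X_t = sigma_t * Z_t with
    log-volatility following a Gaussian AR(1) and i.i.d. regularly varying
    innovations Z_t (symmetrised Pareto with the given tail index), and return
    its sample autocorrelation function at lags 1, ..., max_lag."""
    rng = np.random.default_rng(seed)
    # Gaussian AR(1) log-volatility.
    eta = rng.normal(0.0, 0.2, size=n)
    logvol = np.zeros(n)
    for t in range(1, n):
        logvol[t] = phi * logvol[t - 1] + eta[t]
    # Regularly varying innovations: Pareto(tail_index) magnitude, random sign.
    z = (rng.pareto(tail_index, size=n) + 1.0) * rng.choice([-1.0, 1.0], size=n)
    x = np.exp(logvol / 2.0) * z
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    return np.array([np.sum(xc[h:] * xc[:n - h]) / denom for h in range(1, max_lag + 1)])

# Example: with infinite-variance innovations the sample ACF is close to zero.
if __name__ == "__main__":
    print(np.round(sv_sample_acf(), 4))
```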
The paper reviews the formulation of the linked stress release model for large-scale seismicity, together with aspects of its application. Using data from Taiwan for illustration, it shows how models can be selected and verified using tools that include Akaike's information criterion (AIC), numerical analysis, residual point processes and Monte Carlo simulation.
Consider an inhomogeneous Poisson process X on [0, T] whose unknown intensity function ‘switches’ from a lower function g∗ to an upper function h∗ at some unknown point θ∗. What is known are continuous bounding functions g and h such that g∗(t) ≤ g(t) ≤ h(t) ≤ h∗(t) for 0 ≤ t ≤ T. It is shown that, on the basis of n observations of the process X, the maximum likelihood estimate of θ∗ is consistent as n → ∞, and also that the suitably normalized estimation error converges in law and in pth moment to limits described in terms of the unknown functions g∗ and h∗.
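For intuition, here is a sketch of how such a switch-point likelihood can be maximised numerically when candidate lower and upper intensity functions are supplied (for illustration one might plug in the known bounds g and h). This plug-in choice and the grid search are assumptions made for the sketch; the paper's contribution concerns the asymptotic behaviour of the estimator, not its computation.

```python
import numpy as np

def switch_point_mle(realizations, T, g, h, grid_size=500):
    """Grid maximisation of the switch-point log-likelihood for n independent
    Poisson processes on [0, T] whose intensity jumps from g to h at theta:
        ll(theta) = sum_obs [ sum_{t <= theta} log g(t) + sum_{t > theta} log h(t) ]
                    - n * ( int_0^theta g + int_theta^T h ).
    g and h are vectorised callables used here as plug-in intensities."""
    thetas = np.linspace(0.0, T, grid_size + 1)
    s = np.linspace(0.0, T, 4001)                      # quadrature grid
    G = np.concatenate(([0.0], np.cumsum((g(s[1:]) + g(s[:-1])) / 2.0 * np.diff(s))))
    H = np.concatenate(([0.0], np.cumsum((h(s[1:]) + h(s[:-1])) / 2.0 * np.diff(s))))
    cum_g = np.interp(thetas, s, G)                    # int_0^theta g
    cum_h = np.interp(thetas, s, H)                    # int_0^theta h
    n = len(realizations)
    ll = -n * (cum_g + (H[-1] - cum_h))                # compensator part
    for obs in realizations:
        obs = np.asarray(obs, dtype=float)
        for k, theta in enumerate(thetas):
            before = obs <= theta
            ll[k] += np.log(g(obs[before])).sum() + np.log(h(obs[~before])).sum()
    return thetas[np.argmax(ll)]
```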
New metrics are introduced in the space of random measures and are applied, with various modifications of the contraction method, to prove existence and uniqueness results for self-similar random fractal measures. We obtain exponential convergence, both in distribution and almost surely, of an iterative sequence of random measures (defined by means of the scaling operator) to a unique self-similar random measure. The assumptions are quite weak, and correspond to similar conditions in the deterministic case.
The fixed mass case is handled in a direct way based on regularity properties of the metrics and the properties of a natural probability space. Proving convergence in the random mass case needs additional tools, such as a specially adapted choice of the space of random measures and of the space of probability distributions on measures, the introduction of reweighted sequences of random measures and a comparison technique.
A near-maximum is an observation which falls within a distance a of the maximum observation in an i.i.d. sample of size n. The asymptotic behaviour of the number Kn(a) of near-maxima is known for the cases where the right extremity of the population distribution function is finite, and where it is infinite and the right hand tail is exponentially small, or fatter than exponential. This paper completes the picture for thin tails, i.e., tails which decay faster than exponential. Limit theorems are derived and used to find the large-sample behaviour of the sum of near-maxima.
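The count itself is elementary to compute from a sample; the following sketch returns the number of near-maxima (counting the maximum itself, one common convention) together with their sum, using standard normal data as an example of a thin-tailed distribution.

```python
import numpy as np

def near_maxima_count(sample, a):
    """Number of observations within distance a of the sample maximum
    (the maximum itself included), and the sum of those near-maxima."""
    x = np.asarray(sample, dtype=float)
    near = x >= x.max() - a
    return int(near.sum()), float(x[near].sum())

# Example with a thin-tailed (faster-than-exponential) sample: standard normals.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    print(near_maxima_count(rng.standard_normal(100_000), a=0.1))
```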
Computer simulations had suggested that the strategy that maximises the score on each turn in the dice game described by Roters (1998) may not be the optimal way to reach a given target in the shortest time. We give an analytical treatment, backed by numerical calculations, that finds the optimal strategy to reach such a target.
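As an indication of what such an analysis looks like computationally, the sketch below runs value iteration for a 'pig'-style single-player dice game in which a roll of 1 forfeits the current turn total and ends the turn, and the objective is to reach a given target in the minimum expected number of turns. The rules and values here are assumptions for illustration; the game treated by Roters (1998) and in this paper may differ in detail, and this is not the paper's analytical solution.

```python
def expected_turns_to_target(target, iters=200):
    """Value iteration for a 'pig'-style dice game: each roll of 2-6 adds to the
    turn total, a roll of 1 wipes the turn total and ends the turn, and the
    player may hold at any time to bank the turn total.  V[R] is the minimal
    expected number of turns needed to bank R more points; policy[R][t] records
    whether to keep rolling with R still needed and turn total t."""
    V = [0.0] * (target + 1)               # V[0] = 0: nothing left to score
    policy = [dict() for _ in range(target + 1)]
    for R in range(1, target + 1):
        v = V[R - 1] + 1.0                 # initial guess for the fixed point V[R]
        for _ in range(iters):
            # Q[t]: expected further turns (including the current one) with turn total t.
            Q = [0.0] * (R + 6)
            for t in range(R + 5, -1, -1):
                if t >= R:
                    Q[t] = 1.0             # hold now: the target is reached this turn
                    continue
                roll = (1.0 + v) / 6.0 + sum(Q[t + k] for k in range(2, 7)) / 6.0
                hold = 1.0 + V[R - t] if t > 0 else float("inf")
                Q[t] = min(roll, hold)
                policy[R][t] = roll <= hold
            v = Q[0]                       # iterate the self-referential value V[R]
        V[R] = v
    return V, policy

# Example: expected number of turns to reach a target of 100 under the optimal policy.
if __name__ == "__main__":
    V, policy = expected_turns_to_target(100)
    print(round(V[100], 3))
```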