The problem of discriminating between two Markov chains is considered. It is assumed that the common state space of the chains is finite and all the finite dimensional distributions are mutually absolutely continuous. The Bayes risk is expressed through large deviation probabilities for sums of random variables defined on an auxiliary Markov chain. The proofs are based on a large deviation theorem recently established by Z. Szewczak.
The paper considers the superposition of modified Omori functions as a conditional intensity function for a point process model used in the exploratory analysis of earthquake clusters. For the examples discussed, the maximum likelihood estimates converge well starting from appropriate initial values even though the number of parameters estimated can be large (though never larger than the number of observations). Three datasets are subjected to different analyses, showing the use of the model to discover and study individual clustering features.
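As a rough illustration of the kind of conditional intensity described above, the sketch below superposes modified Omori terms λ(t) = μ + Σ_i K_i / (t − t_i + c_i)^p_i over cluster onset times t_i. The function name and parameter choices are illustrative, not taken from the paper.

```python
import numpy as np

def omori_intensity(t, onsets, K, c, p, mu=0.0):
    """Superposed modified Omori intensity evaluated at times t:

        lambda(t) = mu + sum_i K_i / (t - t_i + c_i)**p_i   for t > t_i,

    where t_i are cluster onset times. All parameter values are illustrative.
    """
    t = np.asarray(t, dtype=float)
    lam = np.full_like(t, mu)
    for ti, Ki, ci, pi in zip(onsets, K, c, p):
        active = t > ti                       # each term contributes only after its onset
        lam[active] += Ki / (t[active] - ti + ci) ** pi
    return lam

# Two overlapping clusters starting at t = 0 and t = 10
lam = omori_intensity([1.0, 5.0, 12.0],
                      onsets=[0.0, 10.0], K=[50.0, 30.0],
                      c=[0.1, 0.1], p=[1.1, 1.0], mu=0.5)
```

Within each cluster the intensity decays as a power law, while a new onset adds a fresh transient on top of the decaying background.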
This paper uses the epidemic-type aftershock sequence (ETAS) point process model to study certain seismicity features of the Jiashi earthquake swarm, investigating in particular whether there is relative quiescence prior to the larger events within the Jiashi sequence. The seven earthquake sequences studied occurred in the region of Jiashi, south of Tianshan Mountain, Xinjiang, China. The particular ETAS model that is developed is consistent with the reality of seismic activity. The various features of Jiashi swarm activity can be described as evolving in different stages. There is obvious precursory quiescence prior to most big events with Ms ≥ 6.0 within the Jiashi swarm. Thus, checking for relative quiescence can be used for earthquake prediction.
The paper shows that the use of both types of random noise, white noise and Poisson noise, can be justified when using an innovations approach. The historical background for this is sketched, and then several methods of whitening dependent time series are outlined, including a mixture of Gaussian white noise and a compound Poisson process: this appears as a natural extension of the Gaussian white noise model for the prediction errors of a non-Gaussian time series. A statistical method for the identification of non-linear time series models with noise made up of a mixture of Gaussian white noise and a compound Poisson noise is presented. The method is applied to financial time series data (dollar-yen exchange rate data), and illustrated via six models.
For many years the modified Mercalli (MM) scale has been used to describe earthquake damage and effects observed at scattered locations. In the next stage of an analysis involving MM data, isoseismal lines based on the observations have traditionally been added to maps by hand, i.e. subjectively. However, a few objective methods have been proposed (e.g. by De Rubeis et al., Brillinger, Wald et al. and Pettenati et al.). The work presented here develops objective methods further. In particular, the ordinal character of the MM scale is specifically taken into account. Numerical smoothing is basic to the approach, and methods involving splines, local polynomial regression and wavelets are illustrated. The approach also allows the inclusion of explanatory variables, for example site effects. The procedure is implemented for data from the 17 October 1989 Loma Prieta earthquake.
A time-series consisting of white noise plus Brownian motion sampled at equal intervals of time is exactly orthogonalized by a discrete cosine transform (DCT-II). This paper explores the properties of a version of spectral analysis based on the discrete cosine transform and its use in distinguishing between a stationary time-series and an integrated (unit root) time-series.
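A minimal sketch of DCT-based spectral analysis, under the simple assumption that the integrated series is a Gaussian random walk: the squared orthonormal DCT-II coefficients play the role of a periodogram, and for a unit-root series the power concentrates in the lowest frequencies, whereas for white noise it is roughly flat.

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(1)
n, reps = 512, 20
frac_wn = frac_rw = 0.0
for _ in range(reps):
    wn = rng.normal(size=n)                  # stationary series: white noise
    rw = np.cumsum(rng.normal(size=n))       # integrated (unit root): random walk
    # DCT-II "periodogram": squared coefficients of the orthonormal transform
    p_wn = dct(wn, type=2, norm='ortho') ** 2
    p_rw = dct(rw, type=2, norm='ortho') ** 2
    # Fraction of total power in the lowest 1/16 of the frequencies
    frac_wn += p_wn[:n // 16].sum() / p_wn.sum() / reps
    frac_rw += p_rw[:n // 16].sum() / p_rw.sum() / reps
```

The contrast between the two low-frequency power fractions is what a unit-root diagnostic based on this transform can exploit.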
The paper considers one of the standard processes for modeling returns in finance, the stochastic volatility process with regularly varying innovations. The aim of the paper is to show how point process techniques can be used to derive the asymptotic behavior of the sample autocorrelation function of this process with heavy-tailed marginal distributions. Unlike other non-linear models used in finance, such as GARCH and bilinear models, sample autocorrelations of a stochastic volatility process have attractive asymptotic properties. Specifically, in the infinite variance case, the sample autocorrelation function converges to zero in probability at a rate that is faster the heavier the tails of the marginal distribution. This behavior is analogous to the asymptotic behavior of the sample autocorrelations of independent identically distributed random variables.
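The behaviour described above can be glimpsed in a small simulation, here using an assumed textbook form of the stochastic volatility process, X_t = exp(h_t/2) Z_t with Gaussian AR(1) log-volatility h_t and heavy-tailed (Student t, 3 d.f., hence regularly varying) innovations Z_t; the chosen coefficients are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000
# Log-volatility: Gaussian AR(1)
h = np.zeros(n)
for t in range(1, n):
    h[t] = 0.95 * h[t - 1] + 0.2 * rng.normal()
# Returns: volatility times heavy-tailed (regularly varying) innovations
x = np.exp(h / 2) * rng.standard_t(df=3, size=n)

def sample_acf(x, lag):
    """Sample autocorrelation at the given lag."""
    xc = x - x.mean()
    return np.dot(xc[:-lag], xc[lag:]) / np.dot(xc, xc)

acf1 = sample_acf(x, 1)
acf5 = sample_acf(x, 5)
```

Although the volatility process is strongly dependent, the returns themselves are uncorrelated, and their sample autocorrelations sit close to zero, as the asymptotic theory predicts.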
The paper reviews the formulation of the linked stress release model for large scale seismicity together with aspects of its application. Using data from Taiwan for illustrative purposes, models can be selected and verified using tools that include Akaike's information criterion (AIC), numerical analysis, residual point processes and Monte Carlo simulation.
The paper proposes a hidden semi-Markov model for breakpoint rainfall data that consist of both the times at which rain-rate changes and the steady rates between such changes. The model builds on and extends the seminal work of Ferguson (1980) on variable duration models for speech. For the rainfall data the observations are modelled as mixtures of log-normal distributions within unobserved states where the states evolve in time according to a semi-Markov process. For the latter, parametric forms need to be specified for the state transition probabilities and dwell-time distributions.
Recursions for constructing the likelihood are developed and the EM algorithm used to fit the parameters of the model. The choice of dwell-time distribution is discussed with a mixture of distributions over disjoint domains providing a flexible alternative. The methods are also extended to deal with censored data. An application of the model to a large-scale bivariate dataset of breakpoint rainfall measurements at Wellington, New Zealand, is discussed.
In this paper, a statistic that has been introduced to test for space-time correlation is considered in a time series context. The null hypothesis is white noise; the alternative is any kind of continuous functional dependence. For an autoregressive process close to the null hypothesis, a bound on the distance between the distribution of the statistic and a Poisson distribution is proved, using the Stein-Chen method. The main difficulty in the proof is that the dependence in the time series is not locally restricted. The result implies asymptotically certain discrimination for a reasonable choice of the thresholds.
Martin and Walker ((1997) J. Appl. Prob. 34, 657–670) proposed the power-law ρ(v) = c|v|^(−β), |v| ≥ 1, as a correlation model for stationary time series with long-memory dependence. A straightforward proof of their conjecture on the permissible range of c is given, and various other models for long-range dependence are discussed. In particular, the Cauchy family ρ(v) = (1 + |v/c|^α)^(−β/α) allows for the simultaneous fitting of both the long-term and short-term correlation structure within a simple analytical model. The note closes with hints at the fast and exact simulation of fractional Gaussian noise and related processes.
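The Cauchy family above is easy to evaluate directly; the sketch below does so, and checks the power-law tail ρ(v) ~ |v|^(−β) that produces the long-range dependence (the function name and default parameters are our own choices for illustration).

```python
import numpy as np

def cauchy_corr(v, c=1.0, alpha=1.0, beta=0.5):
    """Cauchy-family correlation rho(v) = (1 + |v/c|^alpha)^(-beta/alpha).

    alpha in (0, 2] governs the behaviour near the origin (short-term
    structure), while beta > 0 gives the long-range decay rho(v) ~ |v|^(-beta).
    """
    v = np.abs(np.asarray(v, dtype=float))
    return (1.0 + (v / c) ** alpha) ** (-beta / alpha)

lags = np.array([0.0, 1.0, 10.0, 100.0])
rho = cauchy_corr(lags, c=1.0, alpha=1.0, beta=1.0)
```

With alpha = beta = c = 1 the family reduces to ρ(v) = 1/(1 + |v|), so the tail behaviour ρ(v)·|v|^β → 1 can be verified by hand.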
For a stationary long-range dependent point process N(·) with Palm distribution P_0, the Hurst index H ≡ sup{h : lim sup_{t→∞} t^(−2h) var N(0, t] = ∞} is related to the moment index κ ≡ sup{k : E_0(T^k) < ∞} of a generic stationary interval T between points (E_0 denotes expectation with respect to P_0) by 2H + κ ≥ 3, it being known that equality holds for a stationary renewal process. Thus, a stationary point process for which κ < 2 is necessarily long-range dependent with Hurst index greater than ½. An extended example of a Wold process shows that a stationary point process can be both long-range count dependent and long-range interval dependent and have finite mean square interval length, i.e., E_0(T^2) < ∞.
In the statistical analysis of random sets, it is useful to have simple statistics that can be used to describe the realizations of these sets. The cumulants and several other standardized moments such as the correlation and second cumulant can be used for this purpose, but their estimators can be excessively variable if the most straightforward estimation strategy is used. Through exploitation of similarities between this estimation problem and a similar one for a point process statistic, two modifications are proposed. Analytical results concerning the effects of these modifications are found through use of a specialized asymptotic regime. Simulation results establish that the modifications are highly effective at reducing estimator standard deviations for Boolean models. The results suggest that the reductions in variance result from a balanced use of information in the estimation of the first and second moments, through eliminating the use of observations that are not used in second moment estimation.
In this paper we investigate the application of perfect simulation, in particular Coupling from the Past (CFTP), to the simulation of random point processes. We give a general formulation of the method of dominated CFTP and apply it to the problem of perfect simulation of general locally stable point processes as equilibrium distributions of spatial birth-and-death processes. We then investigate discrete-time Metropolis-Hastings samplers for point processes, and show how a variant which samples systematically from cells can be converted into a perfect version. An application is given to the Strauss point process.
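The core CFTP idea (Propp–Wilson) can be illustrated away from the point-process setting of the paper. The sketch below runs coupled upper and lower chains from further and further in the past, reusing past randomness, on a simple monotone birth-and-death chain on {0, …, N} whose stationary distribution is uniform; this toy chain is our own choice and not one of the paper's samplers.

```python
import numpy as np

def step(x, u, N):
    # Monotone update: the same uniform u moves every state the same way
    if u < 1 / 3:
        return min(x + 1, N)
    if u < 2 / 3:
        return max(x - 1, 0)
    return x

def cftp(rng, N=3):
    """Coupling from the past for a monotone chain on {0, ..., N}.

    Runs coupled chains from states N and 0 starting at time -T, doubling T
    (and reusing the previously drawn uniforms) until both chains coalesce
    by time 0; the common value is an exact stationary draw.
    """
    us = {}
    T = 1
    while True:
        for t in range(-T, 0):
            if t not in us:            # extend the randomness backwards only
                us[t] = rng.random()
        hi, lo = N, 0
        for t in range(-T, 0):
            hi, lo = step(hi, us[t], N), step(lo, us[t], N)
        if hi == lo:
            return hi
        T *= 2

rng = np.random.default_rng(0)
draws = [cftp(rng) for _ in range(2000)]
```

Because the chain here is doubly stochastic, its stationary distribution is uniform on {0, 1, 2, 3}, which the empirical frequencies of the exact draws reproduce.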
We estimate the limiting availability of a system when the operating and repair times form a stationary bivariate sequence. The proposed estimators are shown to be consistent and asymptotically normal under certain conditions. In particular, we estimate the limiting availability for a bivariate exponential autoregressive process.
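The natural plug-in estimator of the limiting availability A = E[U]/(E[U] + E[R]) (U an operating time, R a repair time) is sketched below; for simplicity the cycles here are i.i.d. exponential rather than the stationary bivariate sequence treated in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
up = rng.exponential(scale=2.0, size=n)      # operating times, mean 2
down = rng.exponential(scale=1.0, size=n)    # repair times, mean 1

# Plug-in estimator: total up-time over total cycle time.
# True limiting availability here is 2 / (2 + 1) = 2/3.
A_hat = up.sum() / (up.sum() + down.sum())
```

Consistency in this toy setting follows from the law of large numbers applied separately to the numerator and denominator.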
Lamperti's transformation, an isometry between self-similar and stationary processes, is used to solve some problems of linear estimation of continuous-time, self-similar processes. These problems include causal whitening and innovations representations on the positive real line, as well as prediction from certain finite and semi-infinite intervals. The method is applied to the specific case of fractional Brownian motion (FBM), yielding alternate derivations of known prediction results, along with some novel whitening and interpolation formulae. Some associated insights into the problem of discrete prediction are also explored. Closed-form expressions for the spectra and spectral factorization of the stationary processes associated with the FBM are obtained as part of this development.
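For the reader's convenience, the transformation in question is the standard one: if {Y(t)} is stationary, then X(t) = t^H Y(log t), t > 0, is self-similar with index H, and conversely Y(t) = e^(−Ht) X(e^t) recovers a stationary process from an H-self-similar one (the normalization here follows the common convention).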
The paper compares non-parametric (design-based) and parametric (model-based) approaches to the analysis of data in the form of replicated spatial point patterns in two or more experimental groups. Basic questions for data of this kind concern estimating the properties of the underlying spatial point process within each experimental group, and comparing the properties between groups. A non-parametric approach, building on work by Diggle et al. (1991), summarizes each pattern by an estimate of the reduced second moment measure or K-function (Ripley (1977)) and compares mean K-functions between experimental groups using a bootstrap testing procedure. A parametric approach fits particular classes of parametric model to the data, uses the model parameter estimates as summaries and tests for differences between groups by comparing fits with and without the assumption of common parameter values across groups. The paper discusses how either approach can be implemented in the specific context of a single-factor replicated experiment and uses simulations to show how the parametric approach can be more efficient when the underlying model assumptions hold, but potentially misleading otherwise.
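A minimal K-function estimator, of the kind such summaries are built from, is sketched below; for simplicity it uses toroidal (periodic) edge correction on a rectangle rather than the corrections used in practice, and all names are our own.

```python
import numpy as np

def k_function(points, r, width=1.0, height=1.0):
    """Estimate Ripley's K at distance r for a pattern in a rectangle,
    using toroidal (periodic) edge correction:

        K_hat(r) = |W| / (n (n - 1)) * #{ordered pairs i != j : d_ij <= r}.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.abs(pts[:, None, :] - pts[None, :, :])
    d[..., 0] = np.minimum(d[..., 0], width - d[..., 0])    # wrap in x
    d[..., 1] = np.minimum(d[..., 1], height - d[..., 1])   # wrap in y
    dist = np.hypot(d[..., 0], d[..., 1])
    close = (dist <= r).sum() - n            # remove the n self-pairs
    return width * height * close / (n * (n - 1))

rng = np.random.default_rng(5)
pts = rng.random((300, 2))                   # complete spatial randomness (CSR)
K_hat = k_function(pts, r=0.1)               # CSR benchmark value: pi * r^2
```

For a homogeneous Poisson pattern the benchmark K(r) = πr² holds, which makes the estimator easy to sanity-check; deviations above or below it indicate clustering or inhibition respectively.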
We investigate the stability problem for a nonlinear autoregressive model with Markov switching. First we give conditions for the existence and the uniqueness of a stationary ergodic solution. The existence of moments of such a solution is then examined and we establish a strong law of large numbers for a wide class of unbounded functions, as well as a central limit theorem under an irreducibility condition.
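A minimal simulation of the model class in question, under the simplifying assumption of a linear AR(1) in each regime with both regimes contracting (so the stability conditions are comfortably met); the transition matrix and coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 10000
# Two-regime Markov chain on {0, 1}
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
a = np.array([0.5, 0.9])        # AR(1) coefficient in each regime
s = np.zeros(n, dtype=int)      # regime path
x = np.zeros(n)                 # observed autoregression with Markov switching
for t in range(1, n):
    s[t] = rng.choice(2, p=P[s[t - 1]])
    x[t] = a[s[t]] * x[t - 1] + rng.normal()
```

With both regimes stable the trajectory settles into a stationary regime, consistent with the existence and ergodicity results the abstract describes; choosing an explosive coefficient in one regime can still give stability if that regime is visited rarely enough, which is the delicacy the paper's conditions address.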
The distribution of the interpoint distance process of a sequence of pairwise interaction point processes is considered. It is shown that, if the interaction function is piecewise-continuous, then the sequence of interpoint distance processes converges weakly to an inhomogeneous Poisson process under certain sparseness conditions. Convergence of the expectation of the interpoint distance process to the mean of the limiting Poisson process is also established. This suggests a new nonparametric estimator for the interaction function if independent identically distributed samples of the point process are available.
The geometric Brownian motion (Black–Scholes) model for the price of a risky asset stipulates that the log returns are i.i.d. Gaussian. However, typical log-return data show a leptokurtic distribution (a much higher peak and heavier tails than the Gaussian) as well as evidence of strong dependence. In this paper a subordinator model based on fractal activity time is proposed which simply explains these observed features in the data, and whose scaling properties check out well on various datasets.
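The leptokurtosis such subordination produces can be illustrated with a simple special case (not the paper's fractal activity time): taking the activity-time increments to be gamma with unit mean gives returns of variance-gamma type, whose kurtosis exceeds the Gaussian value of 3.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50000
nu = 1.0
# Activity-time increments with unit mean; heavier time-change => fatter tails
dT = rng.gamma(shape=nu, scale=1.0 / nu, size=n)
z = rng.normal(size=n)
r = np.sqrt(dT) * z          # subordinated returns (variance-gamma type)

def kurtosis(x):
    xc = x - x.mean()
    return (xc ** 4).mean() / (xc ** 2).mean() ** 2

k = kurtosis(r)  # Gaussian value is 3; here roughly 3 * (1 + 1/nu) = 6
```

Randomizing the clock thus yields the heavy-tailed marginals seen in return data while each return remains conditionally Gaussian given the activity increment.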