The main goal of Chapter 11 is to demonstrate how the theory developed in the previous chapters can be used in the study of various Markov models that give rise to Markov chains with asymptotically zero drift. Some of these models are popular in stochastic modelling: random walks conditioned to stay positive, state-dependent branching processes or branching processes with migration, and stochastic difference equations. In contrast with the general approach discussed here, the methods available in the literature for investigating these models are mostly model-tailored. We also introduce some new models to which our approach is applicable. For example, we introduce a risk process with a surplus-dependent premium rate that converges to the critical threshold in the net profit condition. Furthermore, we introduce a new class of branching processes with migration and state-dependent offspring distributions.
In Chapter 8 we consider a recurrent Markov chain possessing an invariant measure which is either probabilistic in the case of positive recurrence or σ-finite in the case of null recurrence. Our main aim here is to describe the asymptotic behaviour of the invariant distribution tail for a class of Markov chains with asymptotically zero drift proportional to 1/x. We start with a result which states that a typical stationary Markov chain with asymptotically zero drift always generates a heavy-tailed invariant distribution, which is very different from the case of Markov chains with asymptotically negative drift bounded away from zero. Then we develop techniques needed for deriving precise tail asymptotics of power type.
In the Introduction we mostly discuss nearest-neighbour Markov chains, which represent one of the two classes of Markov chains for which either the invariant measure (in the case of positive recurrence) or the Green function (in the case of transience) is available in closed form. This closed form makes a direct analysis of such Markov chains possible: classification, and tail asymptotics of the invariant probabilities or of the Green function. The discussion sheds some light on what we may expect for general Markov chains. The other class is provided by diffusion processes, which are also discussed in the Introduction.
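For instance, for a nearest-neighbour chain on $\{0,1,2,\dots\}$ with up-step probability $p_k$ and down-step probability $q_k = 1 - p_k$ (notation ours, chosen for illustration), the invariant measure is available in the classical product form

\[
\pi_n \;=\; \pi_0 \prod_{k=0}^{n-1} \frac{p_k}{q_{k+1}}, \qquad n \ge 1,
\]

so classification and tail asymptotics reduce to the analysis of this product.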
Chapters 4 and 5 of the present monograph deal comprehensively with limit theorems for transient Markov chains. In Chapter 4 we consider drifts of order 1/x, and prove limit theorems including convergence to a Γ-distribution and functional convergence to a Bessel process. We also study the asymptotic behaviour of the renewal measure, which is not straightforward as there is no law of large numbers owing to the comparable contributions of the drift and fluctuations.
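The diffusive growth described above can be seen in a minimal simulation sketch of a Lamperti-type chain with drift of order $c/x$; the parameter choices and the reflection at zero are illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative chain with asymptotically zero drift c/x and unit jump
# variance; c = 2.0 puts it in the transient regime (2c > variance).
c, n_steps, n_paths = 2.0, 5_000, 200
x = np.full(n_paths, 10.0)
for _ in range(n_steps):
    drift = c / np.maximum(x, 1.0)                    # drift vanishes like c/x
    x = np.abs(x + drift + rng.normal(size=n_paths))  # reflect at 0

# In the transient regime the chain grows diffusively, X_n of order sqrt(n)
ratio = x.mean() / np.sqrt(n_steps)
```

Because the drift and the fluctuations contribute on the same scale, $X_n/\sqrt{n}$ has a non-degenerate limit rather than a law of large numbers, which is the phenomenon the chapter's Γ-limit and Bessel-process results make precise.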
In Chapter 10 we consider Markov chains with asymptotically constant (non-zero) drift. As shown in the previous chapter, the more slowly the drift tends to zero, the higher the order of the moments that should behave regularly at infinity; this is what makes it possible to describe the asymptotic tail behaviour of the invariant measure. Therefore, it is not surprising that in the case of an asymptotically negative drift bounded away from zero we need to assume that the distribution of jumps converges weakly at infinity, which corresponds, roughly speaking, to the assumption that all moments behave regularly at infinity. In this chapter we slightly extend the notion of an asymptotically homogeneous Markov chain by allowing extended limiting random variables.
In Chapter 2 we introduce a classification of Markov chains with asymptotically zero drift, which relies on relations between the drift and the second moment of jumps, with many improvements on the results known in the literature. Additional assumptions are expressed in terms of truncated moments of higher orders and tail probabilities of jumps. Another, more important, contrast with previous results on recurrence/transience is the fact that we do not use concrete Lyapunov test functions (quadratic or similar). Instead, we construct an abstract Lyapunov function which is motivated by the harmonic function of a diffusion process with the same drift and diffusion coefficient.
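In the notation usual for Lamperti's problem, with $m_1(x)$ and $m_2(x)$ the first and second moments of the jump from state $x$ (symbols assumed here, not fixed by the text), the classical dichotomy reads roughly

\[
\liminf_{x\to\infty} \frac{2x\, m_1(x)}{m_2(x)} > 1 \;\Longrightarrow\; \text{transience},
\qquad
\limsup_{x\to\infty} \frac{2x\, m_1(x)}{m_2(x)} < 1 \;\Longrightarrow\; \text{recurrence};
\]

the refinements of the chapter concern precisely the critical window where this ratio approaches $1$.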
Chapters 4 and 5 of the present monograph deal comprehensively with limit theorems for transient Markov chains. In Chapter 5 we consider drifts decreasing more slowly than 1/x and prove limit theorems including weak and strong laws of large numbers, convergence to the normal distribution, functional convergence to Brownian motion, and the asymptotic behaviour of the renewal measure.
Chapter 7 is the most conceptual part of the book. Our purpose here is to describe, without superfluous details, a change-of-measure strategy which allows us to transform a recurrent chain into a transient one, and vice versa. It is motivated by the exponential change-of-measure technique which goes back to Cramér. In the context of large deviations in collective risk theory, this technique allows one to transform a negatively drifted random walk into one with positive drift. Doob's h-transform is the most natural substitute for an exponential change of measure in the context of Lamperti's problem, that is, in the context of Markov chains with asymptotically zero drift.
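For reference, Doob's h-transform of a transition kernel $P$ by a positive harmonic function $h$ (that is, $Ph = h$) is the kernel

\[
\widehat{P}(x, \mathrm{d}y) \;=\; \frac{h(y)}{h(x)}\, P(x, \mathrm{d}y),
\]

which is again a Markov transition kernel precisely because $h$ is harmonic.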
Such transformations naturally connect the previous chapters on the asymptotic behaviour of transient chains with the subsequent chapters, which are devoted to recurrent chains. An important novelty, in comparison with the classical Doob h-transform, is that we use weight functions that are not necessarily harmonic but only asymptotically harmonic at infinity. The main challenge is to identify such functions under various drift scenarios.
Deep geological repositories are critical for the long-term storage of hazardous materials, where understanding the mechanical behavior of emplacement drifts is essential for safety assurance. This study presents a surrogate modeling approach for the mechanical response of emplacement drifts in rock salt formations, utilizing Gaussian processes (GPs). The surrogate model serves as an efficient substitute for high-fidelity mechanical simulations in many-query scenarios, including time-dependent sensitivity analyses and calibration tasks. By significantly reducing computational demands, this approach facilitates faster design iterations and enhances the interpretation of monitoring data. The findings indicate that only a few key parameters are sufficient to accurately reflect in-situ conditions in complex rock salt models. Identifying these parameters is crucial for ensuring the reliability and safety of deep geological disposal systems.
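The surrogate-modelling idea can be illustrated with a minimal sketch using scikit-learn's GaussianProcessRegressor; the two-parameter "simulator" below is a synthetic stand-in for the high-fidelity rock-salt model, and all names and parameter ranges are assumptions for illustration only.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

# Synthetic stand-in for an expensive mechanical simulation: maps two
# material parameters to a scalar response (purely illustrative).
def simulator(theta):
    return np.sin(theta[:, 0]) + 0.5 * theta[:, 1] ** 2

X_train = rng.uniform(0.0, 2.0, size=(30, 2))  # sampled parameter sets
y_train = simulator(X_train)

# GP surrogate: cheap to evaluate and provides predictive uncertainty
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, y_train)

X_query = rng.uniform(0.0, 2.0, size=(5, 2))
mean, std = gp.predict(X_query, return_std=True)  # surrogate evaluations
```

In a many-query setting (sensitivity analysis, calibration) the fitted `gp` replaces the simulator, and `std` indicates where further high-fidelity runs would be most informative.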
Longevity risk is threatening the sustainability of traditional pension systems. To deal with this issue, decumulation strategies alternative to annuities have been proposed in the literature. However, heterogeneity in mortality experiences in the pool of policyholders due to socio-economic classes generates inequity, because of implicit wealth transfers from the more disadvantaged to the wealthier classes. We address this issue in a Group Self-Annuitization (GSA) scheme in the presence of stochastic mortality by proposing a redistributive GSA scheme where benefits are optimally shared across classes. The expected present values of the benefits in a standard GSA scheme show relevant gaps across socio-economic groups, which are reduced in the redistributive GSA scheme. We explore sensitivity to pool size, interest rates and mortality assumptions.
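The benefit-adjustment mechanism of a standard GSA scheme can be sketched as follows (notation is ours, not the paper's): the period-$t$ benefit is updated by a mortality-experience adjustment and an interest adjustment, roughly

\[
B_{t+1} \;=\; B_t \times \frac{\text{expected survivors at } t+1}{\text{actual survivors at } t+1} \times \frac{1+R_t}{1+r},
\]

where $R_t$ is the realised return and $r$ the assumed interest rate. With heterogeneous mortality across socio-economic classes, the mortality credit in the first factor systematically favours the longer-lived classes, which is the inequity the redistributive scheme addresses.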
Vehicle telematics provides granular data for dynamic driving risk assessment, but current methods often rely on aggregated metrics (e.g., harsh braking counts) and do not fully exploit the rich time-series structure of telematics data. In this paper, we introduce a flexible framework using a continuous-time hidden Markov model (CTHMM) to model and analyse trip-level telematics data. Unlike existing methods, the CTHMM models raw time-series data without predefined thresholds on harsh driving events or assumptions about accident probabilities. Moreover, our analysis is based solely on telematics data, requiring no traditional covariates such as driver or vehicle characteristics. Through unsupervised anomaly detection based on pseudo-residuals, we identify deviations from normal driving patterns—defined as the prevalent behaviour observed in a driver’s history or across the population—which are linked to accident risk. Validated on both controlled and real-world datasets, the CTHMM effectively detects abnormal driving behaviour and trips with increased accident likelihood. In real data analysis, higher anomaly levels in longitudinal and lateral accelerations consistently correlate with greater accident risk, with classification models using this information achieving ROC-AUC values as high as 0.86 for trip-level analysis and 0.78 for distinguishing drivers with claims. Furthermore, the methodology reveals significant behavioural differences between drivers with and without claims, offering valuable insights for insurance applications, accident analysis, and prevention.
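The pseudo-residual idea can be sketched with a discrete-time two-state Gaussian HMM as a simplification of the paper's continuous-time model; the states, parameters, and synthetic "trip" below are assumptions for illustration, not fitted to any telematics dataset.

```python
import numpy as np
from scipy.stats import norm

# Illustrative 2-state Gaussian HMM: state 0 = "smooth" driving,
# state 1 = "aggressive" driving (all parameters are assumed).
A = np.array([[0.95, 0.05],
              [0.10, 0.90]])   # state transition matrix
mu = np.array([0.0, 2.5])      # state-dependent means (e.g. acceleration)
sigma = np.array([0.5, 1.0])   # state-dependent standard deviations

def pseudo_residuals(y, pi0=np.array([0.5, 0.5])):
    """Normal pseudo-residuals r_t = Phi^{-1}(F(y_t | past)); a large
    |r_t| flags an observation the model finds surprising."""
    alpha, res = pi0, []
    for obs in y:
        pred = alpha @ A                       # one-step-ahead state probs
        cdf = pred @ norm.cdf(obs, mu, sigma)  # predictive CDF at obs
        res.append(norm.ppf(np.clip(cdf, 1e-12, 1 - 1e-12)))
        alpha = pred * norm.pdf(obs, mu, sigma)
        alpha /= alpha.sum()                   # forward-filter update
    return np.array(res)

rng = np.random.default_rng(2)
trip = np.concatenate([rng.normal(0.0, 0.5, size=200),  # smooth driving
                       [6.0]])                          # one extreme event
r = pseudo_residuals(trip)
```

Under the fitted model, pseudo-residuals for normal driving are approximately standard normal, so thresholding |r_t| gives an unsupervised anomaly score at the observation level.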
We develop general conditions for weak convergence of adaptive Markov chain Monte Carlo processes, which is shown to imply a weak law of large numbers for bounded Lipschitz continuous functions. This yields an estimation theory for adaptive Markov chain Monte Carlo where previously developed theory in total variation may fail or be difficult to establish. Extensions of weak convergence to general Wasserstein distances are established, along with a weak law of large numbers for possibly unbounded Lipschitz functions. The results are applied to autoregressive processes in various settings, unadjusted Langevin processes, and adaptive Metropolis–Hastings.
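The kind of weak law of large numbers discussed above can be illustrated with a minimal adaptive Metropolis sampler in the spirit of Haario-type algorithms; this is a simplified sketch under assumed settings (standard normal target, diminishing adaptation of the proposal scale), not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_target(x):
    return -0.5 * x * x   # standard normal target (illustrative)

n = 50_000
x, scale = 0.0, 1.0
mean_est, var_est = 0.0, 1.0
chain = np.empty(n)
for t in range(n):
    prop = x + rng.normal() * scale
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        x = prop
    chain[t] = x
    # diminishing adaptation: running moments drive the proposal scale
    eta = 1.0 / (t + 2)
    mean_est += eta * (x - mean_est)
    var_est += eta * ((x - mean_est) ** 2 - var_est)
    scale = 2.38 * np.sqrt(var_est) + 1e-6

# weak LLN for a bounded Lipschitz function: tanh has true mean 0 here
estimate = np.tanh(chain).mean()
```

The ergodic average of the bounded Lipschitz function `tanh` converges to its expectation under the target even though the transition kernel keeps changing, which is the behaviour the weak-convergence conditions are designed to guarantee.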