
Hidden tail chains and recurrence equations for dependence parameters associated with extremes of stationary higher-order Markov chains

Published online by Cambridge University Press:  02 December 2024

Ioannis Papastathopoulos*
Affiliation: University of Edinburgh
Adrian Casey*
Affiliation: University of Edinburgh
Jonathan A. Tawn**
Affiliation: Lancaster University

*Postal address: School of Mathematics, University of Edinburgh, Edinburgh, EH9 3FD, UK.
**Postal address: Department of Mathematics and Statistics, Lancaster University, Lancaster, LA1 4YF, UK. Email address: j.tawn@lancaster.ac.uk

Abstract

We derive some key extremal features for stationary kth-order Markov chains that can be used to understand how the process moves between an extreme state and the body of the process. The chains are studied given that there is an exceedance of a threshold, as the threshold tends to the upper endpoint of the distribution. Unlike previous studies with $k>1$, we consider processes where standard limit theory describes each extreme event as a single observation, without any information about the transition to and from the body of the distribution. Our work uses a different asymptotic theory that results in non-degenerate limit laws for such processes. We study the extremal properties of the initial distribution and the transition probability kernel of the Markov chain under weak assumptions, for broad classes of extremal dependence structures that cover both asymptotically dependent and asymptotically independent Markov chains. For chains with $k>1$, the transition of the chain away from the exceedance involves novel functions of the k previous states, in comparison to just the single value when $k=1$. This increases the complexity of determining the form of this class of functions, their properties, and the method of their derivation in applications. We find that it is possible to derive an affine normalization, dependent on the threshold excess, such that non-degenerate limiting behaviour of the process, in the neighbourhood of the threshold excess, is assured for all lags. We find that these normalization functions have an attractive structure that has parallels to the Yule–Walker equations. Furthermore, the limiting process is always linear in the innovations. We illustrate the results with the study of kth-order stationary Markov chains with exponential margins based on widely studied families of copula dependence structures.
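The setting of the abstract can be illustrated with a simple simulation. The sketch below, which is not taken from the paper, generates a stationary first-order ($k=1$) Markov chain with a bivariate Gaussian copula and standard exponential margins, initialized from an exceedance of a high threshold $u$; the correlation parameter `rho`, the threshold `u`, and the chain length are illustrative choices only.

```python
import numpy as np
from scipy.stats import norm, expon

# Illustrative parameters (not from the paper): copula correlation,
# high threshold on the exponential scale, and number of transitions.
rho, u, n_steps = 0.7, 9.0, 10
rng = np.random.default_rng(1)

# Initial state X_0 | X_0 > u: by memorylessness of the exponential
# distribution, the excess above u is again standard exponential.
x = u + rng.exponential()
chain = [x]

for _ in range(n_steps):
    # Map the current state to the Gaussian scale via the
    # probability integral transform.
    z = norm.ppf(expon.cdf(x))
    # One step of the Gaussian-copula transition kernel: conditionally
    # on Z_t = z, Z_{t+1} is N(rho * z, 1 - rho^2).
    z_next = rho * z + np.sqrt(1.0 - rho**2) * rng.standard_normal()
    # Map back to standard exponential margins.
    x = expon.ppf(norm.cdf(z_next))
    chain.append(x)
```

Because the Gaussian copula is asymptotically independent, a chain started from an extreme state drifts back towards the body of the distribution over successive lags, which is the regime where the hidden tail chain of the paper describes the limiting behaviour.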

Information

Type
Original Article
Creative Commons
Creative Commons License: CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Applied Probability Trust

Figure 1. Properties for each hidden tail chain for Examples 1–3 with $k=5$. Presented for each chain are pointwise 2.5% and 97.5% quantiles of the sampling distribution (shaded region), mean of the sampling distribution (dashed line), and one realization from the (hidden) tail chain (solid line). The copula of ${\boldsymbol{X}}_{0\,:\,k}$ used to derive the (hidden) tail chain comes from (a) the standard multivariate Gaussian copula with Toeplitz positive definite covariance matrix $\Sigma$ generated by the vector $(1, 0.70, 0.57, 0.47, 0.39, 0.33)$, and (b) the inverted logistic with $\alpha=\log({\textbf{1}}_{k+1}^\top\,\Sigma^{-1}\,{\textbf{1}}_{k+1})/\log k=0.27$ (here the value of the function $\text{mod}_k(t)$ is also highlighted on the mean function of the time series with numbers ranging from 0 to 4 for all t), (c) the logistic copula with $\alpha=0.32$, and (d) the Hüsler–Reiss copula with Toeplitz positive definite covariance matrix generated by the vector $(1, 0.9, 0.7, 0.5, 0.3, 0.1)$. The parameters for all copulas are chosen so that the coefficient of residual tail dependence $\eta$ [16] and the extremal coefficient $\theta$ [2] are equal for the copulas in panels (a) and (b), and for those in panels (c) and (d), respectively.


Figure 2. (a) Time series plot showing a single realization from the second-order Markov chain with asymmetric logistic dependence (46) initialized from the distribution of $X_0\mid X_0 > 9$. For this realization, there are three change-points $T_1^X$, $T_2^X$, and $T_3^X$, which are highlighted with crosses. (b) Scatter-plot of states $\{(X_{t-1},X_{t})\,:\, X_{t-1} > 9\}$ drawn from $10^3$ realizations of the Markov chain initialized from the distribution of $X_0\mid X_0>9$. Points for which $X_{t-2}<9$ and $X_{t-2}\geq 9$ are highlighted with grey crosses and black circles, respectively. (c) Scatter-plot of consecutive states $(X_{t-2},X_{t})$. Points for which $X_{t-1}<9$ and $X_{t-1}\geq 9$ are highlighted with grey crosses and black circles, respectively. (d) Scatter-plot of states $\{(\max(X_{t-2},X_{t-1}), X_t)\,:\, \max(X_{t-2}, X_{t-1}) > 9\}$ and line $X_t = c\,\max(X_{t-2},X_{t-1})$ with $c=\frac{1}{2}$ superposed. (e) Histogram of termination time $T^B$ obtained from $10^4$ realizations from the hidden tail chain. The Monte Carlo estimate of the mean of the distribution is $8.42$ and is shown with a dashed vertical line. (f) Pointwise 2.5% and 97.5% quantiles of the sampling distribution (shaded region), mean of the sampling distribution (dashed line), and one realization from the hidden tail chain (solid line), conditioned on $T^B=8$. The value of the latent Bernoulli process $B_t$ is highlighted with a cross when $B_t=0$ and with a circle when $B_t=1$. For all plots presented, $\theta_{0}=\theta_{1}=\theta_{2}=\theta_{01}=\theta_{02} = 0.3$, $\theta_{012}=0.1$, and $\nu_{01}=\nu_{02}=\nu_{012}=0.5$.