
Extremes of Gaussian random fields with nonadditive dependence structure

Published online by Cambridge University Press:  13 October 2025

Long Bai*
Affiliation:
Xi’an Jiaotong-Liverpool University
Krzysztof Dȩbicki*
Affiliation:
University of Wroclaw
Peng Liu*
Affiliation:
University of Essex
*Postal address: Department of Statistics and Actuarial Science, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China. Email: Long.Bai@xjtlu.edu.cn
**Postal address: Mathematical Institute, University of Wroclaw, pl. Grunwaldzki 2/4, 50-384 Wroclaw, Poland. Email: Krzysztof.Dębicki@math.uni.wroc.pl
***Postal address: School of Mathematics, Statistics and Actuarial Science, University of Essex, Wivenhoe Park, Colchester CO4 3SQ, UK. Email: peng.liu@essex.ac.uk

Abstract

We derive the exact asymptotics of $\mathbb{P} {\{\sup\nolimits_{\boldsymbol{t}\in {\mathcal{A}}}X(\boldsymbol{t})>u \}}$ as $u\to\infty$ for a centered Gaussian field $X({\boldsymbol{t}}),\ {\boldsymbol{t}}\in \mathcal{A}\subset\mathbb{R}^n$, $n>1$, with almost surely continuous sample paths, for which $\arg \max_{\boldsymbol{t}\in {\mathcal{A}}} {\mathrm{Var}}(X(\boldsymbol{t}))$ is a Jordan set of dimension $k\le n$ with finite and positive Lebesgue measure, and whose dependence structure is not necessarily locally stationary. Our findings are applied to derive the asymptotics of tail probabilities related to performance tables and chi processes, particularly when the covariance structure is not locally stationary.

Information

Type
Original Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Let $X({\boldsymbol{t}}),\ {\boldsymbol{t}}\in \mathbb{R}^n$ , $n>1$ be a centered Gaussian field with continuous sample paths. Due to its significance in the extreme value theory of stochastic processes, statistics, and applied probability, the distributional properties of

(1.1) \begin{eqnarray}\sup\nolimits_{\boldsymbol{t}\in {\mathcal{A}}}X(\boldsymbol{t}),\end{eqnarray}

with a bounded set $\mathcal{A}\subset \mathbb{R}^n$, have been extensively investigated. While the exact distribution of (1.1) is known only for certain specific processes, the asymptotics of

(1.2) \begin{eqnarray}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in {\mathcal{A}}}X(\boldsymbol{t})>u \right \} \end{eqnarray}

as $u\to\infty$ has been intensively analyzed; see, e.g., monographs by Adler & Taylor [Reference Adler and Taylor2], Azaïs & Wschebor [Reference Azaïs and Wschebor3], Berman [Reference Berman7], Ledoux [Reference Ledoux21], Lifshits [Reference Lifshits24], Piterbarg [Reference Piterbarg31], Talagrand [Reference Talagrand34], and references therein. As advocated therein, the set of points that maximize the variance, $\mathcal{M}^\star\,:\!=\,\arg \max_{\boldsymbol{t}\in {\mathcal{A}}} {\mathrm{Var}}(X(\boldsymbol{t}))$, plays a crucial role in determining the exact asymptotics of (1.2). The best-understood cases involve situations where (i) $v_n(\mathcal{M}^\star) \in (0, \infty)$, with $v_n$ representing the Lebesgue measure on $\mathbb{R}^n$, and the field $X(\boldsymbol{t})$ is homogeneous on $\mathcal{M}^\star$, or (ii) the set $\mathcal{M}^\star$ consists of distinct points. In case (i), one can argue that

\[\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in {\mathcal{A}}}X(\boldsymbol{t}) > u \right \} \sim \mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in {\mathcal{M}^\star}}X(\boldsymbol{t})>u \right \} \quad \textrm{as}\ u\to\infty.\]

For an intuitive description of case (ii), suppose that $\mathcal{M}^\star=\{\boldsymbol{t}^\star\}$ and ${\mathrm{Var}}(X(\boldsymbol{t}^\star))=1$ . Then, the interplay between the local behavior of the standard deviation and the correlation function in the vicinity of $\mathcal{M}^\star$ affects the asymptotics, which takes the form

(1.3) \begin{eqnarray}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in {\mathcal{A}}}X(\boldsymbol{t})>u \right \} \sim f(u)\mathbb{P} \left \{X(\boldsymbol{t}^\star)>u \right \} \quad \textrm{as } u\to\infty,\end{eqnarray}

where f(u) is some power function. An applicable assumption for obtaining the exact asymptotics as described in (1.3) is that, in the neighborhood of $\boldsymbol{t}^\star$ , both the standard deviation and the correlation function of $X(\boldsymbol{t})$ factorize according to the additive form

(1.4) \begin{eqnarray}1- \sigma(\boldsymbol{t}) \sim \sum_{j=1}^3 g_j(\bar{\boldsymbol{t}}^\star_j-\bar{\boldsymbol{t}}_j),\qquad 1-\mathrm{corr}(\boldsymbol{s},\boldsymbol{t})\sim \sum_{j=1}^3 h_j(\bar{\boldsymbol{s}}_j-\bar{\boldsymbol{t}}_j)\end{eqnarray}

as $\boldsymbol{s},\boldsymbol{t} \to \boldsymbol{t}^\star$ , where the coordinates of $\mathbb{R}^n$ are split into disjoint sets $\Lambda_1,\Lambda_2,\Lambda_3$ with $\Lambda_1\cup\Lambda_2\cup\Lambda_3=\{1,\ldots ,n\}$ , $\bar{\boldsymbol{t}}_j=(t_{i})_{i\in \Lambda_j},$ $j=1,2,3$ for $\boldsymbol{t} \in \mathbb{R}^n$ and $g_j,h_j$ are some homogeneous functions (see (2.7)) such that

(1.5) \begin{eqnarray}\lim_{\bar{\boldsymbol{t}}_1\to \overline{0}_1}\frac{g_1(\bar{\boldsymbol{t}}_1)}{ h_1(\bar{\boldsymbol{t}}_1)}=0,\qquad\lim_{\bar{\boldsymbol{t}}_2\to \overline{0}_2}\frac{g_2(\bar{\boldsymbol{t}}_2)}{ h_2(\bar{\boldsymbol{t}}_2)}\in (0,\infty),\qquad\lim_{\bar{\boldsymbol{t}}_3\to \overline{0}_3}\frac{g_3(\bar{\boldsymbol{t}}_3)}{ h_3(\bar{\boldsymbol{t}}_3)}=\infty.\end{eqnarray}

Under conditions (1.4)–(1.5), the function f introduced in (1.3) can be factorized as

\[f(u)=f_1(u)f_2(u)f_3(u),\]

where $f_i$ corresponds to $\Lambda_i$ and we have the following.

  • In the direction of the coordinates $\Lambda_1$ , the standard deviation function is relatively flat compared with the correlation function. Then, for the coordinates $\Lambda_1$ , a substantial neighborhood of $\mathcal{M}^*$ contributes to the asymptotics, and $f_1(u) \to \infty$ as $u \to \infty$ .

  • In the direction of the coordinates $\Lambda_2$ , the standard deviation function is comparable to the correlation function. Then, with respect to the coordinates $\Lambda_2$ , some relatively small neighborhood of $\mathcal{M}^*$ is important for the asymptotics, and $f_2(u)\to \mathcal{P}\in (1,\infty)$ as $u\to\infty$ .

  • In the direction of the coordinates $\Lambda_3$ , the standard deviation function decreases relatively fast compared with the correlation function. Then, for the coordinates $\Lambda_3$ , only the sole optimizer $\boldsymbol{t}^\star$ is responsible for the asymptotics, and $f_3(u)\to 1$ as $u\to\infty$ . We refer the reader to Piterbarg [Reference Piterbarg31, Chapter 8] for more details.
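
For a one-dimensional illustration of the three regimes in (1.5), take $g_j(t)=b|t|^{\beta_j}$ and $h_j(t)=|t|^{\alpha_j}$ for some $b>0$; then

\begin{align*}\frac{g_j(t)}{h_j(t)}=b|t|^{\beta_j-\alpha_j}\to\begin{cases}0, & \beta_j>\alpha_j,\\ b, & \beta_j=\alpha_j,\\ \infty, & \beta_j<\alpha_j,\end{cases}\qquad t\to 0,\end{align*}

so the three limits in (1.5) correspond exactly to the relative orders of the homogeneity exponents of $g_j$ and $h_j$.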

Much less is known about the mixed cases when the set $\mathcal{M}^\star$ is a more general subset of $\mathcal{A}$ and/or when the local dependence structure of the analyzed process does not factorize according to the additive structure as in (1.4)–(1.5).

The exceptions available in the literature have been analyzed separately and address specific cases; see, e.g., [Reference Adler and Brown1, Reference Chan and Lai9–Reference Dębicki, Hashorva and Ji11, Reference Liu26, Reference Piterbarg and Prisyazhnyuk33]. We would like to highlight a significant recent contribution by Piterbarg [Reference Piterbarg32], which focuses on the analysis of high excursion probabilities for centered Gaussian fields defined on a finite-dimensional manifold, where $\mathcal{M}^\star$ is a smooth submanifold. In this intuitively presented work, under the assumption that the correlation function of X is locally homogeneous, three scenarios for $\mathcal{M}^\star\varsubsetneq \mathcal{A}$ are examined: (i) the stationary-like case, (ii) the transition case, and (iii) the Talagrand case. In the notation of (1.4)–(1.5), these scenarios correspond to $\Lambda_2=\Lambda_3=\emptyset$ for (i), $\Lambda_1=\Lambda_3=\emptyset$ for (ii), and $\Lambda_1=\Lambda_2=\emptyset$ for (iii).

The primary finding of this contribution, presented in Theorem 2.1, gives a unified result that provides the exact asymptotic behavior of (1.2) for a certain class of centered Gaussian fields for which $\mathcal{M}^\star$ is a bounded Jordan set of dimension $k_0\le n$ and the dependence structure of the entire field in the vicinity of $\mathcal{M}^\star$ does not necessarily follow the decompositions outlined in (1.4)–(1.5). In contrast to [Reference Piterbarg32], we allow mixed scenarios where all sets $\Lambda_1$ , $\Lambda_2$ , and $\Lambda_3$ can be nonempty simultaneously. Furthermore, we examine more general local structures of the correlation function than those presented in (1.4). More specifically, we relax the assumption that the correlation function is locally stationary for coordinates in $\Lambda_2, \Lambda_3$ by replacing $h_j(\overline{\boldsymbol{s}}_j - \bar{\boldsymbol{t}}_j)$ with $\tilde{h}_j(\overline{\boldsymbol{s}}_j, \bar{\boldsymbol{t}}_j)$ in (1.4). This generalization, which constitutes the main technical challenge of this contribution, is particularly important for the examples discussed in Sections 3.1 and 3.2.

In Section 3 we present two examples that demonstrate the applicability of Theorem 2.1. Specifically, in Section 3.1 we derive the exact asymptotics of

(1.6) \begin{eqnarray}\mathbb{P} \left \{D_n^{\alpha}>u \right \} \quad \textrm{as}\ u\to\infty,\end{eqnarray}

where

\begin{eqnarray*}D_n^{\alpha}=\sup\nolimits_{\boldsymbol{t}\in\mathcal{S}_{n}}Z^{\alpha} (\boldsymbol{t}),\qquad \boldsymbol{t}=(t_1,\ldots, t_n),\qquad \mathcal{S}_{n}=\{\boldsymbol{t}\in \mathbb{R}^n\colon 0\leq t_1\leq\cdots\leq t_n\leq 1\},\end{eqnarray*}

and

\begin{eqnarray*}Z^{\alpha}(\boldsymbol{t})=\sum_{i=1}^{n+1}a_i(B^{\alpha}_{i}(t_i)- B^{\alpha}_{i}(t_{i-1})),\end{eqnarray*}

with $t_0=0,t_{n+1}=1$ , constants $a_i\in(0,1]$ and $B^{\alpha}_{i},\ i=1,\ldots, n+1$ being mutually independent fractional Brownian motions with Hurst index $\alpha/2\in(0,1)$ . This random variable plays an important role in many areas of probability theory, and its analysis motivates the development of the theory presented in this paper. Due to its relation with some notions based on the performance table (see Section 3.1), the random variable $D_n^{1}$ emerges as a limit in several important quantities considered in the modeling of queues in series, totally asymmetric exclusion processes, or oriented percolation [Reference Baryshnikov6, Reference Glynn and Whitt16, Reference O’Connell29]. If $ a_i \equiv 1 $ then $ D_n^{1} $ has the same distribution as the largest eigenvalue of an n -dimensional Gaussian unitary ensemble (GUE) matrix [Reference Gravner, Tracy and Widom18]. If $\alpha = 1$ but the values of $a_i$ are not all the same, then the size of $\mathcal{M}^\star$ depends on the number of coordinates for which $a_i = 1$ (recall that we assume that $a_i \leq 1$ ). In this case, the correlation structure of the entire field is not locally homogeneous. Utilizing Theorem 2.1 allows us to derive the exact asymptotics of (1.6) as $u\to\infty$ for $\alpha\in(0,2)$ ; see Proposition 3.1.

Another application of Theorem 2.1 addresses the extremes of the class of chi processes $\chi(t),t\ge0$ , defined as

\[\chi(t)\,:\!=\,\sqrt{\sum_{i=1}^n X_i^2(t)}, \quad t\ge0,\]

where $X_i(t)$ , $i=1,\ldots ,n$ are mutually independent Gaussian processes. Due to their importance in statistics, asymptotic properties of high excursions of chi processes have attracted substantial interest. We refer to the classical work by Lindgren [Reference Lindgren25] and more recent contributions [Reference Bai and Kalaj5, Reference Hashorva and Ji19, Reference Liu and Ji27, Reference Liu and Ji28, Reference Piterbarg30, Reference Piterbarg32], which address nonstationary or noncentered cases. Importantly, $\sup\nolimits_{t\in [0,1]} \chi(t)$ can be rewritten as a supremum of some Gaussian field

\begin{align*}\sup\nolimits_{t\in [0,1]}\chi(t)=\sup\nolimits_{t\in [0,1],\ \sum_{i=1}^{n}v_i^2=1}\sum_{i=1}^{n}X_i(t)v_i.\end{align*}

However, the common assumption on the models analyzed so far is that $X_i(t)$ are locally stationary, as in (1.4). In Section 3.2 we use Theorem 2.1 to examine the asymptotics of the probability for high exceedances of $\chi(t)$ in a model where the covariance structure of $X_i$ is not locally stationary; see Proposition 3.2 for more details.
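
The representation of $\sup\nolimits_{t\in [0,1]}\chi(t)$ displayed above is the dual representation of the Euclidean norm, $\sup_{|\boldsymbol{v}|=1}\langle \boldsymbol{v},\boldsymbol{x}\rangle=|\boldsymbol{x}|$, attained at $\boldsymbol{v}=\boldsymbol{x}/|\boldsymbol{x}|$. A quick numerical sanity check at a fixed t (the dimension n = 5 and the sample below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5)                 # one realization of (X_1(t), ..., X_n(t)), n = 5
norm_x = np.linalg.norm(x)

# the optimizing direction v* = x/|x| attains <v*, x> = |x| ...
v_star = x / norm_x

# ... while no unit vector can exceed |x| (Cauchy-Schwarz):
v = rng.normal(size=(100_000, 5))
v /= np.linalg.norm(v, axis=1, keepdims=True)   # random points on the unit sphere
print(norm_x, v_star @ x, (v @ x).max())
```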

The remainder of this paper is organized as follows. The concept and main steps of the proof of Theorem 2.1 are presented in Section 4. Detailed proofs of Theorem 2.1, Propositions 3.1 and 3.2, and several auxiliary results can be found in the appendices.

2. Main Result

Let $X(\boldsymbol{t}),\ \boldsymbol{t}\in \mathcal{A}$ be an n-dimensional centered Gaussian field with continuous trajectories, variance function $\sigma^2(\boldsymbol{t})$ , and correlation function $r(\boldsymbol{s},\boldsymbol{t})$ , where $\mathcal{A}$ is a bounded set in $\mathbb{R}^n$ . Suppose that the maximum of the variance function $\sigma^2(\boldsymbol{t})$ over $\mathcal{A}$ is attained on a Jordan subset of $\mathcal{A}$ . Without loss of generality, let us assume that $\max_{\boldsymbol{t}\in \mathcal{A}} \sigma^2(\boldsymbol{t})=1$ . We denote by $\mathcal{M}^*$ the set $\{\boldsymbol{t}\in \mathcal{A}\colon \sigma^2(\boldsymbol{t})=1 \}$ .

Throughout this paper, all the operations on vectors are meant componentwise. For instance, for any given ${\bf{x}} = (x_1,\ldots, x_n)\in\mathbb{R}^n$ and ${\bf{y}} = (y_1,\ldots, y_n)\in \mathbb{R}^n$ , we write ${\bf{x}}{\bf{y}}=(x_1y_1,\ldots, x_ny_n)$ , $1/{\bf{x}}=(1/x_1,\dots, 1/x_n)$ for $x_i> 0,\ i=1,\dots, n$ , and ${\bf{x}}^{{\bf{y}}}=(x_1^{y_1},\dots, x_n^{y_n})$ for $x_i, y_i\geq 0, \ i=1,\dots, n$ . Moreover, we say that ${\bf{x}}\geq {\bf{y}}$ if $x_i\geq y_i,\ i=1,\dots, n$ .

Suppose that the coordinates of $\mathbb{R}^n$ are split into four disjoint sets $\Lambda_i,\ i=0,1,2,3$ with $k_i=\#\bigcup_{j=0}^i\Lambda_j, i=0,1,2,3$ (implying that $1\leq k_0\leq k_1\leq k_2\leq k_3 $ with $k_3=n$ ) and

\begin{eqnarray*} \tilde{\boldsymbol{t}}\,:\!=\,(t_i)_{i\in \Lambda_0},\bar{\boldsymbol{t}}_j\,:\!=\,(t_{i})_{i\in \Lambda_j}, \quad j=1,2,3,\end{eqnarray*}

in such a way that $\mathcal{M}^*=\{\boldsymbol{t}\in\mathcal{A}\colon t_{i}=0, i\in \bigcup_{j=1,2,3}\Lambda_j\}$ . Let

\begin{align*}\mathcal{M}\,:\!=\,\{\tilde{\boldsymbol{t}}\colon \boldsymbol{t}\in\mathcal{A}, t_{i}=0, i\in \bigcup_{j=1,2,3}\Lambda_j\}\subset\mathbb{R}^{k_0}\end{align*}

denote the projection of $\mathcal M^*$ onto a $k_0$ -dimensional space. Note that $\mathcal{M}^*=\mathcal{A}$ if $\bigcup_{j=1,2,3}\Lambda_j=\emptyset$ . Sets $\Lambda_1,\Lambda_2,\Lambda_3$ play roles similar to those described in the introduction (see (A2) below), while $\Lambda_0$ is related to $\mathcal{M}^*$ via $\mathcal{M}$ .

Suppose that $\mathcal{M}$ is Jordan measurable with $v_{k_0}(\mathcal{M})\in (0,\infty)$, where $v_{k_0}$ denotes the Lebesgue measure on $\mathbb{R}^{k_0}$, and $\{(t_1,\dots, t_n)\colon \tilde{\boldsymbol{t}}\in\mathcal{M},\ t_i\in [0,\varepsilon),\ i\in \bigcup_{j=1,2,3}\Lambda_j \}\subseteq \mathcal{A}\subseteq \{(t_1,\dots, t_n)\colon \tilde{\boldsymbol{t}}\in\mathcal{M},\ t_i\in [0,\infty),\ i\in \bigcup_{j=1,2,3}\Lambda_j \}$ for some $\varepsilon\in(0,1)$ small enough. Furthermore, we impose the following assumptions on the standard deviation and the correlation functions of X.

(A1) There exists a centered Gaussian random field $W(\boldsymbol{t}),\ \boldsymbol{t}\in[0,\infty)^n$ with continuous sample paths and a positive continuous vector-valued function $\boldsymbol{a}(\tilde{\boldsymbol{z}})=(a_1(\tilde{\boldsymbol{z}}),\ldots, a_n(\tilde{\boldsymbol{z}})),\ \tilde{\boldsymbol{z}}=(z_i)_{i\in\Lambda_0}\in \mathcal{M}$ satisfying

(2.1) \begin{eqnarray}\inf_{i=1,\dots ,n}\inf_{\tilde{\boldsymbol{z}}\in \mathcal{M}}a_i(\tilde{\boldsymbol{z}})>0\end{eqnarray}

such that

(2.2) \begin{eqnarray}\lim_{\delta\rightarrow 0}\sup\nolimits_{\boldsymbol{z}\in\mathcal{M}^*}\underset{\left\lvert \boldsymbol{s}-\boldsymbol{z} \right\rvert,\left\lvert \boldsymbol{t}-\boldsymbol{z} \right\rvert\leq \delta}{\sup\nolimits_{\boldsymbol{s},\boldsymbol{t}\in \mathcal{A}}}\left\lvert \frac{1-r(\boldsymbol{s},\boldsymbol{t})}{\mathbb{E}\left\{\left(W(\boldsymbol{a}(\tilde{\boldsymbol{z}})\boldsymbol{s})-W(\boldsymbol{a}(\tilde{\boldsymbol{z}})\boldsymbol{t})\right)^2\right\}}-1 \right\rvert=0,\end{eqnarray}

where the increments of W are homogeneous if we fix both $\bar{\boldsymbol{t}}_2$ and $ \bar{\boldsymbol{t}}_3$ , and there exists a vector $\boldsymbol{\alpha}=(\alpha_1,\dots, \alpha_n)$ with $\alpha_i\in (0,2],1\leq i\leq n$ such that, for any $u>0$ ,

(2.3) \begin{eqnarray}\mathbb{E}\{(W(u^{-2/\boldsymbol{\alpha}}\boldsymbol{s})-W(u^{-2/\boldsymbol {\alpha}}\boldsymbol{t}))^2\}=u^{-2}\mathbb{E}\{(W(\boldsymbol{s})- W(\boldsymbol{t}))^2\}.\end{eqnarray}

Moreover, there exist $d>0$ , $\mathcal{Q}_i>0$ , $i=1,2$ such that, for any $\boldsymbol{s},\boldsymbol{t}\in \mathcal{A}$ and $|\boldsymbol{s}-\boldsymbol{t}| < d$ ,

(2.4) \begin{eqnarray}\mathcal{Q}_1\sum_{i\in \bigcup_{j=0,1}\Lambda_j} \left\lvert s_i-t_i \right\rvert^{\alpha_i}\leq 1-r(\boldsymbol{s},\boldsymbol{t}) \leq \mathcal{Q}_2\sum_{i=1}^n \left\lvert s_i-t_i \right\rvert^{\alpha_i}.\end{eqnarray}

Furthermore, suppose that, for $\boldsymbol{s},\boldsymbol{t}\in\mathcal{A}$ and $\boldsymbol{s}\neq \boldsymbol{t}$ ,

(2.5) \begin{eqnarray}r(\boldsymbol{s},\boldsymbol{t})<1.\end{eqnarray}

(A2) Assume that

(2.6) \begin{eqnarray}\lim_{\delta\rightarrow 0}\sup\nolimits_{\boldsymbol{z}\in\mathcal{M}^*}\underset{\left\lvert \boldsymbol{z}-\boldsymbol{t} \right\rvert\leq\delta}{\sup\nolimits_{\boldsymbol{t} \in \mathcal{A}}}\left\lvert \frac{ 1- \sigma(\boldsymbol{t})}{\sum_{j=1}^3p_j(\tilde{\boldsymbol{z}})g_j(\bar{\boldsymbol{t}}_j)}- 1 \right\rvert=0,\end{eqnarray}

where $p_j(\tilde{\boldsymbol{t}}),\ \tilde{\boldsymbol{t}}\in[0,\infty)^{k_0}, j=1,2,3,$ are positive continuous functions and $g_j(\bar{\boldsymbol{t}}_j),\bar{\boldsymbol{t}}_j\in\mathbb{R}^{k_j-k_{j-1}}, j=1,2,3$ , are continuous functions satisfying $g_j(\bar{\boldsymbol{t}}_j)>0$ for $\bar{\boldsymbol{t}}_j\neq \overline{\textbf{0}}_j$ , $j=1,2,3$ . Moreover, we assume the following homogeneity property on the $g_j$ : there exist some $\boldsymbol{\beta}_j=(\beta_{i})_{i\in \Lambda_j}, j=1,2,3$ with $\beta_k > 0, k\in \bigcup_{j=1,2,3}\Lambda_j, $ such that, for any $u>0$ ,

(2.7) \begin{eqnarray}u g_j(\bar{\boldsymbol{t}}_j)=g_j(u^{1/{ \boldsymbol{\beta}_{j}}}\bar{\boldsymbol{t}}_{j}), \quad j=1,2,3.\end{eqnarray}

Moreover, with $\boldsymbol{\alpha}_j=(\alpha_{i})_{i\in \Lambda_j}, j=1,2,3$ ,

(2.8) \begin{align}\boldsymbol{\alpha}_1<\boldsymbol{\beta}_1, \quad \boldsymbol{\alpha}_2=\boldsymbol{\beta}_2, \quad\text{and}\quad \boldsymbol{\alpha}_3>\boldsymbol{\beta}_3.\end{align}
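
For instance, the prototypical choice $g_j(\bar{\boldsymbol{t}}_j)=\sum_{i\in \Lambda_j} b_i t_i^{\beta_i}$ with $b_i>0$ (compare (2.10) below) satisfies the homogeneity property (2.7), since

\begin{align*}g_j\big(u^{1/\boldsymbol{\beta}_j}\bar{\boldsymbol{t}}_{j}\big)=\sum_{i\in \Lambda_j} b_i \big(u^{1/\beta_i}t_i\big)^{\beta_i}=u\sum_{i\in \Lambda_j} b_i t_i^{\beta_i}=u g_j(\bar{\boldsymbol{t}}_j).\end{align*}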

Assumption (A1), which includes (2.1)–(2.5), addresses the local dependence structure of the analyzed Gaussian field in a neighborhood of the set $\mathcal{M}^*$ of points that maximize the variance of X. The function $\boldsymbol{a}(\cdot)$ can be modified based on the location where the correlation is being tested. Property (2.3) refers to the self-similarity of $W(\cdot)$ with respect to each coordinate. In comparison to models previously discussed in the literature, the major novelty of (A1) lies in the fact that we do not assume homogeneity of the increments of $W(\cdot)$ with respect to the coordinates in $\Lambda_2\cup \Lambda_3$ . This enables us to examine dependence structures of $X(\cdot)$ that extend beyond local stationarity. Assumption (A2), which includes (2.6)–(2.8), addresses the behavior of the variance function of $X(\cdot)$ in the vicinity of $\mathcal{M}^*$ . Property (2.8) straightforwardly corresponds to the three scenarios described in (1.5) in the introduction.

We are now ready to state the main result of this paper. In what follows, $\Psi(\cdot)$ denotes the tail distribution function of a standard normal random variable.

Theorem 2.1. Suppose that $X(\boldsymbol{t}),\ \boldsymbol{t}\in \mathcal{A}$ is an n-dimensional centered Gaussian random field satisfying (A1) and (A2). Then, as $u\to\infty$ ,

\begin{eqnarray*}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in\mathcal{A}}X(\boldsymbol{t})>u \right \} \sim C u^{\sum_{i\in \Lambda_0\cup\Lambda_1}{2}/{\alpha_i}-\sum_{i\in \Lambda_1}{2}/{\beta_i}}\Psi(u),\end{eqnarray*}

where

\begin{eqnarray*}C= \int_{\mathcal{M}}\left(\mathcal{H}_W^{p_2(\tilde{\boldsymbol{z}})g_2(\boldsymbol{a}_2^{-1}(\tilde{\boldsymbol{z}})\bar{\boldsymbol{t}}_2)} \left(\prod_{i\in \Lambda_0\cup\Lambda_1}|a_i(\tilde{\boldsymbol{z}})|\right) \int_{\bar{\boldsymbol{t}}_1\in[0,\infty)^{k_1-k_0}} {\mathrm{e}}^{-p_1(\tilde{\boldsymbol{z}})g_1(\bar{\boldsymbol{t}}_1)}\,{\mathrm{d}}\bar{\boldsymbol{t}}_1\right) {\mathrm{d}}\tilde{\boldsymbol{z}} \in (0,\infty),\end{eqnarray*}

with $\boldsymbol{a}_2(\tilde{\boldsymbol{z}})=(a_i(\tilde{\boldsymbol{z}}))_{i\in \Lambda_2}$ and

\begin{eqnarray*}\mathcal{H}_W^{p_2(\tilde{\boldsymbol{z}})g_2(\boldsymbol{a}_2^{-1}(\tilde{\boldsymbol{z}})\bar{\boldsymbol{t}}_2)}= \lim_{\lambda\rightarrow\infty}\frac{1}{\lambda^{k_1}}\mathbb{E}\left\{\sup\nolimits_{t_i\in[0,\lambda],\ i\in \bigcup_{j=0}^2\Lambda_j;\ t_i=0, i\in \Lambda_3}{\mathrm{e}}^{ \sqrt{2}W(\boldsymbol{t})-\sigma^2_W(\boldsymbol{t})-p_2(\tilde{\boldsymbol{z}})g_2(\boldsymbol{a}_2^{-1}(\tilde{\boldsymbol{z}})\bar{\boldsymbol{t}}_2)}\right\}.\end{eqnarray*}

Remark 1. The result in Theorem 2.1 is also valid if some $\Lambda_i,\ i=0,1,2,3$ are empty sets.

Next, let us consider a special case of Theorem 2.1 that focuses on the locally stationary structure of the correlation function of $X(\cdot)$ in the neighborhood of ${\mathcal{M}^*}$ , which partially generalizes Theorems 7.1 and 8.1 of [Reference Piterbarg31]. Suppose that

(2.9) \begin{align} a_i(\tilde{\boldsymbol{z}})\equiv a_i, \qquad \tilde{\boldsymbol{z}}\in \mathcal{M}, \ i=1,\dots,n,\qquad p_j(\tilde{\boldsymbol{z}})\equiv 1,\qquad \tilde{\boldsymbol{z}}\in \mathcal{M}, \ j=1,2,3, \end{align}
(2.10) \begin{align} \mathbb{E}\left\{\left(W(\boldsymbol{s})-W(\boldsymbol{t})\right)^2\right\}= \sum_{i=1}^{n}|s_i-t_i|^{\alpha_i}\quad\text{and}\quad g_j(\bar{\boldsymbol{t}}_j)=\sum_{i\in \Lambda_j} b_it_i^{\beta_i}, \quad j=1,2,3. \end{align}

These conditions, along with assumptions (A1) and (A2), lead to a natural set of models that satisfy an additive structure as in (1.4) and (1.5) and were considered by Piterbarg [Reference Piterbarg31]. We note that in [Reference Piterbarg31] the special cases of purely homogeneous fields, characterized by a constant variance function where $\Lambda_1 = \Lambda_2 = \Lambda_3 = \emptyset$ , and fields that have a unique maximizer of the variance function ( $\Lambda_0 = \emptyset$ ), are analyzed separately. In the proposition below, we allow mixed scenarios where all sets $\Lambda_0, \Lambda_1, \Lambda_2, \Lambda_3 \neq \emptyset$ .

Let $\Gamma(x)=\int_{0}^{\infty} s^{x-1}{\mathrm{e}}^{-s} \,{\mathrm{d}} s$ for $x>0$ . For $\alpha\in (0,2]$ , $\lambda>0$ and $b>0$ , we define Pickands and Piterbarg constants as

(2.11) \begin{align}\mathcal{H}_{B^{\alpha}}[0,\lambda] &=\mathbb{E}\left\{\sup\nolimits_{t\in [0,\lambda]} {\mathrm{e}}^{ \sqrt{2}B^\alpha (t)-t^\alpha}\right\}, \qquad \mathcal{H}_{B^{\alpha}}=\lim_{\lambda\rightarrow\infty}\frac{\mathcal{H}_{B^{\alpha}}[0,\lambda]}{\lambda},\nonumber\\\mathcal{P}_{B^{\alpha}}^{b}[0,\lambda]&=\mathbb{E}\left\{\sup\nolimits_{t\in [0,\lambda]} {\mathrm{e}}^{ \sqrt{2}B^\alpha (t)-(1+b)t^\alpha}\right\}, \qquad\mathcal{P}_{B^{\alpha}}^{b}=\lim_{\lambda\rightarrow\infty}\mathcal{P}_{B^{\alpha}}^{b}[0,\lambda],\end{align}

where $B^{\alpha}$ is a standard fractional Brownian motion with zero mean and covariance

\begin{align*}{\rm cov}(B^{\alpha}(s),B^{\alpha}(t))=\frac{|t|^\alpha+|s|^{\alpha}-|t-s|^{\alpha}}{2}, \quad s,t\geq 0.\end{align*}

For properties of Pickands and Piterbarg constants, we refer the reader to [Reference Piterbarg31] and the references listed therein.
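
The prelimit quantities $\mathcal{H}_{B^{\alpha}}[0,\lambda]$ and $\mathcal{P}_{B^{\alpha}}^{b}[0,\lambda]$ in (2.11) can be approximated numerically by discretizing the fractional Brownian motion. The sketch below is a crude Monte Carlo illustration only (grid size, sample size, and the Cholesky jitter are numerical conveniences, and the estimate targets the prelimit functional, not the limit constant):

```python
import numpy as np

def fbm_cov(t, alpha):
    """cov(B^alpha(s), B^alpha(u)) = (s^alpha + u^alpha - |s-u|^alpha)/2 on a grid t."""
    s, u = t[:, None], t[None, :]
    return 0.5 * (s ** alpha + u ** alpha - np.abs(s - u) ** alpha)

def pickands_mc(alpha, lam, m=200, n_mc=4000, seed=0):
    """Crude Monte Carlo estimate of H_{B^alpha}[0, lam] from (2.11):
    E sup_{t in [0, lam]} exp(sqrt(2) B^alpha(t) - t^alpha)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, lam, m)[1:]                 # drop t = 0 (singular row)
    L = np.linalg.cholesky(fbm_cov(t, alpha) + 1e-12 * np.eye(t.size))
    paths = rng.normal(size=(n_mc, t.size)) @ L.T    # discretized fBm sample paths
    sup = np.maximum(np.exp(np.sqrt(2.0) * paths - t ** alpha).max(axis=1), 1.0)
    return sup.mean()                                # t = 0 contributes e^0 = 1

print(pickands_mc(1.0, 1.0))   # rough estimate of H_{B^1}[0, 1]
```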

The following proposition straightforwardly follows from Theorem 2.1.

Proposition 2.1. Under the assumptions of Theorem 2.1, if (2.9)–(2.10) hold, then

\begin{eqnarray*}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in\mathcal{A}}X(\boldsymbol{t})>u \right \} \sim C u^{\sum_{i\in \Lambda_0\cup\Lambda_1}{2}/{\alpha_i}-\sum_{i\in \Lambda_1}{2}/{\beta_i}}\Psi(u),\end{eqnarray*}

where

\begin{eqnarray*}C=v_{k_0}(\mathcal{M})\left(\prod_{i\in \Lambda_0\cup\Lambda_1} a_i\mathcal{H}_{B^{\alpha_i}}\right) \left(\prod_{i\in\Lambda_1}b_i^{-1/\beta_i}\Gamma\bigg(\frac{1}{\beta_i}+1\bigg)\right)\prod_{i\in \Lambda_2} \mathcal{P}_{B^{\alpha_i}}^{a_i^{-\beta_i}b_i}.\end{eqnarray*}
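
In the constant C of Proposition 2.1, the factor $b_i^{-1/\beta_i}\Gamma({1}/{\beta_i}+1)$ contributed by each coordinate $i\in\Lambda_1$ is the closed form of $\int_0^\infty {\mathrm{e}}^{-b_i t^{\beta_i}}\,{\mathrm{d}} t$ (substitute $s=b_i t^{\beta_i}$). A quick numerical check of this identity, with illustrative values $b=2$, $\beta=1.5$:

```python
import math
import numpy as np

# Midpoint-rule approximation of int_0^infty exp(-b t^beta) dt on [0, 40]
# (the integrand is negligible beyond t = 40 for these parameter values).
b, beta = 2.0, 1.5
h = 40.0 / 2_000_000
tm = (np.arange(2_000_000) + 0.5) * h
numeric = h * np.sum(np.exp(-b * tm ** beta))
closed = b ** (-1.0 / beta) * math.gamma(1.0 / beta + 1.0)
print(numeric, closed)
```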

3. Applications

In this section we illustrate our main results by applying Theorem 2.1 to two classes of Gaussian fields with nonstandard structures of their correlation function.

3.1. The performance table and the largest eigenvalue of the GUE matrix

Let

(3.1) \begin{eqnarray}Z^{\alpha}(\boldsymbol{t})\,:\!=\,\sum_{i=1}^{n+1}a_i \left(B^{\alpha}_{i}(t_i)-B^{\alpha}_{i}(t_{i-1})\right),\qquad \boldsymbol{t}=(t_1,\ldots, t_{n}),\end{eqnarray}

where $t_0=0,t_{n+1}=1$ and $B^{\alpha}_{i},\ i=1,\ldots, n+1$ are mutually independent fractional Brownian motions with Hurst index $\alpha/2\in(0,1)$ and $a_i>0,\ i=1,\ldots, n+1$ . We are interested in the asymptotics of

(3.2) \begin{eqnarray}\mathbb{P} \left \{D_n^\alpha>u \right \} =\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in\mathcal{S}_{n}}Z^\alpha(\boldsymbol{t})>u \right \}\end{eqnarray}

for large u, where $\mathcal{S}_{n}=\{\boldsymbol{t}\in \mathbb{R}^n\colon 0\leq t_1\leq\cdots\leq t_n\leq 1\}$ . Without loss of generality, we assume that $\max_{i=1,\dots, n+1} a_i=1$ .
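
For the Brownian case $\alpha=1$, the supremum in (3.2) can be evaluated on a grid by the dynamic program $V_j(t)=\max_{s\le t}\{V_{j-1}(s)+a_j(B_j(t)-B_j(s))\}$, which tracks the best ordered choice $t_1\le\cdots\le t_j$ with $t_j=t$. The following Monte Carlo sketch is purely illustrative (the grid discretization and sample sizes are ad hoc), not the method used in the proofs:

```python
import numpy as np

def dp_sup(a, B):
    """Max over grid points 0 <= t_1 <= ... <= t_n <= 1 of
    sum_i a_i (B_i(t_i) - B_i(t_{i-1})), with t_0 = 0, t_{n+1} = 1.
    a: n+1 weights; B: (n+1, m) array of paths on a uniform grid, B[:, 0] = 0."""
    n = len(a) - 1
    V = a[0] * B[0]                     # V_1(t) = a_1 B_1(t)
    for j in range(1, n):               # V_j = a_j B_j + running max of (V_{j-1} - a_j B_j)
        V = a[j] * B[j] + np.maximum.accumulate(V - a[j] * B[j])
    return np.max(V + a[n] * (B[n, -1] - B[n]))   # close with a_{n+1}(B_{n+1}(1) - B_{n+1}(t_n))

def sample_D1(a, m, rng):
    """One sample of D_n^1 (alpha = 1: ordinary Brownian motions) on an m-point grid."""
    dt = 1.0 / (m - 1)
    inc = rng.normal(0.0, np.sqrt(dt), size=(len(a), m - 1))
    B = np.concatenate([np.zeros((len(a), 1)), np.cumsum(inc, axis=1)], axis=1)
    return dp_sup(a, B)

rng = np.random.default_rng(0)
est = np.mean([sample_D1(np.array([1.0, 1.0, 1.0]), 400, rng) for _ in range(500)])
print(est)   # crude estimate of E[D_2^1]
```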

The random variable $D_n^\alpha$ arises in many problems that are important in both theoretical and applied probability. Specifically, it is closely related to the notion of the performance table. More precisely, following [Reference Baryshnikov6], let $\boldsymbol{w}=(w_{ij}), i,j\geq 1$ be a family of independent random values indexed by the integer points of the first quarter of the plane. A monotonous path $\pi$ from (i,j) to $(i',j'), i\leq i'; j\leq j'; i,j,i',j'\in\mathbb{N}$ is a sequence $(i,j)=(i_0,j_0), (i_1,j_1),\ldots, (i_l,j_l)=(i',j')$ of length $k=i'+j'-i-j+1$ , such that all lattice steps $(i_k,j_k)\rightarrow (i_{k+1},j_{k+1})$ are of size one and (consequently) go to the north or the east. The weight $\boldsymbol{w}(\pi)$ of such a path is the sum of all entries of the array $\boldsymbol{w}$ along the path. We define the performance table $l(i,j), i,j \in \mathbb{N}$ as the array of largest path weights from (1, 1) to (i, j), that is,

\begin{eqnarray*}l(i,j)=\max_{\pi \ \text{from}\ (1,1)\ \text{to}\ (i,j)} \boldsymbol{w}(\pi).\end{eqnarray*}

If $\text{Var}(w_{ij})\equiv v>0$ and $\mathbb{E}\left\{w_{ij}\right\}\equiv e$ for all i, j, then

\begin{eqnarray*}D_{n,k}\,:\!=\,\frac{l(n+1,k)-ke}{\sqrt{k v}}\end{eqnarray*}

converges in law as $k\rightarrow\infty$ to $D_n^1$ with $a_i\equiv 1$ ; see [Reference Baryshnikov6]. Notably, $D_n^1$ has a queueing interpretation, e.g. in the analysis of departure times from queues in series [Reference Glynn and Whitt16] and plays an important role in the analysis of noncolliding Brownian motions [Reference Grabiner17]. Moreover, as observed in [Reference Baryshnikov6], if $a_i\equiv 1$ then $D^1_n$ has the same law as the largest eigenvalue of an n-dimensional GUE random matrix; see [Reference O’Connell29].
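
The largest path weight l(i, j) admits the standard last-passage recursion $l(i,j)=w_{ij}+\max (l(i-1,j),\ l(i,j-1))$, since a monotone path enters (i, j) from the south or the west. A minimal sketch (0-based indices here, whereas the paper indexes from (1, 1)):

```python
import numpy as np

def performance_table(w):
    """l[i, j] = largest weight of a monotone (north/east) path from (0, 0) to (i, j)."""
    m, k = w.shape
    l = np.empty((m, k))
    for i in range(m):
        for j in range(k):
            prev = max(l[i - 1, j] if i > 0 else -np.inf,
                       l[i, j - 1] if j > 0 else -np.inf)
            l[i, j] = w[i, j] + (prev if np.isfinite(prev) else 0.0)
    return l

w = np.arange(9, dtype=float).reshape(3, 3)   # illustrative deterministic weights
print(performance_table(w)[2, 2])             # prints 24.0: path (0,0),(1,0),(2,0),(2,1),(2,2)
```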

Let

(3.3) \begin{eqnarray}\mathcal{N}=\{i\colon a_i=1,\ i=1,\ldots, n+1\},\,\,\mathcal{N}^c=\{i\colon a_i < 1,\ i=1,\ldots, n+1\},\,\, \mathfrak{m}=\#\mathcal{N}\!,\end{eqnarray}

where $\#\mathcal{N}$ denotes the cardinal number of $\mathcal{N}$ . For $k^*=\max\{i\in\mathcal{N}\}$ and ${\bf{x}}=(x_1,\ldots, x_{k^*-1}, x_{k^*+1},\ldots, x_{n+1})$ , we define

(3.4) \begin{align}W({\bf{x}})&=\frac{\sqrt{2}}{2}\sum_{i\in\mathcal{N}}(B_i(s_i({\bf{x}}))-\widetilde{B}_i(s_{i-1}({\bf{x}})))+\frac{\sqrt{2}}{2}\sum_{i\in\mathcal{N}^c}a_i\left(B_{i}(s_i({\bf{x}}))-B_i(s_{i-1}({\bf{x}}))\right)\!,\end{align}

where $B_i, \widetilde{B}_i$ are independent standard Brownian motions and

\begin{eqnarray*}s_i({\bf{x}})=\left\{\begin{array}{l@{\quad}l}x_i& \text{if}\ i\in \mathcal{N}\ \text{and}\ i < k^*,\\\displaystyle\sum_{j=\max\{k\in\mathcal{N}:k < i\}}^{i}x_j& \text{if}\ i\in\mathcal{N}^c\ \text{and}\ i < k^*,\\\displaystyle\sum_{j=i+1}^{n+1}x_j& \text{if}\ i\geq k^*,\end{array}\right.\end{eqnarray*}

with the convention that $\max\emptyset=1$ .

For $\mathfrak{m}$ given in (3.3), let

(3.5) \begin{eqnarray}\mathcal{H}_W\,:\!=\,\lim_{\lambda\rightarrow\infty}\frac{1}{\lambda^{\mathfrak{m}-1}}\mathbb{E}\bigg\{\sup\nolimits_{{\bf{x}}\in[0,\lambda]^n}{\mathrm{e}}^{ \sqrt{2}W({\bf{x}})-(\underset{i\neq k^*}{\sum_{i=1}^{n+1}}x_i)}\bigg\}.\end{eqnarray}

It turns out that, for $\alpha=1$ and $\mathfrak{m} < n+1$ , the field $Z^1$ satisfies (A1) with W as given in (3.4). Notably, W has stationary increments with respect to the coordinates in $\mathcal{N}$ , while its increments are not stationary with respect to the coordinates in $\mathcal{N}^c$ ; see (B.11) in the proof of the following proposition. Moreover, we have $\Lambda_0=\mathcal{N}$ , $\Lambda_1=\emptyset$ , $\Lambda_2=\mathcal{N}^c$ , $\Lambda_3=\emptyset$ .

Proposition 3.1. For $Z^{\alpha}$ defined in (3.1), we have, as $u\rightarrow\infty$ ,

\begin{eqnarray*}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in\mathcal{S}_{n}}Z^\alpha(\boldsymbol{t}) > u \right \} \sim\left\{\begin{array}{ll} \displaystyle{C u^{(({2}/{\alpha})-1)n}\Psi\left(\frac{u}{\sigma_*}\right)}, & \alpha\in(0,1),\\[8pt] \displaystyle\frac{1}{(\mathfrak{m}-1)!}\mathcal{H}_Wu^{2(\mathfrak{m}-1)}\Psi(u),& \alpha=1,\\[8pt] \mathfrak{m}\Psi(u), & \alpha\in(1,2),\end{array}\right.\end{eqnarray*}

where $\sigma_*=(\sum_{i=1}^{n+1}a_i^{{2}/{(1-\alpha)}})^{{(1-\alpha)}/{2}}$ and

\begin{align*} C&= \left( \mathcal{H}_{B^{\alpha}}\right)^n\left(\prod_{i=1}^n \left(a_i^2+a_{i+1}^2\right)^{{1}/{\alpha}}\right)2^{(1-{1}/{\alpha})n} \left(\frac{\pi}{\alpha(1-\alpha)}\right)^{{n}/{2}} \\ &\quad\times\sigma_{*}^{{-(\alpha-2)^2 n}/{(1-\alpha)\alpha}}\left(\sum_{j=1}^{n+1}\prod_{i\neq j}a_i^{{2}/{(\alpha-1)}}\right)^{-{1}/{2}}.\end{align*}

Remark 3.1.

  (i) If $1\leq \mathfrak{m}\leq n$ , then $1 \le \mathcal{H}_W\leq n^{\mathfrak{m}-1}\prod_{i\in\mathcal{N}^c}(1+{2n}/{(1-a_i^2)}).$

  (ii) If $\mathfrak{m}=n+1$ , then $\mathcal{H}_W=1$ .

To prove Proposition 3.1, we distinguish three scenarios based on the value of $\alpha$ : $\alpha \in (0,1)$ , $\alpha = 1$ , and $\alpha \in (1,2)$ . The cases $\alpha\in(0,1)$ and $\alpha\in(1,2)$ can be derived from [Reference Piterbarg31, Theorem 8.2], where the maximum of the variance function of $Z^\alpha$ is attained at a finite number of points. The case $\alpha=1$ fundamentally differs from the abovementioned cases. This is because, depending on the values of $a_i$ , the maximum of the variance function of $Z^1$ is attained on a set that has a positive Lebesgue measure of dimension $\mathfrak{m}-1$ , with $\mathfrak{m}$ defined in (3.3), and the corresponding correlation function is not locally stationary in the vicinity of this set. We apply Theorem 2.1 in this case. The detailed proofs of Proposition 3.1 and Remark 3.1 are postponed to Appendix B and Appendix C, respectively.

3.2. Chi processes

Consider a chi process

(3.6) \begin{align} \chi(t)\,:\!=\,\sqrt{\sum_{i=1}^{n}X_i^2(t)}, \quad t\in [0,1], \end{align}

where $X_i(t)$ , $i=1,\ldots ,n$ , are independent and identically distributed (i.i.d.) copies of $\{X(t), t\in[0,1]\}$ , a centered Gaussian process with almost surely (a.s.) continuous trajectories. Suppose that

(3.7) \begin{align}\sigma_X(t)=\frac{1}{1+bt^\alpha}, \quad t\in [0,1] \text{ for } b>0\end{align}

and

(3.8) \begin{align}1-\mathrm{corr}(X(s),X(t))\sim a{\mathrm{Var}}(Y(t)-Y(s)), \quad s,t\to 0 \text{ for }a>0,\end{align}

where $\{Y(t),t\geq 0\}$ is a centered Gaussian process with a.s. continuous trajectories satisfying:

(B1) $\{Y(t),t\geq 0\}$ is self-similar with index $\alpha/2\in (0,1)$ (i.e. for all $r>0$ , $\{Y(rt),t\geq 0\}\buildrel d \over =\{r^{\alpha/2}Y(t),t\geq 0\},$ where $\buildrel d \over =$ means the equality of finite dimensional distributions) and $\sigma_Y(1)=1$ ;

(B2) there exist $c_Y>0$ and $\gamma \in [\alpha, 2]$ such that

\begin{align*}{\mathrm{Var}}(Y(1)-Y(t))\sim c_Y|1-t|^\gamma, \quad t\uparrow 1.\end{align*}

The class of processes that satisfy conditions (B1) and (B2) includes fractional Brownian motions, bifractional Brownian motions (see, e.g., [Reference Houdré and Villa20, Reference Lei and Nualart22]), subfractional Brownian motions (see, e.g., [Reference Bojdecki, Gorostiza and Talarczyk8, Reference Dzhaparidze and Zanten14]), dual-fractional Brownian motions (see, e.g., [Reference Li and Shao23]) and the time average of fractional Brownian motions (see, e.g., [Reference Dębicki13, Reference Li and Shao23]).
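As an informal check, conditions (B1) and (B2) hold in closed form for fractional Brownian motion with Hurst index $\alpha/2$, for which $c_Y=1$ and $\gamma=\alpha$. The snippet below confirms this at the level of covariances; the parameter values are arbitrary.

```python
import numpy as np

def fbm_cov(s, t, alpha):
    """Covariance of fractional Brownian motion with Hurst index alpha/2."""
    return 0.5 * (s ** alpha + t ** alpha - abs(s - t) ** alpha)

alpha, r, s, t = 1.2, 3.7, 0.4, 0.9

# (B1): self-similarity with index alpha/2, seen on covariances, and sigma_Y(1) = 1.
assert np.isclose(fbm_cov(r * s, r * t, alpha), r ** alpha * fbm_cov(s, t, alpha))
assert np.isclose(fbm_cov(1.0, 1.0, alpha), 1.0)

# (B2): Var(Y(1) - Y(t)) = |1 - t|^alpha exactly, so c_Y = 1 and gamma = alpha.
for tt in (0.99, 0.999):
    var_inc = fbm_cov(1.0, 1.0, alpha) - 2 * fbm_cov(1.0, tt, alpha) + fbm_cov(tt, tt, alpha)
    assert np.isclose(var_inc, abs(1.0 - tt) ** alpha)
```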

For a Gaussian process Y satisfying (B1) and (B2) and $b>0$ , we introduce a generalized Piterbarg constant

(3.9) \begin{align} \mathcal{P}_{Y}^{b}=\lim_{S\to\infty}\mathbb{E}\left\{\sup\nolimits_{t\in [0,S]}{\mathrm{e}}^{\sqrt{2}Y(t)-(1+b)t^{\alpha}}\right\}\in (0, \infty). \end{align}

We refer the reader to [Reference Dębicki13] for the properties of this constant.
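For intuition, (3.9) can be approximated by crude Monte Carlo for the simplest admissible choice of Y, a standard Brownian motion (so $\alpha=1$). The sketch below truncates the horizon to $[0,S]$ and discretizes time, so its output only approximates the limit; all numerical parameters are arbitrary.

```python
import numpy as np

def piterbarg_bm(b, S=20.0, n_steps=2000, n_paths=1000, seed=1):
    """Monte Carlo approximation of E sup_{t in [0,S]} exp(sqrt(2) B(t) - (1+b) t)
    for a standard Brownian motion B, i.e. (3.9) with Y = B and alpha = 1,
    on a truncated, discretized horizon."""
    rng = np.random.default_rng(seed)
    dt = S / n_steps
    t = np.linspace(0.0, S, n_steps + 1)
    incr = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
    B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(incr, axis=1)], axis=1)
    sup = np.max(np.exp(np.sqrt(2.0) * B - (1.0 + b) * t), axis=1)
    return sup.mean()

est = piterbarg_bm(b=1.0)
print(est)  # at least 1, since the t = 0 term alone contributes exp(0) = 1
```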

The literature on the asymptotics of

(3.10) \begin{eqnarray}\mathbb{P} \left \{\sup\nolimits_{t\in[0,1]}\chi(t)>u \right \}\end{eqnarray}

as $u\to\infty$ focuses on the scenario where Y in (3.8) is a fractional Brownian motion. Then, $1-r(s,t)\sim a|t-s|^\alpha$ as $s,t\to 0$ for some $\alpha \in (0,2]$ , which implies that the correlation function of X is locally homogeneous at 0; see, e.g., [Reference Hashorva and Ji19, Reference Liu and Ji28, Reference Piterbarg30, Reference Piterbarg32]. In the following proposition, Y represents a general self-similar Gaussian process that satisfies conditions (B1) and (B2). This framework allows for locally nonhomogeneous structures of the correlation function of X, which have not been previously explored in the literature.

The idea of deriving the asymptotics of (3.10) is based on transforming it into the supremum of a Gaussian random field over a sphere; see [Reference Fatalov15, Reference Piterbarg30, Reference Piterbarg32]. More specifically, we use the fact that

\begin{align*}\sup\nolimits_{t\in [0,1]}\chi(t)=\sup\nolimits_{t\in [0,1], \sum_{i=1}^{n}v_i^2=1} \sum_{i=1}^nX_i(t)v_i.\end{align*}
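This identity is an instance of the Cauchy–Schwarz duality $\|x\|=\sup_{\|v\|=1}\langle x,v\rangle$, and can be checked numerically on a discretized sample; the Gaussian data below are synthetic and the grid sizes arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))  # row t holds (X_1(t), ..., X_n(t)) with n = 3

# Left-hand side: sup_t chi(t) = sup_t ||X(t)||.
chi_sup = np.linalg.norm(X, axis=1).max()

# Right-hand side: sup over t and unit vectors v of <X(t), v>.  By Cauchy-Schwarz
# the inner supremum is attained at v = X(t)/||X(t)||, so those candidates suffice.
V = X / np.linalg.norm(X, axis=1, keepdims=True)
bilinear_sup = (X @ V.T).max()

assert np.isclose(chi_sup, bilinear_sup)
```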

Next, we transform the Euclidean coordinates into spherical coordinates,

\begin{align*}v_1(\boldsymbol{\theta})=\cos\!(\theta_1), \quad v_2(\boldsymbol{\theta})=\sin\!(\theta_1)\cos\!(\theta_2), \dots,v_n(\boldsymbol{\theta})=\prod_{i=1}^{n-1}\sin\!(\theta_i),\end{align*}

where $\boldsymbol{\theta}=(\theta_1,\dots,\theta_{n-1})$ and $\boldsymbol{\theta}\in [0,\pi]^{n-2}\times [0,2\pi)$ . For

(3.11) \begin{eqnarray}Z(\boldsymbol{\theta},t)=\sum_{i=1}^{n} X_i(t) v_i(\boldsymbol{\theta}), \quad \boldsymbol{\theta}\in [0,\pi]^{n-2}\times [0,2\pi), \, t\in [0,1],\end{eqnarray}

we have

\begin{align*}\sup\nolimits_{t\in [0,1]}\chi(t)=\sup\nolimits_{(\boldsymbol{\theta}, t)\in E}Z(\boldsymbol{\theta},t)\quad\text{with } E=[0,\pi]^{n-2}\times [0,2\pi)\times[0,1].\end{align*}

Consequently,

(3.12) \begin{align}\mathbb{P}\left(\sup\nolimits_{t\in [0,1]}\chi(t)>u\right)=\mathbb{P}\left(\sup\nolimits_{(\boldsymbol{\theta}, t)\in E}Z(\boldsymbol{\theta},t)>u\right)\!.\end{align}
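The spherical parametrization used above can be sanity-checked numerically: the helper below (a hypothetical implementation of the map $\boldsymbol{\theta}\mapsto\boldsymbol{v}(\boldsymbol{\theta})$) always returns a unit vector, since the squared components telescope to 1.

```python
import numpy as np

def spherical_to_cartesian(theta):
    """Map angles (theta_1, ..., theta_{n-1}) to a unit vector in R^n:
    v_j = sin(theta_1)...sin(theta_{j-1}) cos(theta_j) for j < n,
    v_n = sin(theta_1)...sin(theta_{n-1})."""
    theta = np.asarray(theta, dtype=float)
    n = theta.size + 1
    v = np.empty(n)
    prefix = 1.0
    for j in range(n - 1):
        v[j] = prefix * np.cos(theta[j])
        prefix *= np.sin(theta[j])
    v[n - 1] = prefix
    return v

rng = np.random.default_rng(2)
for _ in range(100):
    theta = rng.uniform(0.0, np.pi, size=4)  # n = 5
    assert np.isclose(np.sum(spherical_to_cartesian(theta) ** 2), 1.0)
```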

Then, it appears that the Gaussian field Z satisfies the assumptions of Theorem 2.1 with W in (2.2) and (2.3) given by

\begin{align*}W(\boldsymbol{\theta}, t)=\sum_{i=1}^{n-1}B_i^2(\theta_i)+\sqrt{a}Y(t), \quad (\boldsymbol{\theta}, t)\in \mathbb{R}^{n-1}\times \mathbb{R}^+,\end{align*}

where $B_i^2$ are independent fractional Brownian motions with index 2 and Y is a self-similar Gaussian process as described in (3.8) that is independent of $B_i^2$ . Importantly, if Y is not a fractional Brownian motion then W, as defined above, does not have stationary increments with respect to the coordinate t. Moreover, $\Lambda_0=\{1,\dots, n-1\}$ , $\Lambda_1=\emptyset$ , $\Lambda_2=\{n\}$ , $\Lambda_3=\emptyset$ . An application of Theorem 2.1 leads to the following result.

Proposition 3.2. For $\chi$ defined in (3.6) with X satisfying (3.7) and (3.8), we have

\begin{eqnarray*}\mathbb{P} \left \{\sup\nolimits_{t\in[0,1]}\chi(t)>u \right \} \sim \frac{2^{{(3-n)}/{2}}\sqrt{\pi}}{\Gamma(n/2)}\mathcal{P}_{Y}^{a^{-1}b}u^{n-1}\Psi(u), \quad u\to\infty,\end{eqnarray*}

where $\mathcal{P}_Y^{a^{-1}b}$ is defined in (3.9).

The proof of Proposition 3.2 is postponed to Appendix D.
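For orientation, the dimension-dependent prefactor $2^{(3-n)/2}\sqrt{\pi}/\Gamma(n/2)$ in Proposition 3.2 is easy to evaluate numerically; for instance, it equals 2 when $n=1$ and $\sqrt{2\pi}$ when $n=2$.

```python
import math

def chi_prefactor(n):
    """Constant 2^{(3-n)/2} * sqrt(pi) / Gamma(n/2) from Proposition 3.2."""
    return 2 ** ((3 - n) / 2) * math.sqrt(math.pi) / math.gamma(n / 2)

print(chi_prefactor(2))  # sqrt(2 * pi), approximately 2.5066
```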

4. Proof of Theorem 2.1

The idea of the proof of Theorem 2.1 is based on Piterbarg’s methodology [Reference Piterbarg31] combined with some refinements developed in [Reference Dębicki, Hashorva and Liu12]. The proof is divided into three steps. In the first step, we demonstrate that the supremum of X(t) over $\mathcal{A}$ is primarily achieved on a specific subset. In the second step, we divide this subset into smaller hyperrectangles with sizes adjusted according to u. Then, we uniformly derive the tail probability asymptotics on each hyperrectangle. This part of the proof utilizes an adapted version of Theorem 2.1 from [Reference Dębicki, Hashorva and Liu12] (see Lemma 4.1 in Section 4.1). We first scale the parameter set appropriately to ensure that the rescaled hyperrectangles are independent of u. As a result, the scaled processes, denoted by $X_{u,\boldsymbol{l}}(\cdot)$ , depend on both u and the position $\boldsymbol{l}$ of the hyperrectangle (see (4.5) in conjunction with (4.6)). Then we apply Lemma 4.1 to $X_{u,\boldsymbol{l}}(\cdot)$ . The upper bound for the analyzed asymptotic probability is the sum of the asymptotics over the corresponding hyperrectangles. For the lower bound, we apply the Bonferroni inequality, where the additional sum of the double high-exceedance probabilities of X over all pairs of hyperrectangles is tightly bounded. Finally, the third step sums the asymptotics from the second step to obtain the overall asymptotics.

We denote by $\mathbb{Q}$ and $\mathbb{Q}_i$ , for $i=1,2,3,\dots$ , positive constants that may vary from line to line.

4.1. An adapted version of Theorem 2.1 in [Reference Dębicki, Hashorva and Liu12]

In this subsection we present a modified version of Theorem 2.1 from [Reference Dębicki, Hashorva and Liu12], which is crucial for proving Theorem 2.1. Let $X_{u,\boldsymbol{l}}(\boldsymbol{t}),\, \boldsymbol{t}\in E\subset \mathbb{R}^n, \boldsymbol{l}\in K_u \subset \mathbb{R}^m, m\geq 1$ be a family of Gaussian random fields with variance 1, where $E\subset \mathbb{R}^n$ is a compact set containing $\textbf{0}$ and $K_u\neq \emptyset$ . Moreover, assume that $g_{u,\boldsymbol{l}}, \boldsymbol{l}\in K_u$ is a family of functions on E and that $u_{\boldsymbol{l}}, \boldsymbol{l}\in K_u$ are positive functions of u satisfying $\lim_{u\rightarrow\infty}\inf_{\boldsymbol{l}\in K_u}u_{\boldsymbol{l}}=\infty$ . To obtain the uniform asymptotics of

\begin{align*}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in E}\frac{X_{u,\boldsymbol{l}}(\boldsymbol{t})} {1+g_{u,\boldsymbol{l}}(\boldsymbol{t})}>u_{\boldsymbol{l}} \right \} \end{align*}

with respect to $\boldsymbol{l}\in K_u$ , we impose the following assumptions.

(C1) There exists a function g such that

\begin{eqnarray*}\lim_{u\rightarrow\infty}\sup\nolimits_{\boldsymbol{l}\in K_u}\sup\nolimits_{\boldsymbol{t}\in E}\left|u_{\boldsymbol{l}}^2g_{u,\boldsymbol{l}}(\boldsymbol{t})-g(\boldsymbol{t})\right|=0.\end{eqnarray*}

(C2) There exists a centered Gaussian random field $V(\boldsymbol{t}), \boldsymbol{t}\in E$ with $V(\textbf{0})=0$ such that

\begin{eqnarray*}\lim_{u\rightarrow\infty}\sup\nolimits_{\boldsymbol{l}\in K_u}\sup\nolimits_{\boldsymbol{s},\boldsymbol{t}\in E}\left|u_{\boldsymbol{l}}^2{\mathrm{Var}}(X_{u,\boldsymbol{l}} (\boldsymbol{t})-X_{u,\boldsymbol{l}}(\boldsymbol{s}))-2{\mathrm{Var}}(V(\boldsymbol{t})-V(\boldsymbol{s}))\right|=0.\end{eqnarray*}

(C3) There exist $\gamma\in (0,2]$ and $\mathcal{C}>0$ such that, for sufficiently large u,

\begin{eqnarray*}\sup\nolimits_{\boldsymbol{l}\in K_u}\sup\nolimits_{\boldsymbol{s}\neq \boldsymbol{t}, \boldsymbol{s},\boldsymbol{t}\in E}u_{\boldsymbol{l}}^2\frac{{\mathrm{Var}}(X_{u,\boldsymbol{l}}(\boldsymbol{t})-X_{u,\boldsymbol{l}}(\boldsymbol{s}))}{\sum_{i=1}^n|s_i-t_i|^{\gamma}}\leq \mathcal{C}.\end{eqnarray*}

At the beginning of Section 4, we noted that in the proof of Theorem 2.1 we would determine the precise asymptotics of the suprema for a collection of appropriately scaled Gaussian fields $X_{u,\boldsymbol{l}}$ . The set of assumptions (C1)–(C3) is accommodated to these scaled processes. In Section 4.2 we demonstrate that (A1) for X guarantees that (C2) and (C3) are uniformly satisfied for all $X_{u,\boldsymbol{l}}$ . In addition, (A2) ensures that (C1) holds.

Lemma 4.1. Let $X_{u,\boldsymbol{l}}(\boldsymbol{t}), \boldsymbol{t}\in E\subset \mathbb{R}^n, \boldsymbol{l}\in K_u$ be a family of Gaussian random fields with variance 1, $g_{u,\boldsymbol{l}}, \boldsymbol{l}\in K_u$ be functions defined on E and $u_{\boldsymbol{l}}, \boldsymbol{l}\in K_u$ be positive constants. If (C1)–(C3) are satisfied then

\begin{eqnarray*}\lim_{u\rightarrow\infty}\sup\nolimits_{\boldsymbol{l}\in K_u}\left\lvert \frac{\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in E}({X_{u,\boldsymbol{l}}(\boldsymbol{t})}/ {(1+g_{u,\boldsymbol{l}}(\boldsymbol{t}))})>u_{\boldsymbol{l}} \right \} }{\Psi(u_{\boldsymbol{l}})}- \mathcal{P}^{g}_V\left(E\right) \right\rvert=0,\end{eqnarray*}

where

\begin{eqnarray*}\mathcal{P}^g_V\left(E\right)=\mathbb{E}\left\{\sup\nolimits_{\boldsymbol{t}\in E}{\mathrm{e}}^{ \sqrt{2}V(\boldsymbol{t})-\sigma_V^2(\boldsymbol{t})-g(\boldsymbol{t})}\right\}.\end{eqnarray*}

4.2. Proof of Theorem 2.1

To simplify notation, we assume, without loss of generality, that $\Lambda_0=\{1,\dots,k_0\}$ and $\Lambda_i=\{k_{i-1}+1,\dots, k_i\}$ for $i=1,2,3$ . Thus, we have $\mathcal{M}^*=\{\boldsymbol{t}\in\mathcal{A}\colon t_{i}=0,\ i=k_0+1,\ldots, n\}$ and $\mathcal{M}=\{\tilde{\boldsymbol{t}}\colon \boldsymbol{t}\in\mathcal{A}, t_{i}=0,\ i=k_0+1,\ldots, n\}$ . In the following, we present the proof of Theorem 2.1, postponing some tedious calculations to Appendix A.

4.2.1. Step 1

We divide $\mathcal{A}$ into two sets, i.e.

\begin{align*}E_2(u)=\{\boldsymbol{t}\in\mathcal{A}\colon t_{i}\in[0,\delta_i(u)], k_0+1\leq i\leq n\}, \qquad \delta_i(u)=\left(\frac{\ln u}{u}\right)^{2/\beta_i}, \quad k_0+1\leq i\leq n,\end{align*}

a neighborhood of $\mathcal{M}^*$ , the set maximizing the variance of X(t) (with high probability the supremum is realized in $E_2(u)$ ), and the set $\mathcal{A}\setminus E_2(u)$ , over which the probability associated with the supremum is asymptotically negligible. For the lower bound, we only consider the process over

\begin{align*}E_1(u)=&\{\boldsymbol{t}\in\mathcal{A}\colon t_{i}\in[0,\delta_i(u)], k_0+1\leq i\leq k_1; t_{i}\in[0, u^{-2/\alpha_i}\lambda], k_1+1\leq i\leq k_2 ;\\&t_{i}=0, k_2+1\leq i\leq k_3\}, \quad \lambda>0,\end{align*}

a neighborhood of $\mathcal{M}^*$ .

To simplify notation, for $\Delta_1, \Delta_2 \subseteq\mathbb{R}^{n}$ , let

\begin{eqnarray*}\mathbf{P}_u\left(\Delta_1\right)\,:\!=\,\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in\Delta_1}X(\boldsymbol{t})>u \right \} ,\!\!\quad\mathbf{P}_u\left(\Delta_1,\Delta_2 \right)\,:\!=\,\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in\Delta_1}X(\boldsymbol{t})>u,\sup\nolimits_{\boldsymbol{t}\in\Delta_2}X(\boldsymbol{t})>u \right \}\!.\end{eqnarray*}

For any $u>0$ , we have

(4.1) \begin{eqnarray} \quad \mathbf{P}_u\left(E_1(u)\right)\leq \mathbf{P}_u\left(\mathcal{A}\right)\leq \mathbf{P}_u\left(E_2(u)\right)+\mathbf{P}_u\left(\mathcal{A}\setminus E_2(u)\right)\!.\end{eqnarray}

Note that, in light of [Reference Piterbarg31, Theorem 8.1], by (2.4) in assumption (A1) and (2.7) in assumption (A2), for sufficiently large u,

(4.2) \begin{eqnarray}\mathbf{P}_u\left(\mathcal{A}\setminus E_2(u)\right)\leq\mathbb{Q} v_n(\mathcal{A})u^{\sum_{i=1}^n{2}/{\alpha_i}}\Psi \left(\frac{u}{1-\mathbb{Q}_1({\ln u}/{u})^2}\right).\end{eqnarray}

4.2.2. Step 2

We divide $\mathcal{M}$ into small hypercubes such that

\begin{align*}\bigcup_{{\boldsymbol{r}}\in V^{-}}\mathcal{M}_{{\boldsymbol{r}}}\subset \mathcal{M}\subset \bigcup_{{\boldsymbol{r}}\in V^{+}}\mathcal{M}_{{\boldsymbol{r}}},\end{align*}

where

\begin{align*}\mathcal{M}_{{\boldsymbol{r}}}=\prod_{i=1}^{k_0}[r_iv,(r_i+1)v], \quad {\boldsymbol{r}}=(r_1,\dots,r_{k_0}), \,r_i\in \mathbb{Z}, \,1\leq i\leq k_0,\, v>0,\end{align*}

and

\begin{align*}V^{+}\,:\!=\,\{{\boldsymbol{r}}\colon \mathcal{M}_{{\boldsymbol{r}}}\cap \mathcal{M}\neq \emptyset\}, \qquad V^{-}\,:\!=\,\{{\boldsymbol{r}}\colon \mathcal{M}_{{\boldsymbol{r}}}\subset\mathcal{M}\}.\end{align*}

For fixed ${\boldsymbol{r}}$ , we analyze the supremum of X over a set related to $\mathcal{M}_{{\boldsymbol{r}}}$ . For this, let

\begin{align*}E_{1,{\boldsymbol{r}}}(u)=&\{\boldsymbol{t}\colon \tilde{\boldsymbol{t}}\in \mathcal{M}_{{\boldsymbol{r}}}\,;\, t_{i}\in[0,\delta_i(u)], k_0+1\leq i\leq k_1\,;\, t_{i}\in[0, u^{-2/\alpha_i}\lambda], k_1+1\leq i\leq k_2 \,;\,\\& t_{i}=0, k_2+1\leq i\leq k_3\},\\E_{2,{\boldsymbol{r}}}(u)=&\{\boldsymbol{t}\colon \tilde{\boldsymbol{t}}\in \mathcal{M}_{{\boldsymbol{r}}}\,;\, t_{i}\in[0,\delta_i(u)], k_0+1\leq i\leq n\}.\end{align*}

Moreover, define an auxiliary set

\begin{align*}E_{3,{\boldsymbol{r}}}(u)=\{ (\tilde{\boldsymbol{t}}, \bar{\boldsymbol{t}}_1,\bar{\boldsymbol{t}}_2)\colon \tilde{\boldsymbol{t}}\in\mathcal{M}_{{\boldsymbol{r}}}, t_{i}\in[0,\delta_i(u)], k_0+1\leq i\leq k_2 \}.\end{align*}

We next focus on $\mathbf{P}_u(E_{1,{\boldsymbol{r}}}(u))$ and $\mathbf{P}_u(E_{2,{\boldsymbol{r}}}(u))$ . The idea of the proof of this step is first to split $E_{1,{\boldsymbol{r}}}(u)$ and $E_{2,{\boldsymbol{r}}}(u)$ into tiny hyperrectangles and uniformly derive the tail probability asymptotics on each hyperrectangle. Then, we apply the Bonferroni inequality to demonstrate that the asymptotics over $E_{i,{\boldsymbol{r}}}(u)$ for $i=1,2$ are the sum of the asymptotics over the corresponding hyperrectangles, respectively.

To this end, we introduce the following notation. For some $\lambda>0$ , let

\begin{gather*}I_{u,i}(l)=\left[l\frac{\lambda}{u^{2/\alpha_i}},(l+1) \frac{\lambda}{u^{2/\alpha_i}}\right],\quad l\in\mathbb{N},\\ \boldsymbol{l}=(l_1,\dots,l_n), \qquad \boldsymbol{l}_j=(l_{k_{j-1}+1},\dots, l_{k_j}), \quad j=1,2,\\\mathcal{D}_{u}(\boldsymbol{l})=\left(\prod_{i=1}^{k_2}I_{u,i}(l_i)\right)\times \prod_{i=k_2+1}^n[0, \epsilon u^{-2/\alpha_i}],\\\mathcal{C}_u(\boldsymbol{l})=\left(\prod_{i=1}^{k_1}I_{u,i}(l_i)\right)\times \prod_{i=k_1+1}^{k_2}[0, \lambda u^{-2/\alpha_i}]\times \overline{\boldsymbol{0}}_3,\end{gather*}

with $\overline{\boldsymbol{0}}_3=(0,\dots,0)\in\mathbb{R}^{n-k_2}$ and

\begin{eqnarray*}M_i(u)=\left\lfloor\frac{vu^{2/\alpha_i}}{\lambda}\right\rfloor,\quad 1\leq i\leq k_0, \qquad M_i(u)=\left\lfloor\frac{\delta_i(u)u^{2/\alpha_i}}{\lambda}\right\rfloor, \quad k_0+1\leq i\leq k_2.\end{eqnarray*}

In order to derive an upper bound for $\mathbf{P}_u(E_{2,{\boldsymbol{r}}}(u))$ and a lower bound for $\mathbf{P}_u(E_{1,{\boldsymbol{r}}}(u))$ , we introduce the following notation for some $\epsilon\in(0,1)$ :

\begin{gather*}\mathcal{L}_{1}(u)=\left\{\boldsymbol{l}\colon \prod_{i=1}^{k_2}I_{u,i}(l_i)\subset E_{3,{\boldsymbol{r}}}(u), l_i=0, k_1+1\leq i\leq n\right\},\\\mathcal{L}_{2}(u)=\left\{\boldsymbol{l}\colon \left(\prod_{i=1}^{k_2}I_{u,i}(l_i)\right)\cap E_{3,{\boldsymbol{r}}}(u)\neq\emptyset, l_i=0, k_1+1\leq i\leq n\right\},\\\mathcal{L}_{3}(u)=\left\{\boldsymbol{l}\colon \left(\prod_{i=1}^{k_2}I_{u,i}(l_i)\right)\cap E_{3,{\boldsymbol{r}}}(u)\neq\emptyset, \sum_{i=k_1+1}^{k_2}l_i^2>0, l_i=0, k_2+1\leq i\leq n\right\},\\\mathcal{K}_1(u)=\left\{(\boldsymbol{l},\boldsymbol{j})\colon \boldsymbol{l},\boldsymbol{j}\in\mathcal{L}_1(u),\mathcal{C}_u(\boldsymbol{l})\cap \mathcal{C}_u(\boldsymbol{j})\neq\emptyset \right\},\\\mathcal{K}_2(u)=\left\{(\boldsymbol{l},\boldsymbol{j})\colon \boldsymbol{l},\boldsymbol{j}\in\mathcal{L}_1(u),\mathcal{C}_u(\boldsymbol{l})\cap \mathcal{C}_u(\boldsymbol{j})=\emptyset \right\},\\u^{-\epsilon}_{\boldsymbol{l}_1}=u\left(1+(1-\epsilon) \inf_{\bar{\boldsymbol{t}}_1\in [\boldsymbol{l}_1,\boldsymbol{l}_1+1]}p_{1,{\boldsymbol{r}}}^- g_1(u^{-2/\boldsymbol{\alpha}_1}\lambda\bar{\boldsymbol{t}}_1)\right),\\u^{+\epsilon}_{\boldsymbol{l}_1}=u\left(1+(1+\epsilon)\sup\nolimits_{\bar{\boldsymbol{t}}_1\in [\boldsymbol{l}_1,\boldsymbol{l}_1+1]}p_{1,{\boldsymbol{r}}}^+ g_1(u^{-2/\boldsymbol{\alpha}_1}\lambda\bar{\boldsymbol{t}}_1)\right),\\p_{j,{\boldsymbol{r}}}^+=\sup\nolimits_{\tilde{\boldsymbol{z}}\in \mathcal{M}_{{\boldsymbol{r}}}}p_j(\tilde{\boldsymbol{z}}), \qquad p_{j,{\boldsymbol{r}}}^-=\inf_{\tilde{\boldsymbol{z}}\in \mathcal{M}_{{\boldsymbol{r}}}}p_j(\tilde{\boldsymbol{z}}), \quad j=1,2,3.\end{gather*}

The Bonferroni inequality gives, for sufficiently large u,

(4.3) \begin{eqnarray}\mathbf{P}_u\left(E_{1,{\boldsymbol{r}}}(u)\right)\geq \sum_{\boldsymbol{l}\in\mathcal{L}_{1}(u)}\mathbf{P}_u\left(\mathcal{C}_u(\boldsymbol{l})\right)-\sum_{i=1}^{2}\Gamma_i(u),\end{eqnarray}
(4.4) \begin{eqnarray}\mathbf{P}_u\left(E_{2,{\boldsymbol{r}}}(u)\right)\leq \sum_{\boldsymbol{l}\in\mathcal{L}_{2}(u)}\mathbf{P}_u\left(\mathcal{D}_{u}(\boldsymbol{l})\right)+\sum_{\boldsymbol{l}\in\mathcal{L}_{3}(u)}\mathbf{P}_u\left(\mathcal{D}_{u}(\boldsymbol{l})\right)\!,\end{eqnarray}

where

\begin{eqnarray*}\Gamma_i(u)=\sum_{(\boldsymbol{l},\boldsymbol{j})\in\mathcal{K}_i(u)} \mathbf{P}_u\left(\mathcal{C}_u(\boldsymbol{l}),\mathcal{C}_u(\boldsymbol{j}) \right),\quad i=1,2.\end{eqnarray*}
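The lower bound (4.3) rests on the Bonferroni inequality $\mathbb{P}(\bigcup_i A_i)\ge \sum_i\mathbb{P}(A_i)-\sum_{i<j}\mathbb{P}(A_i\cap A_j)$. As a toy illustration (unrelated to the fields studied here), it can be verified exactly on a finite uniform probability space with randomly generated events:

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(3)
omega = 200                       # uniform sample space {0, ..., 199}
events = [set(rng.choice(omega, size=40, replace=False)) for _ in range(5)]

def prob(A):
    """Probability of an event under the uniform measure on omega points."""
    return len(A) / omega

union = set().union(*events)
bonferroni = sum(prob(A) for A in events) - sum(
    prob(A & B) for A, B in combinations(events, 2))

# P(union of A_i) >= sum_i P(A_i) - sum_{i<j} P(A_i intersect A_j)
assert prob(union) >= bonferroni - 1e-12
```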

We first derive the upper bound of $\mathbf{P}_u\left(E_{2,{\boldsymbol{r}}}(u)\right)$ as $u\to\infty$ . To this end, we need to find the upper bounds of $\sum_{\boldsymbol{l}\in\mathcal{L}_{j}(u)}\mathbf{P}_u\left(\mathcal{D}_{u}(\boldsymbol{l})\right), j=2,3$ , separately.

Upper bound for $\sum_{\boldsymbol{l}\in\mathcal{L}_{2}(u)}\mathbf{P}_u\left(\mathcal{D}_{u}(\boldsymbol{l})\right)$ . By (2.6) in assumption (A2), we have, for sufficiently large u,

\begin{align*}\sum_{\boldsymbol{l}\in\mathcal{L}_{2}(u)}\mathbf{P}_u\left(\mathcal{D}_{u}(\boldsymbol{l})\right)& \leq \sum_{\boldsymbol{l}\in\mathcal{L}_{2}(u)}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in\mathcal{D}_{u}(\boldsymbol{l})}\frac{\overline{X}(\boldsymbol{t})}{1+(1-\epsilon)p_{2,{\boldsymbol{r}}}^-g_2(\bar{\boldsymbol{t}}_2)} >u_{\boldsymbol{l}_1}^{-\epsilon} \right \} \\ &=\sum_{\boldsymbol{l}\in\mathcal{L}_{2}(u)}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in E(\boldsymbol{l},u)}\frac{X_{u,\boldsymbol{l}}(\boldsymbol{t})}{1+(1-\epsilon)p_{2,{\boldsymbol{r}}}^-g_2(u^{-2/\boldsymbol{\alpha}_2}(\boldsymbol{a}_2(\tilde{\boldsymbol{z}}(\boldsymbol{l},u)))^{-1}\bar{\boldsymbol{t}}_2)} >u_{\boldsymbol{l}_1}^{-\epsilon} \right \}\!,\end{align*}

where

(4.5) \begin{eqnarray}X_{u,\boldsymbol{l}}(\boldsymbol{t})=\overline{X}\left(u^{-2/\alpha_1}(l_1\lambda+(a_{1}(\tilde{\boldsymbol{z}}(\boldsymbol{l},u)))^{-1}t_1),\dots, u^{-2/\alpha_n}(l_n\lambda+(a_{n}(\tilde{\boldsymbol{z}}(\boldsymbol{l},u)))^{-1}t_n)\right)\!,\end{eqnarray}

with

\begin{align*}\tilde{\boldsymbol{z}}(\boldsymbol{l},u)=(u^{-2/\alpha_1}l_1, \dots, u^{-2/\alpha_{k_0}}l_{k_0})\end{align*}

and

\begin{align*}E(\boldsymbol{l},u)=\left(\prod_{i=1}^{k_2}[0, a_i(\tilde{\boldsymbol{z}}(\boldsymbol{l},u))\lambda]\right)\times \prod_{i=k_2+1}^{n}[0, a_i(\tilde{\boldsymbol{z}}(\boldsymbol{l},u))\epsilon].\end{align*}

Note that by (2.7) in assumption (A2),

\begin{align*}u^{-2}g_{2,{\boldsymbol{r}}}^-(\bar{\boldsymbol{t}}_2)\leq g_2(u^{-2/\boldsymbol{\alpha}_2}(\boldsymbol{a}_2(\tilde{\boldsymbol{z}}(\boldsymbol{l},u)))^{-1}\bar{\boldsymbol{t}}_2)=u^{-2}g_2((\boldsymbol{a}_2(\tilde{\boldsymbol{z}}(\boldsymbol{l},u)))^{-1}\bar{\boldsymbol{t}}_2)\leq u^{-2}g_{2,{\boldsymbol{r}}}^+(\bar{\boldsymbol{t}}_2),\end{align*}

where

\begin{align*}g_{2,{\boldsymbol{r}}}^-(\bar{\boldsymbol{t}}_2)=\inf_{\tilde{\boldsymbol{z}}\in \mathcal{M}_{{\boldsymbol{r}}}}g_2((\boldsymbol{a}_2(\tilde{\boldsymbol{z}}))^{-1}\bar{\boldsymbol{t}}_2), \qquad g_{2,{\boldsymbol{r}}}^+(\bar{\boldsymbol{t}}_2)=\sup\nolimits_{\tilde{\boldsymbol{z}}\in \mathcal{M}_{{\boldsymbol{r}}}}g_2((\boldsymbol{a}_2(\tilde{\boldsymbol{z}}))^{-1}\bar{\boldsymbol{t}}_2).\end{align*}

Moreover,

\begin{align*} E_{{\boldsymbol{r}}}^-\subset E(\boldsymbol{l},u)\subset E_{{\boldsymbol{r}}}^+,\end{align*}

where

\begin{align*}E_{{\boldsymbol{r}}}^+\,:\!=\,\left(\prod_{i=1}^{k_2}[0, a_{i,{\boldsymbol{r}}}^+\lambda]\right)\times \prod_{i=k_2+1}^{n}[0,a_{i,{\boldsymbol{r}}}^+\epsilon], \qquad E_{{\boldsymbol{r}}}^-\,:\!=\,\left(\prod_{i=1}^{k_2}[0, a_{i,{\boldsymbol{r}}}^-\lambda]\right)\times \prod_{i=k_2+1}^{n}[0,a_{i,{\boldsymbol{r}}}^-\epsilon]\end{align*}

with

\begin{align*}a_{i,{\boldsymbol{r}}}^+=\sup\nolimits_{\tilde{\boldsymbol{z}}\in \mathcal{M}_{{\boldsymbol{r}}}}a_i(\tilde{\boldsymbol{z}}), \qquad a_{i,{\boldsymbol{r}}}^-=\inf_{\tilde{\boldsymbol{z}}\in \mathcal{M}_{{\boldsymbol{r}}}}a_i(\tilde{\boldsymbol{z}}).\end{align*}

Hence,

(4.6) \begin{eqnarray}\sum_{\boldsymbol{l}\in\mathcal{L}_{2}(u)}\mathbf{P}_u\left(\mathcal{D}_{u}(\boldsymbol{l})\right)\leq \sum_{\boldsymbol{l}\in\mathcal{L}_{2}(u)}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in E_{{\boldsymbol{r}}}^+}\frac{X_{u,\boldsymbol{l}}(\boldsymbol{t})}{1+(1-\epsilon)u^{-2}p_{2,{\boldsymbol{r}}}^-g_{2,{\boldsymbol{r}}}^-(\bar{\boldsymbol{t}}_2)} >u_{\boldsymbol{l}_1}^{-\epsilon} \right \} .\end{eqnarray}

Applying Lemma 4.1, we obtain

(4.7) \begin{eqnarray}\sum_{\boldsymbol{l}\in\mathcal{L}_{2}(u)}\mathbf{P}_u\left(\mathcal{D}_{u} (\boldsymbol{l})\right) \leq \frac{\mathcal{H}_W^{p_{2,{\boldsymbol{r}}}^-g_{2,{\boldsymbol{r}}}^-(\bar{ \boldsymbol{t}}_2)}(\prod_{i=1}^{k_2}[0, a_{i,{\boldsymbol{r}}}^+\lambda])}{\lambda^{k_1}}v^{k_0}\Theta^-(u),\quad u\rightarrow\infty.\end{eqnarray}

We refer to Appendix A.1 for the detailed calculations proving (4.7).

Upper bound for $\sum_{\boldsymbol{l}\in\mathcal{L}_{3}(u)}\mathbf{P}_u\left(\mathcal{D}_{u}(\boldsymbol{l})\right)$ . We find a tight asymptotic upper bound for the second term displayed on the right-hand side of (4.4) using an approach similar to that used in deriving (4.7). For $\lambda>1$ , we get

(4.8) \begin{eqnarray}\sum_{\boldsymbol{l}\in\mathcal{L}_{3}(u)}\mathbf{P}_u\left(\mathcal{D}_{u} (\boldsymbol{l})\right) \leq \mathbb{Q}_3\lambda^{k_2-k_1}{\mathrm{e}}^{-\mathbb{Q}_2\lambda^{\beta^*}}v^{k_0}\Theta^-(u), \quad u\rightarrow\infty,\end{eqnarray}

where $\beta^*=\min_{i=k_1+1}^{k_2}(\beta_i).$ The detailed derivation of inequality (4.8) can be found in Appendix A.2.

Upper bound for $\mathbf{P}_u(E_{2,{\boldsymbol{r}}}(u))$ . The combination of (4.7) and (4.8) yields, for $\lambda>1$ and $u\rightarrow\infty$ ,

(4.9) \begin{eqnarray}\mathbf{P}_u\left(E_{2,{\boldsymbol{r}}}(u)\right)\leq \left(\frac{\mathcal{H}_W^{p_{2,{\boldsymbol{r}}}^-g_{2,{\boldsymbol{r}}}^- (\bar{\boldsymbol{t}}_2)}(\prod_{i=1}^{k_2}[0, a_{i,{\boldsymbol{r}}}^+\lambda])}{\lambda^{k_1}}+\mathbb{Q}_3\lambda^{k_2-k_1}{\mathrm{e}}^{-\mathbb{Q}_2\lambda^{\beta^*}}\right)v^{k_0}\Theta^-(u).\end{eqnarray}

Next, we find a lower bound for $\mathbf{P}_u(E_{1,{\boldsymbol{r}}}(u))$ as $u\to\infty$ . To do this, we need to derive a lower bound for $\sum_{\boldsymbol{l}\in\mathcal{L}_{1}(u)}\mathbf{P}_u\left(\mathcal{C}_u(\boldsymbol{l})\right)$ and upper bounds for $\Gamma_i(u)$ , where $i=1,2$ .

Lower bound for $\sum_{\boldsymbol{l}\in\mathcal{L}_{1}(u)}\mathbf{P}_u\left(\mathcal{C}_u(\boldsymbol{l})\right)$ . Analogously to (4.7), we derive, as $u\rightarrow\infty, \epsilon\rightarrow 0$ ,

(4.10) \begin{eqnarray}\sum_{\boldsymbol{l}\in\mathcal{L}_{1}(u)}\mathbf{P}_u\left(\mathcal{C}_u (\boldsymbol{l})\right)\geq\frac{\mathcal{H}_W^{p_{2,{\boldsymbol{r}}}^+g_{2,{\boldsymbol{r}}}^+ (\bar{\boldsymbol{t}}_2)}\left(\prod_{i=1}^{k_2}[0, a_{i,{\boldsymbol{r}}}^-\lambda]\right)}{\lambda^{k_1}}v^{k_0}\Theta^+(u).\end{eqnarray}

Upper bound for $\Gamma_i(u),\ i=1,2$ . Applying an approach analogous to that of the proof of Theorem 8.2 in [Reference Piterbarg31], we have, for $\lambda>1$ , as $u\to\infty$ ,

(4.11) \begin{align}\Gamma_1(u)&\leq\mathbb{Q}_4\lambda^{-1/2}\lambda^{2k_2-k_1} v^{k_0}\Theta^-(u), \end{align}

(4.12) \begin{align}\Gamma_{2}(u)&\leq\mathbb{Q}_{5}\lambda^{2k_2-k_1}{\mathrm{e}}^{-\mathbb{Q}_{6}\lambda^{\alpha^*}}v^{k_0}\Theta^-(u),\end{align}

where $\alpha^*=\max(\alpha_1,\dots, \alpha_{k_1})$ and $\mathbb{Q}_i,\ i=4,5,6$ are some positive constants.

Lower bound for $\mathbf{P}_u(E_{1,{\boldsymbol{r}}}(u))$ . Inserting (4.10), (4.11), and (4.12) into (4.3), we obtain, for $\lambda>1$ , as $u\rightarrow\infty$ ,

(4.13) \begin{align}&\mathbf{P}_u(E_{1,{\boldsymbol{r}}}(u))\nonumber\\&\quad\geq \bigg(\frac{\mathcal{H}_W^{p_{2,{\boldsymbol{r}}}^+g_{2,{\boldsymbol{r}}}^+ (\bar{\boldsymbol{t}}_2)}(\prod_{i=1}^{k_2}[0, a_{i,{\boldsymbol{r}}}^-\lambda])}{\lambda^{k_1}}-\mathbb{Q}_4\lambda^{-1/2}- \mathbb{Q}_{5}\lambda^{2k_2-k_1}{\mathrm{e}}^{-\mathbb{Q}_{6}\lambda^{\alpha^*}}\bigg) v^{k_0}\Theta^+(u).\end{align}

4.2.3. Step 3

In this step of the proof, we sum up the asymptotics derived in step 2. Set

\begin{align*}\Theta_1(u)=u^{\sum_{i=1}^{k_1}{2}/{\alpha_i}-\sum_{i=k_0+1}^{k_1}{2}/{\beta_i}}\Psi(u).\end{align*}

Letting $\lambda\rightarrow \infty$ in (4.9) and (4.13), it follows that

(4.14) \begin{align}\mathbf{P}_u\left(E_{1,{\boldsymbol{r}}}(u)\right) &\geq \mathcal{H}_W^{p_{2,{\boldsymbol{r}}}^+g_{2,{\boldsymbol{r}}}^+(\bar{\boldsymbol{t}}_2)}\prod_{i=1}^{k_1} a_{i,{\boldsymbol{r}}}^-\int_{\bar{\boldsymbol{t}}_1\in[0,\infty)^{k_1-k_0}} {\mathrm{e}}^{-p_{1,{\boldsymbol{r}}}^+ g_1(\bar{\boldsymbol{t}}_1)}\,{\mathrm{d}}\bar{\boldsymbol{t}}_1\, v^{k_0}\Theta_1(u),\nonumber\\ \mathbf{P}_u\left(E_{2,{\boldsymbol{r}}}(u)\right) &\leq \mathcal{H}_W^{p_{2,{\boldsymbol{r}}}^-g_{2,{\boldsymbol{r}}}^-(\bar{\boldsymbol{t}}_2)}\prod_{i=1}^{k_1} a_{i,{\boldsymbol{r}}}^+\int_{\bar{\boldsymbol{t}}_1\in[0,\infty)^{k_1-k_0}} {\mathrm{e}}^{-p_{1,{\boldsymbol{r}}}^- g_1(\bar{\boldsymbol{t}}_1)}\,{\mathrm{d}}\bar{\boldsymbol{t}}_1\, v^{k_0}\Theta_1(u).\end{align}

We sum $\mathbf{P}_u(E_{1,{\boldsymbol{r}}}(u))$ (and $\mathbf{P}_u(E_{2,{\boldsymbol{r}}}(u))$ ) with respect to ${\boldsymbol{r}}$ to obtain a lower bound for $\mathbf{P}_u(E_1(u))$ (and an upper bound for $\mathbf{P}_u(E_2(u))$ ). Observe that

(4.15) \begin{align}\mathbf{P}_u\left(E_1(u)\right)&\geq \sum_{{\boldsymbol{r}}\in V^-}\mathbf{P}_u\left(E_{1,{\boldsymbol{r}}}(u)\right)-\sum_{{\boldsymbol{r}}, {\boldsymbol{r}}'\in V^-, {\boldsymbol{r}}\neq {\boldsymbol{r}}'}\mathbf{P}_u\left(E_{1,{\boldsymbol{r}}}(u), E_{1,{\boldsymbol{r}}'}(u)\right),\\\mathbf{P}_u\left(E_2(u)\right)&\leq \sum_{{\boldsymbol{r}}\in V^+}\mathbf{P}_u\left(E_{2,{\boldsymbol{r}}}(u)\right).\nonumber\end{align}

By applying (4.14) and demonstrating that the double-sum term in (4.15) is asymptotically negligible, we obtain

(4.16) \begin{align}\liminf_{u \to \infty} \frac{\mathbf{P}_u\left(E_{1}(u)\right)}{\Theta_1(u)}\geq \int_{\mathcal{M}}\left(\mathcal{H}_W^{p_2(\tilde{\boldsymbol{z}})g_2(\boldsymbol{a}_2^{-1}(\tilde{\boldsymbol{z}})\bar{\boldsymbol{t}}_2)} \left(\prod_{i=1}^{k_1}a_i(\tilde{\boldsymbol{z}})\right) \int_{\bar{\boldsymbol{t}}_1 \in [0,\infty)^{k_1-k_0}} {\mathrm{e}}^{-p_1(\tilde{\boldsymbol{z}})g_1(\bar{\boldsymbol{t}}_1)} \,{\mathrm{d}}\bar{\boldsymbol{t}}_1\right)\,{\mathrm{d}}\tilde{\boldsymbol{z}}\end{align}

and

(4.17) \begin{align}\limsup_{u\rightarrow\infty}\frac{\mathbf{P}_u\left(E_2(u)\right)}{\Theta_1(u)}\leq\int_{\mathcal{M}}\left(\mathcal{H}_W^{p_2(\tilde{\boldsymbol{z}})g_2(\boldsymbol{a}_2^{-1}(\tilde{\boldsymbol{z}})\bar{\boldsymbol{t}}_2)} \left(\prod_{i=1}^{k_1}a_i(\tilde{\boldsymbol{z}})\right) \int_{\bar{\boldsymbol{t}}_1\in[0,\infty)^{k_1-k_0}} {\mathrm{e}}^{-p_1(\tilde{\boldsymbol{z}})g_1(\bar{\boldsymbol{t}}_1)} \,{\mathrm{d}}\bar{\boldsymbol{t}}_1\right) \,{\mathrm{d}}\tilde{\boldsymbol{z}},\end{align}

as $v\rightarrow 0$ . The detailed derivation of (4.16) and (4.17) is delegated to Appendix A.3.

The proof is completed by combining (4.16) and (4.17) with (4.1) and (4.2).

Appendix A. Complementary derivations for the proof of Theorem 2.1

In this section we provide detailed derivations of (4.7), (4.8), (4.16), and (4.17), and we prove the positivity of $\mathcal{H}_W^{g_{2}(\bar{\boldsymbol{t}}_2)}$ .

A.1. Proof of (4.7)

We begin with aligning the notation used in Lemma 4.1 with that used in Theorem 2.1. Let $X_{u,\boldsymbol{l}}$ be as in (4.5), and let

\begin{align*}u_{\boldsymbol{l}}=u_{\boldsymbol{l}_1}^{-\epsilon},\qquad g_{u,\boldsymbol{l}}(\boldsymbol{t})=(1-\epsilon)u^{-2} p_{2,{\boldsymbol{r}}}^-g_{2,{\boldsymbol{r}}}^-(\bar{\boldsymbol{t}}_2),\qquad K_{u}=\mathcal{L}_{2}(u).\end{align*}

We note that $\lim_{u\rightarrow\infty}\inf_{\boldsymbol{l}\in \mathcal{L}_2(u)}u_{\boldsymbol{l}_1}^{-\epsilon}=\infty$ , which combined with continuity of $g_2$ implies that

\begin{align*}\lim_{u\rightarrow\infty}\sup\nolimits_{\boldsymbol{l}\in K_u}\sup\nolimits_{\boldsymbol{t}\in E_{{\boldsymbol{r}}}^+}\left|u_{\boldsymbol{l}}^2g_{u,\boldsymbol{l}}(\boldsymbol{t})-(1-\epsilon)p_{2,{\boldsymbol{r}}}^-g_{2,{\boldsymbol{r}}}^-(\bar{\boldsymbol{t}}_2)\right|=0.\end{align*}

Therefore, (C1) holds with $g(\boldsymbol{t})=(1-\epsilon)p_{2,{\boldsymbol{r}}}^-g_{2,{\boldsymbol{r}}}^-(\bar{\boldsymbol{t}}_2)$ . By (2.2) and (2.3) in assumption (A1), using the homogeneity of the increments of W for fixed $\bar{\boldsymbol{t}}_2$ and $ \bar{\boldsymbol{t}}_3$ , we have

\begin{eqnarray*}\lim_{u\rightarrow\infty}\sup\nolimits_{\boldsymbol{l}\in K_u}\sup\nolimits_{\boldsymbol{s},\boldsymbol{t}\in E_{{\boldsymbol{r}}}^+}\left|u_{\boldsymbol{l}}^2{\mathrm{Var}}(X_{u,\boldsymbol{l}}(\boldsymbol{t})-X_{u,\boldsymbol{l}}(\boldsymbol{s}))-2{\mathrm{Var}}(W(\boldsymbol{t})-W(\boldsymbol{s}))\right|=0.\end{eqnarray*}

Hence, (C2) is satisfied with the limiting stochastic process W defined in (A1). Assumption (C3) follows directly from (2.4) in assumption (A1). Therefore, we conclude that

(A.1) \begin{align}{\lim_{u\rightarrow\infty}\sup\nolimits_{\boldsymbol{l}\in K_u}\left|\frac{\mathbb{P} \{\sup\nolimits_{\boldsymbol{t}\in E_{{\boldsymbol{r}}}^+}({X_{u,\boldsymbol{l}}(\boldsymbol{t})} / {(1+(1-\epsilon)u^{-2}p_{2,{\boldsymbol{r}}}^-g_{2,{\boldsymbol{r}}}^- (\bar{\boldsymbol{t}}_2))}) >u_{\boldsymbol{l}_1}^{-\epsilon} \} }{\Psi(u_{\boldsymbol{l}_1}^{-\epsilon})}\right.\nonumber\\-\left.\mathcal{H}_W^{(1- \epsilon)p_{2,{\boldsymbol{r}}}^-g_{2,{\boldsymbol{r}}}^- (\bar{\boldsymbol{t}}_2)}\left(E_{{\boldsymbol{r}}}^+\right)\right|=0,}\end{align}

where

\begin{equation*}\mathcal{H}_W^{(1-\epsilon)p_{2,{\boldsymbol{r}}}^-g_{2,{\boldsymbol{r}}}^-( \bar{\boldsymbol{t}}_2)}(E_{{\boldsymbol{r}}}^+)=\mathbb{E} \left\{\sup\nolimits_{\boldsymbol{t}\in E_{{\boldsymbol{r}}}^+}{\mathrm{e}}^{ \sqrt{2}W(\boldsymbol{t})-\sigma^2_W(\boldsymbol{t})-(1-\epsilon) p_{2,{\boldsymbol{r}}}^-g_{2,{\boldsymbol{r}}}^-(\bar{\boldsymbol{t}}_2)}\right\}.\end{equation*}

Therefore, we have, as $u\rightarrow\infty$ ,

(A.2) \begin{align}&\sum_{\boldsymbol{l}\in\mathcal{L}_{2}(u)}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in E}X_{u,\boldsymbol{l}}(\boldsymbol{t}) >u_{\boldsymbol{l}}^{-\epsilon} \right \} \nonumber\\&\qquad\leq \sum_{\boldsymbol{l}\in\mathcal{L}_{2}(u)} \mathcal{H}_W^{(1-\epsilon)p_{2,{\boldsymbol{r}}}^-g_{2,{\boldsymbol{r}}}^-(\bar{\boldsymbol{t}}_2)}(E_{{\boldsymbol{r}}}^+) \Psi(u_{\boldsymbol{l}}^{-\epsilon})\nonumber \\ &\qquad \leq \mathcal{H}_W^{(1-\epsilon)p_{2,{\boldsymbol{r}}}^-g_{2,{\boldsymbol{r}}}^-(\bar{\boldsymbol{t}}_2)}(E_{{\boldsymbol{r}}}^+) \Psi(u)\left(\prod_{i=1}^{k_0}\frac{vu^{2/\alpha_i}}{\lambda}\right)\nonumber \\ &\hskip30pt \times\sum_{i=k_0+1}^{k_1}\sum_{l_i=0}^{M_i(u)} {\mathrm{e}}^{-(1-\epsilon)\inf_{\bar{\boldsymbol{t}}_1\in [\boldsymbol{l}_1,\boldsymbol{l}_1+1]}p_{1,{\boldsymbol{r}}}^- g_1(u^{2/\boldsymbol{\beta}_1-2/\boldsymbol{\alpha}_1}\lambda \bar{\boldsymbol{t}}_1)}\nonumber \\&\hskip30pt \sim\frac{\mathcal{H}_W^{(1-\epsilon)p_{2,{\boldsymbol{r}}}^-g_{2, {\boldsymbol{r}}}^-(\bar{\boldsymbol{t}}_2)}(E_{{\boldsymbol{r}}}^+)}{\lambda^{k_1}} v^{k_0}\Psi(u)u^{\sum_{i=1}^{k_1}{2}/{\alpha_i}-\sum_{i=k_0+1}^{k_1}{2}/{\beta_i}}\nonumber\\&\hskip30pt\times \int_{\bar{\boldsymbol{t}}_1\in[0,\infty)^{k_1-k_0}} {\mathrm{e}}^{-(1-\epsilon)p_{1,{\boldsymbol{r}}}^-g_1(\bar{\boldsymbol{t}}_1)} \,{\mathrm{d}}\bar{\boldsymbol{t}}_1.\end{align}

Note that

\begin{align*}\lim_{\epsilon\rightarrow 0}\mathcal{H}_W^{(1-\epsilon)p_{2,{\boldsymbol{r}}}^-g_{2,{\boldsymbol{r}}}^- (\bar{\boldsymbol{t}}_2)}(E_{{\boldsymbol{r}}}^+)& =\mathbb{E}\left\{\sup\nolimits_{(\tilde{\boldsymbol{t}}, \bar{\boldsymbol{t}}_1,\bar{\boldsymbol{t}}_2)\in\prod_{i=1}^{k_2}[0, a_{i,{\boldsymbol{r}}}^+\lambda]}{\mathrm{e}}^{ \sqrt{2}W(\tilde{\boldsymbol{t}}, \bar{\boldsymbol{t}}_1,\bar{\boldsymbol{t}}_2,\bar{\boldsymbol{0}}_3) -\sigma^2_W(\tilde{\boldsymbol{t}}, \bar{\boldsymbol{t}}_1,\bar{\boldsymbol{t}}_2,\bar{\boldsymbol{0}}_3) -p_{2,{\boldsymbol{r}}}^-g_{2,{\boldsymbol{r}}}^-(\bar{\boldsymbol{t}}_2)}\right\}\\& \,:\!=\,\mathcal{H}_W^{p_{2,{\boldsymbol{r}}}^-g_{2,{\boldsymbol{r}}}^-(\bar{\boldsymbol{t}}_2)}\left(\prod_{i=1}^{k_2}[0, a_{i,{\boldsymbol{r}}}^+\lambda]\right)\end{align*}

and by the dominated convergence theorem, it follows that

\begin{align*}\lim_{\epsilon\rightarrow 0}\int_{\bar{\boldsymbol{t}}_1\in[0,\infty)^{k_1-k_0}} {\mathrm{e}}^{-(1-\epsilon)p_{1,{\boldsymbol{r}}}^-g_1(\bar{\boldsymbol{t}}_1)}\,{\mathrm{d}}\bar{\boldsymbol{t}}_1=\int_{\bar{\boldsymbol{t}}_1\in[0,\infty)^{k_1-k_0}} {\mathrm{e}}^{-p_{1,{\boldsymbol{r}}}^-g_1(\bar{\boldsymbol{t}}_1)}\,{\mathrm{d}}\bar{\boldsymbol{t}}_1.\end{align*}

Hence, letting $\epsilon\rightarrow 0$ in (A.2), we have

(A.3) \begin{eqnarray}\sum_{\boldsymbol{l}\in\mathcal{L}_{2}(u)}\mathbf{P}_u\left(\mathcal{D}_{u}(\boldsymbol{l})\right) \leq \frac{\mathcal{H}_W^{p_{2,{\boldsymbol{r}}}^-g_{2,{\boldsymbol{r}}}^- (\bar{\boldsymbol{t}}_2)}(\prod_{i=1}^{k_2}[0, a_{i,{\boldsymbol{r}}}^+\lambda])}{\lambda^{k_1}}v^{k_0}\Theta^-(u),\quad u\rightarrow\infty,\end{eqnarray}

where

\begin{align*}\Theta^\pm(u)\,:\!=\,\Psi(u)u^{\sum_{i=1}^{k_1}{2}/{\alpha_i} -\sum_{i=k+1}^{k_1}{2}/{\beta_i}}\int_{\bar{\boldsymbol{t}}_1 \in[0,\infty)^{k_1-k}} {\mathrm{e}}^{-p_{1,{\boldsymbol{r}}}^\pm g_1(\bar{\boldsymbol{t}}_1)}\,{\mathrm{d}}\bar{\boldsymbol{t}}_1.\end{align*}

A.2. Proof of (4.8)

For sufficiently large u,

\begin{eqnarray*}\sum_{\boldsymbol{l}\in\mathcal{L}_{3}(u)}\mathbf{P}_u\left(\mathcal{D}_{u}(\boldsymbol{l})\right)\leq \sum_{\boldsymbol{l}\in\mathcal{L}_{3}(u)}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in\mathcal{D}_{u}(\boldsymbol{l})}\overline{X}(\boldsymbol{t}) >u_{\boldsymbol{l}_1,\boldsymbol{l}_2}^{-\epsilon} \right \} =\sum_{\boldsymbol{l}\in\mathcal{L}_{3}(u)}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in E}\widetilde{X}_{u,\boldsymbol{l}}(\boldsymbol{t}) >u_{\boldsymbol{l}_1,\boldsymbol{l}_2}^{-\epsilon} \right \} ,\end{eqnarray*}

where

\begin{eqnarray*}\widetilde{X}_{u,\boldsymbol{l}}(\boldsymbol{t})=\overline{X}(u^{-2/\alpha_1}(l_1\lambda+t_1),\dots, u^{-2/\alpha_n}(l_n\lambda+t_n)),\qquad E=[0,\lambda]^{k_2}\times [0,\epsilon]^{n-k_2},\end{eqnarray*}
\begin{align*}u_{\boldsymbol{l}_1,\boldsymbol{l}_2}^{-\epsilon}=u\left(1+(1-\epsilon)\inf_{\bar{\boldsymbol{t}}_1\in [\boldsymbol{l}_1, \boldsymbol{l}_1+1]}g_1(u^{-2/\boldsymbol{\alpha}_1}\lambda\bar{\boldsymbol{t}}_1)+(1-\epsilon)\inf_{\bar{\boldsymbol{t}}_2\in [\boldsymbol{l}_2, \boldsymbol{l}_2+1]}g_2(u^{-2/\boldsymbol{\alpha}_2}\lambda\bar{\boldsymbol{t}}_2)\right).\end{align*}

Let $Z_u(\boldsymbol{t})$ be a homogeneous Gaussian random field with unit variance and correlation function satisfying

(A.4) \begin{eqnarray}r_u(\boldsymbol{s},\boldsymbol{t})={\mathrm{e}}^{-u^{-2}2\mathcal{Q}_2\sum_{i=1}^n \left\lvert s_i-t_i \right\rvert^{\alpha_i}}.\end{eqnarray}
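As an aside (not needed for the proof), kernels of the powered-exponential form (A.4) are positive definite for $\alpha_i\in(0,2]$, so a homogeneous field $Z_u$ with this correlation indeed exists. The following sketch checks this numerically for hypothetical parameter values ($n=2$, $\boldsymbol{\alpha}=(1.0,1.5)$, and $c=2$ playing the role of $u^{-2}2\mathcal{Q}_2$): the Gram matrix of the kernel on random points admits a Cholesky factorization, hence is (numerically) positive definite.

```python
import math
import random

random.seed(7)

# hypothetical parameters: n = 2, alpha = (1.0, 1.5), c stands for u^{-2} * 2 * Q_2
alpha = (1.0, 1.5)
c = 2.0

def kernel(s, t):
    # r(s, t) = exp(-c * sum_i |s_i - t_i|^{alpha_i}), cf. (A.4)
    return math.exp(-c * sum(abs(si - ti) ** a for si, ti, a in zip(s, t, alpha)))

pts = [(random.random(), random.random()) for _ in range(20)]
K = [[kernel(p, q) for q in pts] for p in pts]

def cholesky(A, jitter=1e-10):
    # plain Cholesky; math.sqrt raises ValueError if A + jitter*I is not positive definite
    m = len(A)
    L = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] + jitter - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

L = cholesky(K)
print(all(L[i][i] > 0 for i in range(len(L))))  # True if K is (numerically) positive definite
```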

According to (2.4) in assumption (A1), applying Slepian’s inequality (see [2, Theorem 2.2.1]), we find that, for sufficiently large u,

\begin{align*}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in E}\widetilde{X}_{u,\boldsymbol{l}}(\boldsymbol{t}) >u_{\boldsymbol{l}_1,\boldsymbol{l}_2}^{-\epsilon} \right \} \leq \mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in E}Z_u(\boldsymbol{t}) >u_{\boldsymbol{l}_1,\boldsymbol{l}_2}^{-\epsilon} \right \} , \quad \boldsymbol{l}\in \mathcal{L}_3(u).\end{align*}

Similarly as in the proof of (A.1), we have

(A.5) \begin{eqnarray}\lim_{u\rightarrow\infty}\sup\nolimits_{\boldsymbol{l}\in \mathcal{L}_3(u)}\left|\frac{\mathbb{P} \{\sup\nolimits_{\boldsymbol{t}\in E}Z_u(\boldsymbol{t})>u_{\boldsymbol{l}_1,\boldsymbol{l}_2}^{-\epsilon} \} }{\Psi(u_{\boldsymbol{l}_1,\boldsymbol{l}_2}^{-\epsilon})}-\mathcal{J}(E)\right|=0,\end{eqnarray}

where

\begin{eqnarray*}\mathcal{J}(E)=\left(\prod_{i=1}^{k_2}\mathcal{H}_{B^{\alpha_i}}[0,(2\mathcal{Q}_2)^{1/\alpha_i}\lambda]\right)\left(\prod_{i=k_2+1}^n\mathcal{H}_{B^{\alpha_i}}[0,(2\mathcal{Q}_2)^{1/\alpha_i}\epsilon]\right).\end{eqnarray*}

Hence, using the above asymptotics and (2.7) in assumption (A2),

\begin{align*}&\sum_{\boldsymbol{l}\in\mathcal{L}_{3}(u)} \mathbf{P}_u\left(\mathcal{D}_{u}(\boldsymbol{l})\right)\\&\qquad\leq \sum_{\boldsymbol{l}\in\mathcal{L}_{3}(u)}\mathcal{J}(E)\Psi(u_{\boldsymbol{l}_1,\boldsymbol{l}_2}^{-\epsilon})\\&\qquad\leq \mathcal{J}(E)\Psi(u)\sum_{\boldsymbol{l}\in\mathcal{L}_{3}(u)}{\mathrm{e}}^{-(1-\epsilon)\inf_{\bar{\boldsymbol{t}}_1\in [\boldsymbol{l}_1, \boldsymbol{l}_1+1]}u^2g_1(u^{-2/\boldsymbol{\alpha}_1}\lambda\bar{\boldsymbol{t}}_1)-(1-\epsilon)\inf_{\bar{\boldsymbol{t}}_2\in [\boldsymbol{l}_2, \boldsymbol{l}_2+1]}u^2g_2(u^{-2/\boldsymbol{\alpha}_2}\lambda\bar{\boldsymbol{t}}_2)}\\&\qquad\leq\mathcal{J}(E)\Psi(u)\left(\prod_{i=1}^{k_0}\frac{vu^{2/\alpha_i}}{\lambda}\right)\sum_{i=k_0+1}^{k_1}\sum_{l_i=0}^{M_i(u)} {\mathrm{e}}^{-(1-\epsilon)\inf_{\bar{\boldsymbol{t}}_1\in [\boldsymbol{l}_1, \boldsymbol{l}_1+1]}g_1(u^{2/\boldsymbol{\beta}_1-2/\boldsymbol{\alpha}_1}\lambda\bar{\boldsymbol{t}}_1)}\\&\hskip30pt \times \sum_{l_{k_1+1}^2+\cdots+l_{k_2}^2\geq 1, l_i\geq 0, k_1+1\leq i\leq k_2}{\mathrm{e}}^{-(1-\epsilon)\inf_{\bar{\boldsymbol{t}}_2\in [\boldsymbol{l}_2, \boldsymbol{l}_2+1]}g_2(u^{2/\boldsymbol{\beta}_2-2/\boldsymbol{\alpha}_2}\lambda\bar{\boldsymbol{t}}_2)}.\end{align*}

Moreover, a direct calculation shows that

\begin{eqnarray*}&&\sum_{i=k_0+1}^{k_1}\sum_{l_i=0}^{M_i(u)} {\mathrm{e}}^{-(1-\epsilon)\inf_{\bar{\boldsymbol{t}}_1\in [\boldsymbol{l}_1, \boldsymbol{l}_1+1]}g_1(u^{2/\boldsymbol{\beta}_1-2/\boldsymbol{\alpha}_1}\lambda\bar{\boldsymbol{t}}_1)}\\&&\qquad \sim u^{\sum_{i=k_0+1}^{k_1}({2}/{\alpha_i}-{2}/{\beta_i})}\lambda^{k_0-k_1}\int_{\bar{\boldsymbol{t}}_1\in[0,\infty)^{k_1-k_0}} {\mathrm{e}}^{-(1-\epsilon)g_1(\bar{\boldsymbol{t}}_1)}\,{\mathrm{d}}\bar{\boldsymbol{t}}_1, \quad u\rightarrow\infty.\end{eqnarray*}

Given assumption (2.7) and the fact that $\boldsymbol{\alpha}_2 = \boldsymbol{\beta}_2$ , we find that, for $\lambda > 1$ ,

\begin{align*}&\sum_{l_{k_1+1}^2+\cdots+l_{k_2}^2\geq 1, l_i\geq 0, k_1+1\leq i\leq k_2}{\mathrm{e}}^{-(1-\epsilon)\inf_{\bar{\boldsymbol{t}}_2\in [\boldsymbol{l}_2, \boldsymbol{l}_2+1]}g_2(u^{2/\boldsymbol{\beta}_2-2/\boldsymbol{\alpha}_2}\lambda\bar{\boldsymbol{t}}_2)}\\&\qquad \leq \sum_{l_{k_1+1}^2+\cdots+l_{k_2}^2\geq 1, l_i\geq 0, k_1+1\leq i\leq k_2}{\mathrm{e}}^{-(1-\epsilon)c_{2,1}\sum_{i=k_1+1}^{k_2}(l_i\lambda)^{\beta_i}}\\&\qquad\leq \mathbb{Q}_3{\mathrm{e}}^{-\mathbb{Q}_2\lambda^{\beta^*}},\end{align*}

where $\beta^*=\min_{i=k_1+1}^{k_2}(\beta_i).$ In addition,

\begin{eqnarray*}\lim_{\epsilon\rightarrow 0}\mathcal{J}(E)=\prod_{i=1}^{k_2}\mathcal{H}_{B^{\alpha_i}}[0, (2\mathcal{Q}_2)^{1/\alpha_i}\lambda]\end{eqnarray*}

and, for $\lambda>1$ ,

\begin{eqnarray*}\prod_{i=1}^{k_2}\mathcal{H}_{B^{\alpha_i}}[0,(2\mathcal{Q}_2)^{1/\alpha_i}\lambda]\leq \mathbb{Q}_3\lambda^{k_2}.\end{eqnarray*}

Thus, for $\lambda>1$ ,

(A.6) \begin{eqnarray}\sum_{\boldsymbol{l}\in\mathcal{L}_{3}(u)}\mathbf{P}_u\left(\mathcal{D}_{u}(\boldsymbol{l})\right) \leq \mathbb{Q}_3\lambda^{k_2-k_1}{\mathrm{e}}^{-\mathbb{Q}_2\lambda^{\beta^*}}v^{k_0}\Theta^-(u),\quad u\rightarrow\infty.\end{eqnarray}
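As a numerical illustration of the lattice-sum bound used above (with hypothetical values $c=0.5$ in place of $(1-\epsilon)c_{2,1}$ and $\boldsymbol{\beta}_2=(1.2,1.8)$, so $\beta^*=1.2$), the sum over $\boldsymbol{l}\neq\boldsymbol{0}$ is indeed of order ${\mathrm{e}}^{-c\lambda^{\beta^*}}$ up to a bounded factor:

```python
import math

# hypothetical parameters for the lattice-sum bound
c = 0.5
betas = (1.2, 1.8)
beta_star = min(betas)

def lattice_sum(lam, cutoff=60):
    # sum over l in Z_{>=0}^2 \ {0} of exp(-c * ((l1*lam)^{beta_1} + (l2*lam)^{beta_2}))
    total = 0.0
    for l1 in range(cutoff):
        for l2 in range(cutoff):
            if l1 == l2 == 0:
                continue
            total += math.exp(-c * ((l1 * lam) ** betas[0] + (l2 * lam) ** betas[1]))
    return total

# the ratio lattice_sum(lam) / exp(-c * lam^{beta*}) stays bounded as lam grows
ratios = [lattice_sum(lam) * math.exp(c * lam ** beta_star) for lam in (1.5, 2.0, 3.0, 4.0)]
print(ratios)  # bounded, decreasing towards 1
```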

A.3. Proof of (4.16) and (4.17)

Note that $g_{2,{\boldsymbol{r}}}^+(\bar{\boldsymbol{t}}_2)\in \mathcal{G}$ for ${\boldsymbol{r}}\in V^+$ and $p_2(\tilde{\boldsymbol{z}})g_2(\boldsymbol{a}_2^{-1}(\tilde{\boldsymbol{z}})\bar{\boldsymbol{t}}_2)\in \mathcal{G}$ for $\tilde{\boldsymbol{z}}\in \mathcal{M}$ , with c and $\boldsymbol{\beta}_{2}$ fixed. Thus, (A.11) implies that, for any $\epsilon>0$ , there exists $\lambda_0>0$ such that, for all $\lambda>\lambda_0$ , ${\boldsymbol{r}}\in V^+$ , and $\tilde{\boldsymbol{z}}\in\mathcal{M}$ ,

(A.7) \begin{align}&\left|\mathcal{H}_W^{p_{2,{\boldsymbol{r}}}^+g_{2,{\boldsymbol{r}}}^+(\bar{\boldsymbol{t}}_2)}- \mathcal{H}_W^{p_{2,{\boldsymbol{r}}}^+g_{2,{\boldsymbol{r}}}^+(\bar{\boldsymbol{t}}_2)}([0,\lambda]^{k_2})\lambda^{-k_1}\right|<\epsilon,\nonumber\\&\left|\mathcal{H}_W^{p_2(\tilde{\boldsymbol{z}})g_2(\boldsymbol{a}_2^{-1}(\tilde{\boldsymbol{z}})\bar{\boldsymbol{t}}_2)}-\mathcal{H}_W^{p_2(\tilde{\boldsymbol{z}})g_2(\boldsymbol{a}_2^{-1}(\tilde{\boldsymbol{z}})\bar{\boldsymbol{t}}_2)}([0,\lambda]^{k_2})\lambda^{-k_1}\right|<\epsilon.\end{align}

Hence, for $\lambda>\lambda_0$ , it follows that, as $u\rightarrow\infty$ ,

\begin{align*}&\frac{\sum_{{\boldsymbol{r}}\in V^-}\mathbf{P}_u\left(E_{1,{\boldsymbol{r}}}(u)\right)}{\Theta_1(u)}\\&\quad\geq \sum_{{\boldsymbol{r}}\in V^-}\mathcal{H}_W^{p_{2,{\boldsymbol{r}}}^+g_{2,{\boldsymbol{r}}}^+(\bar{\boldsymbol{t}}_2)}\prod_{i=1}^{k_1} a_{i,{\boldsymbol{r}}}^-\int_{\bar{\boldsymbol{t}}_1\in[0,\infty)^{k_1-k}} {\mathrm{e}}^{-p_{1,{\boldsymbol{r}}}^+ g_1(\bar{\boldsymbol{t}}_1)}\,{\mathrm{d}}\bar{\boldsymbol{t}}_1\,v^{k_0}\nonumber\\&\quad\geq\int_{ \mathcal{M}}\sum_{{\boldsymbol{r}}\in V^-}\bigg((\mathcal{H}_W^{p_{2,{\boldsymbol{r}}}^+g_{2,{\boldsymbol{r}}}^+(\bar{\boldsymbol{t}}_2)}([0,\lambda]^{k_2})\lambda^{-k_1}-\epsilon)\prod_{i=1}^{k_1} a_{i,{\boldsymbol{r}}}^-\int_{\bar{\boldsymbol{t}}_1\in[0,\infty)^{k_1-k}} {\mathrm{e}}^{-p_{1,{\boldsymbol{r}}}^+ g_1(\bar{\boldsymbol{t}}_1)}\,{\mathrm{d}}\bar{\boldsymbol{t}}_1\bigg) \mathbb{I}_{\mathcal{M}_{{\boldsymbol{r}}}}(\tilde{\boldsymbol{z}})\,{\mathrm{d}}\tilde{\boldsymbol{z}}.\end{align*}

Note that, for any fixed $\tilde{\boldsymbol{z}}\in\mathcal{M}^o$ , where $\mathcal{M}^o\subset \mathcal{M}$ is the interior of $\mathcal{M}$ ,

\begin{align*}&\lim_{v\rightarrow 0}\sum_{{\boldsymbol{r}}\in V^-}\left((\mathcal{H}_W^{p_{2,{\boldsymbol{r}}}^+g_{2,{\boldsymbol{r}}}^+(\bar{\boldsymbol{t}}_2)}([0,\lambda]^{k_2})\lambda^{-k_1}-\epsilon)\prod_{i=1}^{k_1} a_{i,{\boldsymbol{r}}}^-\int_{\bar{\boldsymbol{t}}_1\in[0,\infty)^{k_1-k}} {\mathrm{e}}^{-p_{1,{\boldsymbol{r}}}^+ g_1(\bar{\boldsymbol{t}}_1)}\,{\mathrm{d}}\bar{\boldsymbol{t}}_1\right) \mathbb{I}_{\mathcal{M}_{{\boldsymbol{r}}}}(\tilde{\boldsymbol{z}})\\&\qquad = (\mathcal{H}_W^{p_2(\tilde{\boldsymbol{z}})g_2(\boldsymbol{a}_2^{-1}(\tilde{\boldsymbol{z}})\bar{\boldsymbol{t}}_2)}([0,\lambda]^{k_2})\lambda^{-k_1}-\epsilon) \left(\prod_{i=1}^{k_1}a_i(\tilde{\boldsymbol{z}})\right) \int_{\bar{\boldsymbol{t}}_1\in[0,\infty)^{k_1-k}} {\mathrm{e}}^{-p_1(\tilde{\boldsymbol{z}})g_1(\bar{\boldsymbol{t}}_1)}\,{\mathrm{d}}\bar{\boldsymbol{t}}_1\\&\qquad \geq (\mathcal{H}_W^{p_2(\tilde{\boldsymbol{z}})g_2(\boldsymbol{a}_2^{-1}(\tilde{\boldsymbol{z}})\bar{\boldsymbol{t}}_2)}-2\epsilon) \left(\prod_{i=1}^{k_1}a_i(\tilde{\boldsymbol{z}})\right) \int_{\bar{\boldsymbol{t}}_1\in[0,\infty)^{k_1-k}} {\mathrm{e}}^{-p_1(\tilde{\boldsymbol{z}})g_1(\bar{\boldsymbol{t}}_1)}\,{\mathrm{d}}\bar{\boldsymbol{t}}_1\\&\qquad \rightarrow \mathcal{H}_W^{p_2(\tilde{\boldsymbol{z}})g_2(\boldsymbol{a}_2^{-1}(\tilde{\boldsymbol{z}})\bar{\boldsymbol{t}}_2)} \left(\prod_{i=1}^{k_1}a_i(\tilde{\boldsymbol{z}})\right) \int_{\bar{\boldsymbol{t}}_1\in[0,\infty)^{k_1-k}} {\mathrm{e}}^{-p_1(\tilde{\boldsymbol{z}})g_1(\bar{\boldsymbol{t}}_1)}\,{\mathrm{d}}\bar{\boldsymbol{t}}_1 \quad \text{as } \epsilon \rightarrow 0.\end{align*}

Moreover, it is clear that there exists $\mathbb{Q}_8<\infty$ such that, for any $\lambda>1$ and $v>0$ ,

\begin{align*}\left((\mathcal{H}_W^{p_{2,{\boldsymbol{r}}}^+g_{2,{\boldsymbol{r}}}^+(\bar{\boldsymbol{t}}_2)}([0,\lambda]^{k_2})\lambda^{-k_1}-\epsilon)\prod_{i=1}^{k_1} a_{i,{\boldsymbol{r}}}^-\int_{\bar{\boldsymbol{t}}_1\in[0,\infty)^{k_1-k}} {\mathrm{e}}^{-p_{1,{\boldsymbol{r}}}^+ g_1(\bar{\boldsymbol{t}}_1)}\,{\mathrm{d}}\bar{\boldsymbol{t}}_1\right) \mathbb{I}_{\mathcal{M}_{{\boldsymbol{r}}}}(\tilde{\boldsymbol{z}})<\mathbb{Q}_8.\end{align*}

Consequently, the dominated convergence theorem gives

(A.8) \begin{align}&\liminf_{u\rightarrow\infty}\frac{\sum_{{\boldsymbol{r}}\in V^-}\mathbf{P}_u\left(E_{1,{\boldsymbol{r}}}(u)\right)}{\Theta_1(u)}\nonumber\\&\qquad \geq \int_{\mathcal{M}}\left(\mathcal{H}_W^{p_2(\tilde{\boldsymbol{z}})g_2(\boldsymbol{a}_2^{-1}(\tilde{\boldsymbol{z}})\bar{\boldsymbol{t}}_2)} \left(\prod_{i=1}^{k_1}a_i(\tilde{\boldsymbol{z}})\right) \int_{\bar{\boldsymbol{t}}_1\in[0,\infty)^{k_1-k}} {\mathrm{e}}^{-p_1(\tilde{\boldsymbol{z}})g_1(\bar{\boldsymbol{t}}_1)} \,{\mathrm{d}}\bar{\boldsymbol{t}}_1\right)\,{\mathrm{d}}\tilde{\boldsymbol{z}}.\end{align}

Next, we focus on the double-sum term in (4.15). For ${\boldsymbol{r}}\in V^-, {\boldsymbol{r}}'\in V^-, M_{{\boldsymbol{r}}}\cap M_{{\boldsymbol{r}}'}=\emptyset$ , we have

\begin{align*}\mathbf{P}_u\left(E_{1,{\boldsymbol{r}}}(u), E_{1,{\boldsymbol{r}}'}(u)\right)\leq \mathbb{P}\left(\sup\nolimits_{\boldsymbol{s}\in E_{1,{\boldsymbol{r}}}, \boldsymbol{t}\in E_{1,{\boldsymbol{r}}'}} X(\boldsymbol{s})+X(\boldsymbol{t})>2u\right).\end{align*}

By (2.5) in assumption (A1), there exists $0<\delta<1$ such that, for all ${\boldsymbol{r}}\in V^-, {\boldsymbol{r}}'\in V^-, M_{{\boldsymbol{r}}}\cap M_{{\boldsymbol{r}}'}=\emptyset$ ,

\begin{align*}\sup\nolimits_{\boldsymbol{s}\in E_{1,{\boldsymbol{r}}}, \boldsymbol{t}\in E_{1,{\boldsymbol{r}}'}} {\mathrm{Var}}(X(\boldsymbol{s})+X(\boldsymbol{t}))<4-\delta.\end{align*}

According to the Borell-TIS inequality (see, for example, [2, Theorem 2.1.1]), for $u>a$ , we have

\begin{align*}\mathbb{P}\left(\sup\nolimits_{\boldsymbol{s}\in E_{1,{\boldsymbol{r}}}, \boldsymbol{t}\in E_{1,{\boldsymbol{r}}'}} X(\boldsymbol{s}) + X(\boldsymbol{t}) > 2u\right) \leq {\mathrm{e}}^{-{4(u-a)^2}/{2(4-\delta)}},\end{align*}

where $a={\mathbb{E}(\!\sup\nolimits_{\boldsymbol{s}\in \mathcal{A}, \boldsymbol{t}\in \mathcal{A}} X(\boldsymbol{s})+X(\boldsymbol{t}))}/{2}=\mathbb{E}(\sup\nolimits_{\boldsymbol{t}\in \mathcal{A}}X(\boldsymbol{t}))$ . Consequently,

\begin{align*}\sum_{{\boldsymbol{r}}, {\boldsymbol{r}}'\in V^-, M_{{\boldsymbol{r}}}\cap M_{{\boldsymbol{r}}'}= \emptyset}\mathbf{P}_u\left(E_{1,{\boldsymbol{r}}}(u), E_{1,{\boldsymbol{r}}'}(u)\right)\leq \mathbb{Q} {\mathrm{e}}^{-{4(u-a)^2}/{2(4-\delta)}}=o(\Theta_1(u)),\quad u\to\infty.\end{align*}

For ${\boldsymbol{r}}, {\boldsymbol{r}}'\in V^-, {\boldsymbol{r}}\neq {\boldsymbol{r}}', M_{{\boldsymbol{r}}}\cap M_{{\boldsymbol{r}}'}\neq\emptyset$ ,

\begin{align*}\mathbf{P}_u\left(E_{1,{\boldsymbol{r}}}(u), E_{1,{\boldsymbol{r}}'}(u)\right)=\mathbf{P}_u\left(E_{1,{\boldsymbol{r}}}(u)\right)+\mathbf{P}_u\left(E_{1,{\boldsymbol{r}}'}(u)\right)-\mathbf{P}_u\left(E_{1,{\boldsymbol{r}}}(u)\cup E_{1,{\boldsymbol{r}}'}(u)\right).\end{align*}

In light of (A.7) and (A.8), we have

\begin{align*}\sum_{{\boldsymbol{r}}, {\boldsymbol{r}}'\in V^-, {\boldsymbol{r}}\neq {\boldsymbol{r}}', M_{{\boldsymbol{r}}}\cap M_{{\boldsymbol{r}}'}\neq\emptyset}\mathbf{P}_u\left(E_{1,{\boldsymbol{r}}}(u), E_{1,{\boldsymbol{r}}'}(u)\right)=o(\Theta_1(u)), \quad u\to\infty, v\to 0.\end{align*}

Therefore, we have

\begin{align*}\sum_{{\boldsymbol{r}}, {\boldsymbol{r}}'\in V^-, {\boldsymbol{r}}\neq {\boldsymbol{r}}'}\mathbf{P}_u\left(E_{1,{\boldsymbol{r}}}(u), E_{1,{\boldsymbol{r}}'}(u)\right)=o(\Theta_1(u)), \quad u\to\infty, v\to 0,\end{align*}

implying that

\begin{align*}\liminf_{u\rightarrow\infty}\frac{\mathbf{P}_u\left(E_{1}(u)\right)}{\Theta_1(u)}\geq \int_{\mathcal{M}}\left(\mathcal{H}_W^{p_2(\tilde{\boldsymbol{z}})g_2(\boldsymbol{a}_2^{-1}(\tilde{\boldsymbol{z}})\bar{\boldsymbol{t}}_2)} \left(\prod_{i=1}^{k_1}a_i(\tilde{\boldsymbol{z}})\right) \int_{\bar{\boldsymbol{t}}_1\in[0,\infty)^{k_1-k}} {\mathrm{e}}^{-p_1(\tilde{\boldsymbol{z}})g_1(\bar{\boldsymbol{t}}_1)} \,{\mathrm{d}}\bar{\boldsymbol{t}}_1\right)\,{\mathrm{d}}\tilde{\boldsymbol{z}}.\end{align*}

Similarly, we can obtain, as $v\rightarrow 0$ ,

\begin{eqnarray*}\limsup_{u\rightarrow\infty}\frac{\mathbf{P}_u\left(E_2(u)\right)}{\Theta_1(u)}\leq\int_{\mathcal{M}}\left(\mathcal{H}_W^{p_2(\tilde{\boldsymbol{z}})g_2(\boldsymbol{a}_2^{-1}(\tilde{\boldsymbol{z}})\bar{\boldsymbol{t}}_2)} \left(\prod_{i=1}^{k_1}a_i(\tilde{\boldsymbol{z}})\right) \int_{\bar{\boldsymbol{t}}_1\in[0,\infty)^{k_1-k}} {\mathrm{e}}^{-p_1(\tilde{\boldsymbol{z}})g_1(\bar{\boldsymbol{t}}_1)}\,{\mathrm{d}}\bar{\boldsymbol{t}}_1\right) \,{\mathrm{d}}\tilde{\boldsymbol{z}}.\end{eqnarray*}

A.4. Existence of $\mathcal{H}_W^{g_{2}(\bar{\boldsymbol{t}}_2)}$

We follow an idea similar to that used in the proofs of Lemmas 7.1 and 8.3 in [31]; thus, we present only the main steps of the argument. We assume that

\begin{align*}\boldsymbol{a}(\tilde{\boldsymbol{z}})=1,\qquad p_j(\tilde{\boldsymbol{z}})=1, \quad j=1,2,3, \, \tilde{\boldsymbol{z}}\in \mathcal{M}_{{\boldsymbol{r}}}.\end{align*}

Dividing (4.9) and (4.13) by $v^{k_0}\Theta^-(u)$ and letting $u\rightarrow\infty$ , we derive that

\begin{align*}\limsup_{\lambda\rightarrow\infty}\frac{\mathcal{H}_W^{g_{2}(\bar{\boldsymbol{t}}_2)}([0,\lambda]^{k_2})}{\lambda^{k_1}}\leq\liminf_{\lambda\rightarrow\infty}\frac{\mathcal{H}_W^{g_{2}(\bar{\boldsymbol{t}}_2)}([0,\lambda]^{k_2})}{\lambda^{k_1}}<\infty.\end{align*}

The positivity of the above limit follows from the same arguments as in [31]. Therefore,

(A.9) \begin{eqnarray}\mathcal{H}_W^{g_{2}(\bar{\boldsymbol{t}}_2)}\,:\!=\,\lim_{\lambda\rightarrow\infty} \frac{\mathcal{H}_W^{g_{2}(\bar{\boldsymbol{t}}_2)}([0,\lambda]^{k_2})}{\lambda^{k_1}}\in (0,\infty).\end{eqnarray}

Moreover, using (4.9) and (4.13), we have, for $\lambda>1$ ,

(A.10) \begin{eqnarray}\left|\frac{\mathcal{H}_W^{g_{2}(\bar{\boldsymbol{t}}_2)}([0,\lambda]^{k_2})}{\lambda^{k_1}}-\mathcal{H}_W^{g_{2}(\bar{\boldsymbol{t}}_2)}\right| \leq \mathbb{Q}_7(\lambda^{-1/2}+\lambda^{2k_2-k_1}{\mathrm{e}}^{-\mathbb{Q}_{6}\lambda^{\alpha^*}}+\lambda^{k_2-k_1}{\mathrm{e}}^{-\mathbb{Q}_2\lambda^{\beta^*}}).\end{eqnarray}

Let $\mathcal{G}\,:\!=\, \{g_2\colon g_2\ \text{is continuous},\ ug_2(\bar{\boldsymbol{t}}_2)=g_2(u^{1/{ \boldsymbol{\beta}_{2}}}\bar{\boldsymbol{t}}_{2}),\ u>0,\ \inf_{\sum_{i=k_{1}+1}^{k_2}|t_i|^{\beta_i}=1}g_2(\bar{\boldsymbol{t}}_2)>c>0\},$ where c and $\boldsymbol{\beta}_{2}$ are fixed. For any $g_2\in\mathcal{G}$ , (4.7) and (4.8)–(4.13) are still valid. Hence, (A.10) also holds. This implies that, for any $\lambda>1$ ,

(A.11) \begin{align} \sup\nolimits_{g_2\in \mathcal{G}}\left|\frac{\mathcal{H}_W^{g_{2}(\bar{\boldsymbol{t}}_2)}([0,\lambda]^{k_2})}{\lambda^{k_1}}-\mathcal{H}_W^{g_{2}(\bar{\boldsymbol{t}}_2)}\right| \leq \mathbb{Q}_7(\lambda^{-1/2}+\lambda^{2k_2-k_1}{\mathrm{e}}^{-\mathbb{Q}_{6}\lambda^{\alpha^*}}+\lambda^{k_2-k_1}{\mathrm{e}}^{-\mathbb{Q}_2\lambda^{\beta^*}}).\end{align}

Appendix B. Proof of Proposition 3.1

For $Z^{\alpha}(\boldsymbol{t})$ introduced in (3.1), we write $\sigma_Z^2$ for the variance of $Z^{\alpha}$ and $r_Z$ for its correlation function. Moreover, let $\sigma_*= \max_{\boldsymbol{t}\in \mathcal{S}_{n}} \sigma_Z(\boldsymbol{t})$ and recall that $\mathcal{S}_{n}=\{0=t_0\leq t_1\leq\cdots\leq t_n\leq t_{n+1}=1\}$ . The expansions of $\sigma_Z$ and $r_Z$ are given in the following lemma, which is crucial for the proof of Proposition 3.1. We omit its proof, as it requires only standard but tedious calculations.

Lemma B.1. (i) For $\alpha \in (0,1)$ , the standard deviation $\sigma_Z$ attains its maximum on $\mathcal{S}_{n}$ at a unique point $\boldsymbol{z}_0= (z_1,\ldots, z_{n})\in \mathcal{S}_{n}$ with $z_i={\sum_{j=1}^ia_j^{{2}/{(1-\alpha)}}}/{\sum_{j=1}^{n+1}a_j^{{2}/{(1-\alpha)}}},\ i=1,\ldots, n,$ and its maximum value is $\sigma_*=(\sum_{i=1}^{n+1}a_i^{{2}/{(1-\alpha)}})^{{(1-\alpha)}/{2}}.$ Moreover,

(B.1) \begin{equation}\lim_{\delta\rightarrow0}\underset{\lvert \boldsymbol{t}-\boldsymbol{z}_0 \rvert\leq\delta}{\sup\nolimits_{\boldsymbol{t}\in\mathcal{S}_{n}}} \bigg| \frac{1-{\sigma_Z(\boldsymbol{t})}/{\sigma_*}}{({\alpha(1-\alpha)(\sum_{i=1}^{n+1}a_i^{{2}/{(1-\alpha)}})}/{4}) \sum_{i=1}^{n+1}a_i^{{2}/{(\alpha-1)}} ((t_i-z_i)-(t_{i-1}-z_{i-1}))^2}-1 \bigg|=0,\end{equation}

with $z_0\,:\!=\,0, z_{n+1}\,:\!=\,1$ , and

(B.2) \begin{eqnarray}\lim_{\delta\rightarrow 0}\underset{\left\lvert \boldsymbol{s}-\boldsymbol{z}_0 \right\rvert,\left\lvert \boldsymbol{t}-\boldsymbol{z}_0 \right\rvert <\delta }{\sup\nolimits_{\boldsymbol{s}\neq \boldsymbol{t}, \boldsymbol{s}, \boldsymbol{t}\in\mathcal{S}_{n}}} \bigg\lvert \frac{1-r_Z(\boldsymbol{s},\boldsymbol{t})}{({1}/{2\sigma_*^2})(\sum_{i=1}^{n}\left(a_i^2+a_{i+1}^2\right) \lvert s_i-t_i \rvert^\alpha)}-1 \bigg\rvert=0.\end{eqnarray}

(ii) For $\alpha=1$ and $\mathfrak{m}$ defined in (3.3): if $\mathfrak{m}=n+1$ then $\sigma_Z(\boldsymbol{t})\equiv1$ for $\boldsymbol{t} \in \mathcal{S}_{n}$ , while if $\mathfrak{m} < n+1$ then the function $\sigma_Z$ attains its maximum, equal to 1, on $\mathcal{S}_{n}$ at $\mathcal{M}=\{\boldsymbol{t}\in\mathcal{S}_{n}\colon \sum_{j\in \mathcal{N}}\lvert t_j-t_{j-1} \rvert=1\}$ and satisfies

(B.3) \begin{eqnarray}\lim_{\delta\rightarrow0}\sup\nolimits_{\boldsymbol{z}\in \mathcal{M}}\underset{\left\lvert \boldsymbol{t}-\boldsymbol{z} \right\rvert\leq\delta}{\sup\nolimits_{\boldsymbol{t}\in\mathcal{S}_{n}}}\bigg\lvert \frac{1-\sigma_Z(\boldsymbol{t})}{({1}/{2})\sum_{j\in\mathcal{N}^c}(1-a_j^2)\lvert t_j-t_{j-1} \rvert}-1 \bigg\rvert=0.\end{eqnarray}

In addition, for $1\leq \mathfrak{m}\leq n+1$ , we have

(B.4) \begin{align}\lim_{\delta\rightarrow 0}\underset{\boldsymbol{z}\in \mathcal{M}}{\underset{\left\lvert \boldsymbol{s}-\boldsymbol{z} \right\rvert, \left\lvert \boldsymbol{t}-\boldsymbol{z} \right\rvert<\delta }{\sup\nolimits_{\boldsymbol{s}\neq \boldsymbol{t}, \boldsymbol{s}, \boldsymbol{t}\in\mathcal{S}_{n}}}}\left|\frac{1-r_Z(\boldsymbol{s}, \boldsymbol{t})}{({1}/{2})\sum_{i=1}^{n+1}a_i^2\min\left(\left\lvert t_{i-1}-s_{i-1} \right\rvert+\left\lvert t_i-s_i \right\rvert,\left\lvert t_i-t_{i-1} \right\rvert+\left\lvert s_i-s_{i-1} \right\rvert\right)}-1\right|=0.\end{align}

(iii) For $\alpha\in(1,2)$ , the function $\sigma_Z $ attains its maximum on $\mathcal{S}_{n}$ at the $\mathfrak{m}$ points $\boldsymbol{z}^{(j)},\ j\in\mathcal{N}=\{i\colon a_i=1,\ i=1,\ldots, n+1\}$ , where $\boldsymbol{z}^{(j)}=(0,\ldots, 0, 1, 1,\ldots, 1)$ (with the first 1 at the jth coordinate) if $j\in\mathcal{N}$ and $j<n+1$ , and $\boldsymbol{z}^{(n+1)}=(0,\ldots, 0)$ if $n+1\in\mathcal{N}$ . We further have $\sigma_*=1$ and

(B.5) \begin{eqnarray}\lim_{\delta\rightarrow0}\underset{\lvert \boldsymbol{t}-\boldsymbol{z}^{(j)} \rvert\leq\delta}{\sup\nolimits_{\boldsymbol{t}\in\mathcal{S}_{n}}}\bigg\lvert \frac{1-\sigma_Z(\boldsymbol{t})}{({1}/{2})(\alpha \lvert t_j-t_{j-1}-1 \rvert- \sum_{1 \le i \le n+1, i\not= j }a_i^2\lvert t_i-t_{i-1} \rvert^\alpha)}-1 \bigg\rvert=0.\end{eqnarray}

Case 1: $\alpha\in(0,1)$ . From Lemma B.1(i), it follows that $\sigma_Z$ on $\mathcal{S}_{n}$ attains its maximum $\sigma_*$ at the unique point $\boldsymbol{z}_0= (z_1,\ldots, z_{n})$ with

\begin{eqnarray*}z_i=\frac{\sum_{j=1}^ia_j^{{2}/{(1-\alpha)}}}{\sum_{j=1}^{n+1}a_j^{{2}/{(1-\alpha)}}},\quad i=1,\ldots, n.\end{eqnarray*}
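These closed forms can be sanity-checked by brute force. The sketch below (an illustrative aside, not part of the proof) assumes, in line with the Lagrange computation behind Lemma B.1(i), that $\sigma_Z^2(\boldsymbol{t})=\sum_{i=1}^{n+1}a_i^2(t_i-t_{i-1})^\alpha$; the weights and the value of $\alpha$ are hypothetical.

```python
import math

alpha = 0.5
a = [1.0, 0.7, 1.3]  # hypothetical weights a_1, a_2, a_3 (n = 2)

# closed forms from Lemma B.1(i)
S = sum(ai ** (2 / (1 - alpha)) for ai in a)
sigma_star = S ** ((1 - alpha) / 2)
z = [sum(aj ** (2 / (1 - alpha)) for aj in a[: i + 1]) / S for i in range(len(a) - 1)]

def sigma(t1, t2):
    # sigma_Z(t) for n = 2, assuming Var Z = sum_i a_i^2 (t_i - t_{i-1})^alpha
    s = (t1, t2 - t1, 1.0 - t2)
    return math.sqrt(sum(ai ** 2 * si ** alpha for ai, si in zip(a, s)))

# brute-force maximization over a grid on the simplex 0 <= t1 <= t2 <= 1
N = 400
best = max((sigma(i / N, j / N), i / N, j / N)
           for j in range(N + 1) for i in range(j + 1))
print(best[0], sigma_star)      # grid maximum vs closed-form sigma_*
print((best[1], best[2]), z)    # grid argmax vs closed-form (z_1, z_2)
```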

Moreover, from (B.1) we have, for $\boldsymbol{t}\in\mathcal{S}_{n}$ ,

\begin{align*}&1-\frac{\sigma_Z(\boldsymbol{t})}{\sigma_*}\\&\qquad\sim\frac{\alpha(1-\alpha) (\sum_{i=1}^{n+1}a_i^{{2}/{(1-\alpha)}})}{4}\\&\hskip30pt\times \bigg(a_1^{{2}/{(\alpha-1)}}(t_1-z_1)^2+a_{n+1}^{{2}/{(\alpha-1)}}(t_n-z_n)^2+\sum_{i=2}^{n}a_i^{{2}/{(\alpha-1)}}\left((t_i-z_i)-(t_{i-1}-z_{i-1})\right)^2\bigg)\end{align*}

as $\left\lvert \boldsymbol{t}-\boldsymbol{z}_0 \right\rvert\rightarrow 0$ and from (B.2), for $\boldsymbol{t}, \boldsymbol{s}\in\mathcal{S}_{n}$ ,

\begin{eqnarray*}1-r_Z(\boldsymbol{s},\boldsymbol{t})\sim\frac{1}{2\sigma_*^2}\left(\sum_{i=1}^{n}(a_i^2+a_{i+1}^2) \left\lvert s_i-t_i \right\rvert^\alpha\right)\end{eqnarray*}

as $\left\lvert \boldsymbol{s}-\boldsymbol{z}_0 \right\rvert,\left\lvert \boldsymbol{t}-\boldsymbol{z}_0 \right\rvert \rightarrow 0$ . Furthermore, we have

(B.6) \begin{eqnarray}\mathbb{E}\left\{(Z^{\alpha}(\boldsymbol{s})-Z^{\alpha}(\boldsymbol{t}))^2\right\}\leq 4\sum_{i=1}^{n}\left\lvert t_i-s_i \right\rvert^\alpha.\end{eqnarray}

Therefore, by [31, Theorem 8.2] we obtain, as $u\rightarrow\infty$ ,

\begin{eqnarray*}\mathbb{P} \{\sup\nolimits_{\boldsymbol{t}\in\mathcal{S}_{n}}Z^\alpha(\boldsymbol{t})>u \} \sim ( \mathcal{H}_{B^{\alpha}})^n\prod_{i=1}^n\bigg(\frac{a_i^2+a_{i+1}^2} {2\sigma^2_*}\bigg)^{1/\alpha}\bigg(\frac{u}{\sigma_*}\bigg)^{(2/\alpha-1)n}\int_{\mathbb{R}^n}{\mathrm{e}}^{-f({\bf{x}})}\,{\mathrm{d}}{\bf{x}}\Psi\bigg(\frac{u}{\sigma_*}\bigg),\end{eqnarray*}

where

\begin{align*}f({\bf{x}})&=\frac{\alpha(1-\alpha)(\sum_{i=1}^{n+1} a_i^{{2}/{(1-\alpha)}})}{4}\\&\quad\times\bigg(a_1^{{2}/{(\alpha-1)}}x_1^2+a_{n+1}^{{2}/{(\alpha-1)}}x_n^2+\sum_{i=2}^{n}a_i^{{2}/{(\alpha-1)}}(x_i-x_{i-1})^2\bigg), \quad{\bf{x}}\in\mathbb{R}^n.\end{align*}

A direct calculation demonstrates that

\begin{eqnarray*}\int_{\mathbb{R}^n}{\mathrm{e}}^{-f({\bf{x}})}\,{\mathrm{d}}{\bf{x}}=\left(\frac{4\pi}{\alpha(1-\alpha)} \right)^{{n}/{2}}\sigma_*^{-{n}/{(1-\alpha)}}\left(\sum_{j=1}^{n+1}\prod_{i\neq j}a_i^{{2}/{(\alpha-1)}}\right)^{-{1}/{2}}.\end{eqnarray*}
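This evaluation rests on the Gaussian integral $\int_{\mathbb{R}^n}{\mathrm{e}}^{-\boldsymbol{x}^{\top}A\boldsymbol{x}}\,{\mathrm{d}}\boldsymbol{x}=\pi^{n/2}(\det A)^{-1/2}$ together with the determinant identity $\det M=\sum_{j=1}^{n+1}\prod_{i\neq j}b_i$ for the tridiagonal matrix $M$ of the quadratic form $b_1x_1^2+b_{n+1}x_n^2+\sum_{i=2}^{n}b_i(x_i-x_{i-1})^2$, where $b_i=a_i^{2/(\alpha-1)}$. A quick numerical check of the determinant identity (hypothetical weights, $n=3$):

```python
import math

alpha = 0.7
a = [0.9, 1.1, 0.6, 1.4]  # hypothetical weights a_1, ..., a_4 (n = 3)
b = [ai ** (2 / (alpha - 1)) for ai in a]
n = len(a) - 1

# tridiagonal matrix of b_1 x_1^2 + b_{n+1} x_n^2 + sum_{i=2}^n b_i (x_i - x_{i-1})^2
M = [[0.0] * n for _ in range(n)]
for i in range(n):
    M[i][i] = b[i] + b[i + 1]
for i in range(n - 1):
    M[i][i + 1] = M[i + 1][i] = -b[i + 1]

def det(m):
    # Gaussian elimination without pivoting (M is positive definite)
    m = [row[:] for row in m]
    d = 1.0
    for k in range(len(m)):
        d *= m[k][k]
        for r in range(k + 1, len(m)):
            f = m[r][k] / m[k][k]
            for c in range(k, len(m)):
                m[r][c] -= f * m[k][c]
    return d

lhs = det(M)
rhs = sum(math.prod(b[i] for i in range(n + 1) if i != j) for j in range(n + 1))
print(lhs, rhs)  # the two agree
```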

This completes the proof of this case.

Case 2: $\alpha=1$ . First, we consider the case $\mathfrak{m} < n+1$ . Let $k^*=\max\{i\in\mathcal{N}\}$ and denote

\begin{eqnarray*}\mathcal{N}_0=\{i\in\mathcal{N}, i < k^*\},\qquad\mathcal{N}^c_0=\{i\in\mathcal{N}^c, i < k^*\}.\end{eqnarray*}

To facilitate our analysis, we make the transformation

\begin{eqnarray*}x_i=t_i,\quad i\in\mathcal{N}_0,\qquad x_i=t_i-t_{i-1},\quad i\in\mathcal{N}^c,\end{eqnarray*}

which implies that ${\bf{x}}=(x_1,\ldots, x_{k^*-1},x_{k^*+1},\ldots, x_{n+1})\in[0,1]^n$ and

(B.7) \begin{equation}\displaystyle t_i=t_i({\bf{x}})=\left\{\begin{array}{l@{\quad}l@{\quad}l}x_i& \text{if}\ i\in \mathcal{N}_0,\\ 1-\sum_{j=i+1}^{n+1}x_j& \text{if}\ i\geq k^*,\\\sum_{j=\max\{k\in\mathcal{N}\colon k < i\}}^{i}x_j& \text{if}\ i\in\mathcal{N}^c_0,\end{array}\right.\end{equation}

with the convention that $\max\emptyset=0$ . Define $Y({\bf{x}})=Z(\boldsymbol{t}({\bf{x}}))$ and $\widetilde{\mathcal{S}}_n=\{{\bf{x}}\colon \boldsymbol{t}({\bf{x}})\in\mathcal{S}_n\}$ , with $\boldsymbol{t}({\bf{x}})$ given in (B.7). By Lemma B.1(ii) it follows that $\sigma_{Y}({\bf{x}})$ , the standard deviation of $Y({\bf{x}})$ , attains its maximum equal to 1 at

\begin{align*}\{{\bf{x}}\in\widetilde{\mathcal{S}}_n\colon x_i=0,\ \text{if}\ i\in\mathcal{N}^c\} .\end{align*}
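The change of variables (B.7) can be checked numerically: mapping $\boldsymbol{t}\in\mathcal{S}_n$ to ${\bf{x}}$ and back recovers $\boldsymbol{t}$. The sketch below uses a hypothetical configuration with $n=4$ and $\mathcal{N}=\{1,3\}$ (so $k^*=3$, $\mathcal{N}_0=\{1\}$).

```python
# hypothetical configuration: n = 4, N = {1, 3} (indices i with a_i = 1), k* = 3
n = 4
N = {1, 3}
k_star = max(N)
N0 = {i for i in N if i < k_star}

def forward(t):
    # t = (t_1, ..., t_n) with t_0 = 0, t_{n+1} = 1; returns x indexed by {1,...,n+1} \ {k*}
    tt = [0.0] + list(t) + [1.0]
    return {i: (tt[i] if i in N0 else tt[i] - tt[i - 1])
            for i in range(1, n + 2) if i != k_star}

def inverse(x):
    # reconstructs t_1, ..., t_n via (B.7), with the convention max(empty set) = 0
    t = {}
    for i in range(1, n + 1):
        if i in N0:
            t[i] = x[i]
        elif i >= k_star:
            t[i] = 1.0 - sum(x[j] for j in range(i + 1, n + 2))
        else:  # i in N_0^c
            m = max((k for k in N if k < i), default=0)
            t[i] = sum(x[j] for j in range(max(m, 1), i + 1))
    return tuple(t[i] for i in range(1, n + 1))

t = (0.1, 0.35, 0.6, 0.8)  # an interior point of S_4
print(inverse(forward(t)))  # recovers t up to rounding
```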

Moreover, let $\widetilde{{\bf{x}}}=(x_i)_{i\in \mathcal{N}_0 }$ , $\bar{{\bf{x}}}=(x_i)_{i\in\mathcal{N}^c}$ and denote, for any $\delta\in (0, {1}/{(n+1)^2})$ ,

\begin{eqnarray*}\widetilde{\mathcal{S}}^*_n(\delta)&\,=\,&\left\{{\bf{x}}\in\widetilde{\mathcal{S}}_n\colon 0\leq x_i\leq \frac{\delta}{(n+1)^2}, \text{if} \ i\in\mathcal{N}^c\right\},\nonumber\\\widetilde{\mathcal{M}}&\,=\,&\{\widetilde{{\bf{x}}}\in[0,1]^{\mathfrak{m}-1}\colon x_i\leq x_j,\quad \text{if}\ i,j\in \mathcal{N}_0\ \text{and}\ i < j\},\nonumber\\\widetilde{\mathcal{M}}(\delta)&\,=\,&\{\widetilde{{\bf{x}}}\in[\delta,1- \delta]^{\mathfrak{m}-1}\colon x_j-x_i\geq \delta,\ \text{if}\ i,j\in \mathcal{N}_0\ \text{and}\ i < j\}\subseteq\widetilde{\mathcal{M}},\\\widetilde{\mathcal{S}}_n(\delta)&\,=\,&\{{\bf{x}}\in\widetilde{\mathcal{S}}^*_n(\delta)\colon \widetilde{{\bf{x}}}\in\widetilde{\mathcal{M}}(\delta)\}.\nonumber\end{eqnarray*}

We note that

(B.8) \begin{eqnarray}\mathbb{P} \left \{\sup\nolimits_{{\bf{x}}\in \widetilde{\mathcal{S}}_n}Y({\bf{x}})>u \right \} \geq \mathbb{P} \left \{\sup\nolimits_{{\bf{x}}\in \widetilde{\mathcal{S}}_n(\delta)}Y({\bf{x}})>u \right \} ,\end{eqnarray}

and

(B.9) \begin{align}\mathbb{P} \left \{\sup\nolimits_{{\bf{x}}\in \widetilde{\mathcal{S}}_n}Y({\bf{x}})>u \right \} &\leq\mathbb{P} \left \{\sup\nolimits_{{\bf{x}}\in \widetilde{\mathcal{S}}_n\setminus\widetilde{\mathcal{S}}^*_n(\delta)}Y({\bf{x}})>u \right \} +\mathbb{P} \left \{\sup\nolimits_{{\bf{x}}\in\widetilde{\mathcal{S}}^*_n(\delta)\setminus\widetilde{\mathcal{S}}_n(\delta)}Y({\bf{x}})>u \right \} \nonumber\\&\hskip10pt+\mathbb{P} \left \{\sup\nolimits_{{\bf{x}}\in \widetilde{\mathcal{S}}_n(\delta)}Y({\bf{x}})>u \right \} .\end{align}

By applying Theorem 2.1, we derive the asymptotics of $\mathbb{P} \{\sup\nolimits_{{\bf{x}}\in\widetilde{\mathcal{S}}_n(\delta)}Y({\bf{x}})>u \} $ as $u\to\infty$ . Subsequently, we demonstrate that the other two terms in (B.9) are asymptotically negligible. We begin by finding the asymptotics of $\mathbb{P} \{\sup\nolimits_{{\bf{x}}\in\widetilde{\mathcal{S}}_n(\delta)}Y({\bf{x}})>u \} $ . First, observe that

\begin{align*}\widetilde{\mathcal{S}}_n(\delta)=\bigg\{{\bf{x}}\colon \widetilde{{\bf{x}}}\in\widetilde{\mathcal{M}}(\delta),\, 0\leq x_i\leq \frac{\delta}{(n+1)^2}, \text{if} \ i\in\mathcal{N}^c\bigg\},\end{align*}

which is a set satisfying the assumption in Theorem 2.1. Moreover, it follows from (B.3) that

(B.10) \begin{eqnarray}\lim_{\delta\rightarrow 0}\sup\nolimits_{{\bf{x}}\in \widetilde{\mathcal{S}}_n(\delta)}\left\lvert \frac{1-\sigma_{Y}({\bf{x}})}{({1}/{2})\sum_{i\in\mathcal{N}^c}(1-a_i^2)x_i}-1 \right\rvert=0.\end{eqnarray}

Taking $\tilde{\boldsymbol{t}}=\widetilde{{\bf{x}}}$ and $\bar{\boldsymbol{t}}_2=\bar{{\bf{x}}}$ in Theorem 2.1, (B.10) implies that (A2) holds with $g_2(\bar{{\bf{x}}})=\tfrac{1}{2}\sum_{i\in\mathcal{N}^c}(1-a_i^2)x_i$ and $p_2(\widetilde{{\bf{x}}})=1$ for $\widetilde{{\bf{x}}}\in\widetilde{\mathcal{S}}^*_n(\delta)$ . We note that $\Lambda_1=\Lambda_3=\emptyset$ in this case.

We next check assumption (A1). To compute the correlation structure, we note that, for ${\bf{x}},{\bf{y}}\in \widetilde{\mathcal{S}}_n(\delta)$ and $\lvert {\bf{x}}-{\bf{y}} \rvert<{\delta}/{(n+1)^2}$ , if $i\in\mathcal{N}_0$ then

\begin{align*}\left\lvert x_i-y_i \right\rvert+\left\lvert t_{i-1}({\bf{x}})-t_{i-1}({\bf{y}}) \right\rvert< \frac{\delta}{(n+1)^2}+\frac{n\delta}{(n+1)^2}=\frac{\delta}{n+1}\leq\frac{\delta}{2}\end{align*}

and

\begin{eqnarray*}\left\lvert t_i({\bf{x}})-t_{i-1}({\bf{x}}) \right\rvert=\left\{\begin{array}{l@{\quad}l} \left\lvert x_i-x_{i-1} \right\rvert\geq\delta & \text{if}\ i-1\in\mathcal{N}_0, \\[3pt] \displaystyle\left\lvert x_i-\sum_{j=\max\{k\in\mathcal{N}\colon k < i-1\}}^{i-1}x_j \right\rvert\geq\delta-\frac{n\delta}{(n+1)^2} > \frac{\delta}{2}& \text{if}\ i-1\in\mathcal{N}^c,\end{array}\right.\end{eqnarray*}

while, if $i=k^*$ then we have

\begin{eqnarray*}\left\lvert t_{k^*-1}({\bf{y}})-t_{k^*-1}({\bf{x}}) \right\rvert+\left\lvert t_{k^*}({\bf{y}})-t_{k^*}({\bf{x}}) \right\rvert<\frac{n\delta}{(n+1)^2}<\frac{\delta}{2},\end{eqnarray*}

and, for $ k^*-1\in\mathcal{N}_0$ ,

\begin{eqnarray*}\left\lvert t_{k^*}({\bf{x}})-t_{k^*-1}({\bf{x}}) \right\rvert= \left\lvert 1-\sum_{j=k^*+1}^{n+1}x_j-x_{k^*-1} \right\rvert\geq 1-(1-\delta)-\frac{n\delta}{(n+1)^2}> \frac{\delta}{2},\end{eqnarray*}

and, for $ k^*-1\in\mathcal{N}^c$ ,

\begin{eqnarray*}\left\lvert t_{k^*}({\bf{x}})-t_{k^*-1}({\bf{x}}) \right\rvert= \left\lvert 1-\sum_{j=k^*+1}^{n+1}x_j-\sum_{j=\max\{k\in\mathcal{N}\colon k < k^*-1\}}^{k^*-1}x_j \right\rvert\geq 1-(1-\delta)-\frac{n\delta}{(n+1)^2}> \frac{\delta}{2}.\end{eqnarray*}

Hence, for $r_Y({\bf{x}},{\bf{y}})$ , the correlation function of $Y({\bf{x}})$ , we derive from Lemma 2(ii) that, for ${\bf{x}},{\bf{y}}\in \widetilde{\mathcal{S}}_n(\delta)$ and $\left\lvert {\bf{x}}-{\bf{y}} \right\rvert<{\delta}/{(n+1)^2}$ , as $\delta\rightarrow 0$ ,

(B.11) \begin{align}&1-r_Y({\bf{x}},{\bf{y}})\nonumber\\&\qquad=1-r_Z(\boldsymbol{t}({\bf{x}}),\boldsymbol{t}({\bf{y}}))\nonumber\\&\qquad\sim\frac{1}{2}\sum_{i=1}^{n+1}a_i^2\min\left(\left\lvert t_{i-1}({\bf{y}})-t_{i-1}({\bf{x}}) \right\rvert+\left\lvert t_i({\bf{y}})-t_i({\bf{x}}) \right\rvert,\left\lvert t_i({\bf{y}})-t_{i-1}({\bf{y}}) \right\rvert+\left\lvert t_i({\bf{x}})-t_{i-1}({\bf{x}}) \right\rvert\right)\nonumber\\&\qquad=\frac{1}{2}\sum_{i\in\mathcal{N}}(\left\lvert t_{i-1}({\bf{y}})-t_{i-1}({\bf{x}}) \right\rvert+\left\lvert t_i({\bf{y}})-t_{i}({\bf{x}}) \right\rvert)\nonumber\\&\hskip30pt+\frac{1}{2}\sum_{i\in \mathcal{N}^c}a_i^2\min\left(\left\lvert t_{i-1}({\bf{y}})-t_{i-1}({\bf{x}}) \right\rvert+\left\lvert t_i({\bf{y}})-t_i({\bf{x}}) \right\rvert,\left\lvert t_i({\bf{y}})-t_{i-1}({\bf{y}}) \right\rvert+\left\lvert t_i({\bf{x}})-t_{i-1}({\bf{x}}) \right\rvert\right)\nonumber\\&\qquad=\frac{1}{2}\sum_{i\in\mathcal{N}_0}\left(\left\lvert x_i-y_i \right\rvert+\left\lvert t_{i-1}({\bf{x}})-t_{i-1}({\bf{y}}) \right\rvert\right)\nonumber\\&\hskip30pt+\frac{1}{2}\left\lvert t_{k^*-1}({\bf{x}})-t_{k^*-1}({\bf{y}}) \right\rvert+\frac{1}{2}\left\lvert \sum_{j=k^*+1}^{n+1}(x_j-y_j) \right\rvert\nonumber\\&\hskip30pt+\frac{1}{2}\sum_{i\in\mathcal{N}^c_0}a_i^2\min\left(\left\lvert t_{i-1}({\bf{x}})-t_{i-1}({\bf{y}}) \right\rvert+\left\lvert t_i({\bf{x}})-t_i({\bf{y}}) \right\rvert,x_i+y_i\right)\nonumber\\&\hskip30pt+\frac{1}{2}\sum_{i=k^*+1}^{n+1}a_i^2\min\left(\left\lvert \sum_{j=i}^{n+1}(x_j-y_j) \right\rvert+\left\lvert \sum_{j=i+1}^{n+1}(x_j-y_j) \right\rvert,x_i+y_i\right). \end{align}

By (B.7), we have, for any $i=1,\ldots,n+1$ ,

\begin{eqnarray*}\left\lvert t_i({\bf{y}})-t_i({\bf{x}}) \right\rvert\leq \underset{ j\neq k^*}{\sum_{j=1}^{n+1}}|x_j-y_j|.\end{eqnarray*}

Then, for ${\bf{x}},{\bf{y}}\in \widetilde{\mathcal{S}}_n(\delta)$ and $|{\bf{x}}-{\bf{y}}|<{\delta}/{(n+1)^2}$ with $\delta>0$ sufficiently small,

\begin{eqnarray*}\frac{1}{2}\sum_{i\in \mathcal{N}_0}|x_i-y_i|\leq 1-r_Y({\bf{x}},{\bf{y}})\leq \mathbb{Q}\underset{ i\neq k^*}{\sum_{i=1}^{n+1}}|x_i-y_i|, \end{eqnarray*}

implying that (2.4) holds.

Recall that

(B.12) \begin{align}W({\bf{x}})&=\frac{\sqrt{2}}{2}\sum_{i\in\mathcal{N}}(B_i(s_i({\bf{x}}))-\widetilde{B}_i(s_{i-1}({\bf{x}})))+\frac{\sqrt{2}}{2}\sum_{i\in\mathcal{N}^c}a_i\left(B_{i}(s_i({\bf{x}}))-B_i(s_{i-1}({\bf{x}}))\right),\end{align}

where $B_i, \widetilde{B}_i$ are i.i.d. standard Brownian motions and

\begin{eqnarray*}s_i({\bf{x}})=\left\{\begin{array}{l@{\quad}l}x_i& \text{if}\ i\in \mathcal{N}_0,\\\displaystyle\sum_{j=\max\{k\in\mathcal{N}\colon k < i\}}^{i}x_j& \text{if}\ i\in\mathcal{N}^c_0,\\\displaystyle\sum_{j=i+1}^{n+1}x_j& \text{if}\ i\geq k^*.\end{array}\right.\end{eqnarray*}

Direct calculation gives us that $\mathbb{E}\{(W({\bf{x}})-W({\bf{y}}))^2\}$ coincides with (B.11) for any ${\bf{x}},{\bf{y}}\in[0,\infty)^n$ . This implies that (2.2) holds with W given in (B.12) and $\boldsymbol{a}(\widetilde{{\bf{x}}})\equiv 1$ for $\widetilde{{\bf{x}}}\in\widetilde{\mathcal{M}}(\delta)$ .

Using (B.11) and the fact that, for any $i=1,\ldots,n$ , $s_i({\bf{x}})-s_i({\bf{y}})$ is a linear combination of the differences $x_j-y_j, \ j\in\{1,\ldots,k^*-1,k^*+1,\ldots, n+1\}$ , we derive that, for a fixed $\bar{{\bf{x}}}$ , the increments of $W({\bf{x}})=W(\widetilde{{\bf{x}}},\bar{{\bf{x}}})$ are homogeneous with respect to $\widetilde{{\bf{x}}}$ . In addition, it is easy to check that (2.5) also holds. Hence, (A1) is satisfied.

Consequently, by Theorem 2.1, as $u\rightarrow\infty$ , we have

(B.13) \begin{eqnarray}\mathbb{P} \left \{\sup\nolimits_{{\bf{x}}\in\widetilde{\mathcal{S}}_n(\delta)}Y({\bf{x}})>u \right \} \sim v_{\mathfrak{m}-1}(\widetilde{\mathcal{M}}(\delta))\mathcal{H}_Wu^{2(\mathfrak{m}-1)}\Psi(u),\end{eqnarray}

where

\begin{eqnarray*}\mathcal{H}_W&\,=\,&\lim_{\lambda\rightarrow\infty}\frac{1}{\lambda^{\mathfrak{m}-1}}\mathbb{E}\left\{\sup\nolimits_{{\bf{x}}\in[0,\lambda]^n}{\mathrm{e}}^{ \sqrt{2}W({\bf{x}})-\sigma_W^2({\bf{x}})-\frac{1}{2}\sum_{j\in\mathcal{N}^c}(1-a_j^2)x_j}\right\}\\&\,=\,&\lim_{\lambda\rightarrow\infty}\frac{1}{\lambda^{\mathfrak{m}-1}}\mathbb{E}\left\{\sup\nolimits_{{\bf{x}}\in[0,\lambda]^n}{\mathrm{e}}^{ \sqrt{2}W({\bf{x}})-(\underset{i\neq k^*}{\sum_{i=1}^{n+1}}x_i)}\right\}.\end{eqnarray*}

We now proceed to the negligibility of the other two terms in (B.9). In light of the Borell-TIS inequality, we have, as $u\to\infty$ ,

(B.14) \begin{eqnarray}\mathbb{P} \left \{\sup\nolimits_{{\bf{x}}\in \widetilde{\mathcal{S}}_n\setminus\widetilde{\mathcal{S}}^*_n(\delta) }Y({\bf{x}})>u \right \}\leq \exp\left(-\frac{(u-\mathbb{E}(\sup\nolimits_{{\bf{x}}\in \widetilde{\mathcal{S}}_n\setminus\widetilde{\mathcal{S}}^*_n(\delta) }Y({\bf{x}})))^2}{2(1-\varepsilon)^2}\right)=o(\Psi(u)),\end{eqnarray}

where $\varepsilon=1-\sup\nolimits_{{\bf{x}}\in \widetilde{\mathcal{S}}_n\setminus\widetilde{\mathcal{S}}^*_n(\delta)}\sigma_Y({\bf{x}}).$ By Slepian’s inequality and Theorem 2.1, we have

(B.15) \begin{eqnarray}\mathbb{P} \left \{\sup\nolimits_{{\bf{x}}\in\widetilde{\mathcal{S}}^*_n(\delta)\setminus\widetilde{\mathcal{S}}_n(\delta)}Y({\bf{x}})>u \right \}&\,\leq\,& v_{\mathfrak{m}-1}\left(\widetilde{\mathcal{M}}\setminus\widetilde{\mathcal{M}}(\delta)\right)\widetilde{\mathcal{H}}_{W_1}u^{2(\mathfrak{m}-1)}\Psi(u)\nonumber\\&\,=\,&o(u^{2(\mathfrak{m}-1)}\Psi(u)),\quad u\rightarrow\infty,\ \delta\rightarrow 0.\end{eqnarray}

A combination of the fact that

\begin{eqnarray*}\lim_{\delta\to 0}v_{\mathfrak{m}-1}(\widetilde{\mathcal{M}}(\delta))= v_{\mathfrak{m}-1}(\widetilde{\mathcal{M}})=\frac{1}{(\mathfrak{m}-1)!}\end{eqnarray*}

with (B.8), (B.9), and (B.13)–(B.15) leads to

\begin{eqnarray*}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in\mathcal{S}_{n}}Z(\boldsymbol{t})>u \right \} =\mathbb{P} \left \{\sup\nolimits_{{\bf{x}}\in\widetilde{\mathcal{S}}_n}Y({\bf{x}})>u \right \} \sim \frac{1}{(\mathfrak{m}-1)!}\mathcal{H}_Wu^{2(\mathfrak{m}-1)}\Psi(u),\quad u\rightarrow\infty.\end{eqnarray*}

Case $\mathfrak{m}=n+1$ : for some small $\varepsilon\in(0,1)$ , define $E(\varepsilon)=\{\boldsymbol{t}\in \mathcal{S}_{n}\colon t_i-t_{i-1}\geq \varepsilon,\ i=1,\ldots, n+1\}$ . Thus, we have

(B.16) \begin{align}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in E(\varepsilon)}Z(\boldsymbol{t})>u \right \} &\leq\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in \mathcal{S}_{n}}Z(\boldsymbol{t})>u \right \} \nonumber\\&\leq \mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in \mathcal{S}_{n}\setminus E(\varepsilon)}Z(\boldsymbol{t})>u \right \} +\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in E(\varepsilon)}Z(\boldsymbol{t})>u \right \} .\end{align}

Let us first derive the asymptotics of Z over $E(\varepsilon)$ . For $\boldsymbol{s}, \boldsymbol{t}\in E(\varepsilon)$ , by (B.4) we have

\begin{eqnarray*}1-r(\boldsymbol{s},\boldsymbol{t})\sim\sum_{i=1}^{n}\left\lvert s_i-t_i \right\rvert,\qquad \left\lvert \boldsymbol{t}-\boldsymbol{s} \right\rvert\rightarrow 0.\end{eqnarray*}

Moreover, it follows straightforwardly that ${\mathrm{Var}}(Z(\boldsymbol{t}))=1$ for $\boldsymbol{t}\in E(\varepsilon)$ and $\mathrm{corr}(Z(\boldsymbol{t}), Z(\boldsymbol{s}))<1$ for any $\boldsymbol{s}\neq \boldsymbol{t}$ and $\boldsymbol{s},\boldsymbol{t}\in E(\varepsilon)$ . Hence, by [31, Lemma 7.1] we have

(B.17) \begin{eqnarray}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in E(\varepsilon)}Z(\boldsymbol{t})>u \right \} \sim v_n(E(\varepsilon))u^{2n}\Psi(u)\sim v_n(\mathcal{S}_{n})u^{2n}\Psi(u) , \quad u\rightarrow\infty,\ \varepsilon\rightarrow 0.\end{eqnarray}

Moreover, by Slepian’s inequality and [31, Lemma 7.1], as $u\rightarrow\infty, \varepsilon\rightarrow 0$ ,

(B.18) \begin{eqnarray}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in \mathcal{S}_{n}\setminus E(\varepsilon)}Z(\boldsymbol{t})>u \right \}\leq v_n( \mathcal{S}_{n}\setminus E(\varepsilon))(2\mathcal{H}_{B^1}\mathbb{Q}_4)^n u^{2n}\Psi(u)=o\left(u^{2n}\Psi(u)\right)\!.\end{eqnarray}

Inserting (B.17) and (B.18) into (B.16), we obtain

\begin{eqnarray*}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in \mathcal{S}_{n}}Z(\boldsymbol{t})>u \right \} \sim \frac{1}{n!}u^{2n}\Psi(u),\quad u\rightarrow\infty.\end{eqnarray*}

The claim is established by Remark 3.1(ii).

Case 3: $\alpha\in(1,2)$ . For $\boldsymbol{s},\boldsymbol{t}\in\mathcal{S}_{n}$ , one can easily check that

\begin{eqnarray*}r_Z(\boldsymbol{s},\boldsymbol{t})=\frac{\mathbb{E}\left\{Z^\alpha(\boldsymbol{t})Z^\alpha(\boldsymbol{s})\right\}}{\sigma_Z(\boldsymbol{t})\sigma_Z(\boldsymbol{s})}=\frac{\sum_{i=1}^{n+1}a_i^2\mathbb{E}\left\{(B_i^{\alpha}(t_i)-B_i^{\alpha}(t_{i-1}))(B_i^{\alpha}(s_i)-B_i^{\alpha}(s_{i-1}))\right\}}{\sigma_Z(\boldsymbol{t})\sigma_Z(\boldsymbol{s})}<1\end{eqnarray*}

if $\boldsymbol{s}\neq\boldsymbol{t}$ . In light of Lemma 2(iii), $\sigma_Z$ attains its maximum at $\mathfrak{m}$ distinct points $\boldsymbol{z}^{(j)},j\in\mathcal{N}$ . Consequently, by [31, Corollary 8.2], we have

\begin{eqnarray*}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in\mathcal{S}_{n}}Z^\alpha(\boldsymbol{t})>u \right \}\sim\sum_{j\in \mathcal{N}}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in\Pi_{\delta, j}}Z^\alpha(\boldsymbol{t})>u \right \}\!,\quad u\rightarrow\infty,\end{eqnarray*}

where $\Pi_{\delta, j}=\big\{\boldsymbol{t}\in\mathcal{S}_{n}\colon \lvert \boldsymbol{t}-\boldsymbol{z}^{(j)} \rvert\leq \tfrac{1}{3}\big\}.$

Define $E_j(u)\,:\!=\,\{\boldsymbol{t}\in\Pi_{\delta, j}\colon 1-({\ln u}/{u})^2\leq t_j-t_{j-1}\leq 1\}\ni \boldsymbol{z}^{(j)}$ . Observe that

\begin{align*}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in E_j(u)}Z^\alpha(\boldsymbol{t})>u \right \} &\leq \mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in\Pi_{\delta, j}}Z^\alpha(\boldsymbol{t})>u \right \} \\&\leq\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in E_j(u)}Z^\alpha(\boldsymbol{t})>u \right \}+\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in\Pi_{\delta, j}\setminus E_j(u)}Z^\alpha(\boldsymbol{t})>u \right \}\!.\end{align*}

We first find the exact asymptotics of $\mathbb{P} \{\sup\nolimits_{\boldsymbol{t}\in E_j(u)}Z^\alpha(\boldsymbol{t})>u \} $ as $u\to\infty$ . Clearly, for any $u\in\mathbb{R}$ ,

\begin{eqnarray*} \mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in E_j(u)}Z^\alpha(\boldsymbol{t})>u \right \} \geq \mathbb{P} \left \{Z^\alpha(\boldsymbol{z}^{(j)})>u \right \} =\Psi(u).\end{eqnarray*}

Moreover, there exists a constant $c>0$ such that $\inf_{\boldsymbol{t}\in\mathcal{S}_{n}}\sigma_Z(\boldsymbol{t})\geq {1}/{\sqrt{2c}}$ . Hence, in light of (B.6), for $\boldsymbol{s},\boldsymbol{t}\in \mathcal{S}_{n}$ we have

(B.19) \begin{eqnarray}1-r_Z(\boldsymbol{s},\boldsymbol{t})\leq 4c\sum_{i=1}^{n}\left\lvert t_i-s_i \right\rvert^\alpha.\end{eqnarray}

Let $U_2(\boldsymbol{t}), \boldsymbol{t}\in \mathbb{R}^n$ be a centered homogeneous Gaussian field with continuous trajectories, unit variance, and the correlation function $r_{U_2}(\boldsymbol{s},\boldsymbol{t})$ satisfying

\begin{eqnarray*}r_{U_2}(\boldsymbol{s},\boldsymbol{t})=\exp\left(-8c\sum_{i=1}^{n}\left\lvert t_i-s_i \right\rvert^\alpha\right).\end{eqnarray*}

Set $\widetilde{E}_j(u)=[0,\varepsilon_1 u^{-2/\alpha}]^{j-1}\times[1-\varepsilon_1 u^{-2/\alpha},1]^{n-j+1}$ for some constant $\varepsilon_1\in(0,1)$ . Then it follows that $E_j(u)\subset \widetilde{E}_j(u)$ for sufficiently large u. By Slepian’s inequality and [31, Lemma 6.1],

\begin{eqnarray*}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in E_j(u)}Z^\alpha(\boldsymbol{t})>u \right \} \leq \mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in \widetilde{E}_j(u)}U_2(\boldsymbol{t})>u \right \}\sim \left(\mathcal{H}_{B^{\alpha}}[0,(8c)^{1/\alpha}\varepsilon_1]\right)^{n}\Psi(u)\sim \Psi(u)\end{eqnarray*}

as $u\rightarrow\infty, \varepsilon_1\rightarrow 0$ , where

\begin{align*}\lim_{\lambda\rightarrow0}\mathcal{H}_{B^{\alpha}}[0, \lambda]=\lim_{\lambda\rightarrow0}\mathbb{E}\left\{\sup _{t \in[0, \lambda]} {\mathrm{e}}^{\sqrt{2} B^{\alpha}(t)-t^{\alpha}}\right\}=1.\end{align*}

Consequently,

(B.20) \begin{eqnarray}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in E_j(u)}Z^\alpha(\boldsymbol{t})>u \right \} \sim \Psi(u), \quad u\to\infty.\end{eqnarray}

Note that, for $\boldsymbol{t}\in\mathcal{S}_{n}$ ,

\begin{align*}\underset{i\neq j}{\sum_{i=1}^{n+1}}a_i^2\left\lvert t_i-t_{i-1} \right\rvert^{\alpha}\leq \left\lvert t_j-t_{j-1}-1 \right\rvert.\end{align*}
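Indeed, this inequality follows directly from the structure of $\mathcal{S}_{n}$ : the increments $t_i-t_{i-1}$ are nonnegative and sum to 1, while $a_i^2\leq 1$ and $\left\lvert t_i-t_{i-1} \right\rvert^{\alpha}\leq t_i-t_{i-1}$ for $\alpha>1$ , so that

\begin{align*}\underset{i\neq j}{\sum_{i=1}^{n+1}}a_i^2\left\lvert t_i-t_{i-1} \right\rvert^{\alpha}\leq \underset{i\neq j}{\sum_{i=1}^{n+1}}(t_i-t_{i-1})=1-(t_j-t_{j-1})=\left\lvert t_j-t_{j-1}-1 \right\rvert.\end{align*}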

Hence, by (B.5), for sufficiently large u,

(B.21) \begin{eqnarray}\sup\nolimits_{\boldsymbol{t}\in \Pi_{\delta, j}\setminus E_j(u) }\sigma_Z(\boldsymbol{t})&\,\leq\,&\sup\nolimits_{\boldsymbol{t}\in \Pi_{\delta, j}\setminus E_j(u) }\left( 1-\frac{(1-\varepsilon)(\alpha-1)}{2}\left\lvert t_j-t_{j-1}-1 \right\rvert\right)\nonumber\\&\,\leq\,&1-\frac{(1-\varepsilon)(\alpha-1)}{2}\left(\frac{\ln u}{u}\right)^2,\end{eqnarray}

where $\varepsilon\in(0,1)$ is a constant. In light of (B.19) and (B.21), by [31, Theorem 8.1] we have, for sufficiently large u,

\begin{align*}&\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in\Pi_{\delta, j}\setminus E_j(u)}Z^\alpha(\boldsymbol{t})>u \right \}\\&\qquad \leq\mathbb{Q}_9u^{2n/\alpha}\Psi\left(\frac{u}{1-({(1-\varepsilon)(\alpha-1)}/{2})({\ln u}/{u})^2}\right)=o\left(\Psi\left(u\right)\right)\!,\quad u\rightarrow\infty,\end{align*}

which combined with (B.20) leads to

\begin{eqnarray*}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in\Pi_{\delta, j}}Z^\alpha(\boldsymbol{t})>u \right \} \sim \mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in E_j(u)}Z^\alpha(\boldsymbol{t})>u \right \} \sim \Psi(u), \quad u\to\infty.\end{eqnarray*}

Consequently, with $\mathfrak{m}=\#\mathcal{N}$ given in (3.3), we obtain

\begin{eqnarray*}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in\mathcal{S}_{n}}Z^\alpha(\boldsymbol{t})>u \right \} \sim\sum_{j\in \mathcal{N}}\mathbb{P} \left \{\sup\nolimits_{\boldsymbol{t}\in\Pi_{\delta, j}}Z^\alpha(\boldsymbol{t})>u \right \} \sim \mathfrak{m}\Psi(u), \quad u\rightarrow\infty.\end{eqnarray*}

This completes the proof.

Appendix C. Proof of Remark 3.1

(i) For the case $1\leq \mathfrak{m}\leq n$ , we first show that $\mathcal{H}_W\geq 1$ . Recall that $\mathcal{N}_0=\{i\in\mathcal{N}\colon i < k^*\}$ , $\mathcal{N}^c=\{i\colon a_i<1,\ i=1,\ldots, n+1\}$ , and $\widetilde{{\bf{x}}}=(x_i)_{i\in \mathcal{N}_0 }$ .

For $x_i=0, i\in \mathcal{N}^c$ , by the definition of W in (3.4), we have

\begin{align*}\left\{\sqrt{2}W({\bf{x}})-\sum_{i=1, i\neq k^*}^{n+1}x_i,\widetilde{{\bf{x}}}\in[0,\lambda]^{\mathfrak{m}-1}\right\}\buildrel d \over = \left\{\sum_{i\in \mathcal{N}_0}\sqrt{2}B_i(x_i)-\sum_{i\in\mathcal{N}_0}x_i, \widetilde{{\bf{x}}}\in[0,\lambda]^{\mathfrak{m}-1}\right\}.\end{align*}

Hence,

\begin{eqnarray*}\mathcal{H}_W\geq\lim_{\lambda\rightarrow\infty}\frac{1}{\lambda^{\mathfrak{m}-1}}\mathbb{E}\left\{\sup\nolimits_{\widetilde{{\bf{x}}}\in[0,\lambda]^{\mathfrak{m}-1}} {\mathrm{e}}^{\sum_{i\in \mathcal{N}_0}\sqrt{2}B_i(x_i)-\sum_{i\in\mathcal{N}_0}x_i}\right\}=\prod_{i\in \mathcal{N}_0}\mathcal{H}_{B_i},\end{eqnarray*}

where $\mathcal{H}_{B_i}$ is defined in (2.11). Note that $\mathcal{H}_{B_i}=1$ ; see e.g. [31] (or [4]). Therefore, $\mathcal{H}_W\geq 1.$ We next derive an upper bound for $\mathcal{H}_W$ for $1\leq \mathfrak{m}\leq n$ . We use the notation introduced in the proof of Proposition 3.1(ii) (specifically, Y and $\widetilde{\mathcal{S}}_n(\delta)$ ). For $\delta\in (0, {1}/{(n+1)^2})$ , let

\begin{align*}A(\delta)=\bigg\{{\bf{x}}\colon \widetilde{{\bf{x}}}\in B(\delta),0\leq x_i\leq \frac{\delta}{(n+1)^2}, \text{ if} \ i\in\mathcal{N}^c\bigg\},\end{align*}

where $B(\delta)=\prod_{i=1}^{\mathfrak{m}-1}[2i\delta, (2i+1)\delta]$ . Clearly, $A(\delta)\subset\widetilde{\mathcal{S}}_n(\delta)$ . Moreover, by (B.11) it follows that, for any $\epsilon>0$ , there exists $\delta \in (0, {1}/{(n+1)^2})$ such that, for any ${\bf{x}},{\bf{y}}\in A(\delta)$ ,

\begin{eqnarray*}1-r_Y({\bf{x}},{\bf{y}})\leq (n+\epsilon)\underset{i\neq k^*}{\sum_{i=1}^{n+1}}\left\lvert x_i-y_i \right\rvert.\end{eqnarray*}

Let us introduce a centered homogeneous Gaussian field $U_4({\bf{x}})$ , ${\bf{x}}\in[0,\infty)^{n}$ with continuous trajectories, unit variance, and the correlation function

\begin{align*}r_{U_4}({\bf{x}},{\bf{y}})=\exp\left(-\mathbb{E}\left\{\left(W_4({\bf{x}})- W_4({\bf{y}})\right)^2\right\}\right), \quad\text{with}\,W_4({\bf{x}})=\sqrt{n+\epsilon}\underset{i\neq k^*}{\sum_{i=1}^{n+1}} B_i(x_i),\end{align*}

where $B_i,\ i=1,\dots, k^*-1, k^*+1,\dots, n+1$ are i.i.d. standard Brownian motions. By (B.10) and Slepian’s inequality, we have, for $0<\epsilon<1$ ,

\begin{eqnarray*}\mathbb{P} \left \{\sup\nolimits_{{\bf{x}}\in A(\delta)}\frac{U_4({\bf{x}})}{1+\sum_{i\in\mathcal{N}^c} ({(1-a_i^2)}/{(2+\epsilon)})x_i}>u \right \} \geq \mathbb{P} \left \{\sup\nolimits_{{\bf{x}}\in A(\delta)} Y({\bf{x}})>u \right \} .\end{eqnarray*}

Analogously to (B.13), we have

\begin{eqnarray*}\mathbb{P} \left \{\sup\nolimits_{{\bf{x}}\in A(\delta)}Y({\bf{x}})>u \right \} \sim v_{\mathfrak{m}-1}\left(B(\delta)\right)\mathcal{H}_Wu^{2(\mathfrak{m}-1)}\Psi(u)\end{eqnarray*}

and

\begin{eqnarray*}\mathbb{P} \left \{\sup\nolimits_{{\bf{x}}\in A(\delta)}\frac{U_4({\bf{x}})}{1+\sum_{i\in\mathcal{N}^c} ({(1-a_i^2)}/{(2+\epsilon)})x_i}>u \right \} \sim v_{\mathfrak{m}-1}\left(B(\delta)\right)\mathcal{H}_{W_4} u^{2(\mathfrak{m}-1)}\Psi(u),\end{eqnarray*}

where

\begin{align*}\mathcal{H}_{W_4}&=\lim_{\lambda\rightarrow\infty}\frac{1}{\lambda^{\mathfrak{m}-1}}\mathbb{E}\left\{\sup\nolimits_{{\bf{x}}\in[0,\lambda]^n}{\mathrm{e}}^{ \sqrt{2(n+\epsilon)}\underset{i\neq k^*}{\sum_{i=1}^{n+1}}B_i(x_i)-(n+\epsilon)\underset{i\neq k^*}{\sum_{i=1}^{n+1}}x_i-\sum_{i\in\mathcal{N}^c}({(1-a_i^2)}/{(2+\epsilon)})x_i}\right\}\\&=(n+\epsilon)^{\mathfrak{m}-1}\left(\prod_{i\in \mathcal{N}_0}\mathcal{H}_{B_i}\right)\prod_{i\in \mathcal{N}^c} \mathcal{P}_{B_i}^{{(1-a_i^2)}/{((2+\epsilon)(n+\epsilon))}},\end{align*}

with $\mathcal{P}_{B_i}^c$ for $c>0$ being defined in (2.11). Using the fact that $\mathcal{H}_{B_i}=1$ and, for $c>0$ , $\mathcal{P}_{B_i}^c=1+{1}/{c}$ (see, e.g., [4]), we have

\begin{align*}\mathcal{H}_{W_4}=(n+\epsilon)^{\mathfrak{m}-1}\prod_{i\in\mathcal{N}^c}\left(1+\frac{(2+\epsilon)(n+\epsilon)}{1-a_i^2}\right)\!.\end{align*}

Hence,

\begin{align*}\mathcal{H}_{W}\leq \mathcal{H}_{W_4}=(n+\epsilon)^{\mathfrak{m}-1}\prod_{i\in\mathcal{N}^c}\left(1+\frac{(2+\epsilon)(n+\epsilon)}{1-a_i^2}\right)\!.\end{align*}

We establish the claim by letting $\epsilon\to 0$ .

(ii) If $\mathfrak{m}=n+1$ , we have $\mathcal{N}_0=\{1,\dots, n\}$ and

\begin{eqnarray*}\mathcal{H}_W=\lim_{\lambda\rightarrow\infty}\frac{1}{\lambda^{n}}\mathbb{E}\left\{\sup\nolimits_{\widetilde{{\bf{x}}}\in[0,\lambda]^{n}}{\mathrm{e}}^{\sum_{i\in \mathcal{N}_0}\sqrt{2}B_i(x_i)-\sum_{i\in\mathcal{N}_0}x_i}\right\}=\prod_{i\in \mathcal{N}_0}\mathcal{H}_{B_i}=1.\end{eqnarray*}

This completes the proof.

Appendix D. Proof of Proposition 3.2

Let us recall that by (3.12)

\begin{align*}\mathbb{P}\left(\sup\nolimits_{t\in [0,1]}\chi(t)>u\right)=\mathbb{P}\left(\sup\nolimits_{(\boldsymbol{\theta}, t)\in E}Z(\boldsymbol{\theta},t)>u\right)\!,\end{align*}

with $Z(\boldsymbol{\theta},t)$ defined in (3.11).

Observe that, for $0<\epsilon<\pi/4$ ,

(D.1) \begin{align}\mathbb{P}\left(\sup\nolimits_{(\boldsymbol{\theta}, t)\in E_{1,\epsilon}}Z(\boldsymbol{\theta},t)>u\right)&\leq \mathbb{P}\left(\sup\nolimits_{(\boldsymbol{\theta}, t)\in E}Z(\boldsymbol{\theta},t)>u\right)\nonumber\\&\leq \sum_{i=1}^{3}\mathbb{P}\left(\sup\nolimits_{(\boldsymbol{\theta}, t)\in E_{i,\epsilon}}Z(\boldsymbol{\theta},t)>u\right)\!,\end{align}

where

\begin{gather*}E_{1,\epsilon}=[\epsilon,\pi-\epsilon]^{n-2}\times [0,2\pi-\epsilon)\times[0,\epsilon],\\E_{2,\epsilon}=[0,\pi]^{n-2}\times [0,2\pi)\times[\epsilon, 1],\\E_{3,\epsilon}=E\setminus(E_{1,\epsilon}\cup E_{2,\epsilon}).\end{gather*}

In the rest of the proof, we apply Theorem 2.1 to obtain the asymptotics over $E_{1,\epsilon}$ . Then, using the Borell-TIS inequality and Slepian’s inequality respectively, we find tight upper bounds of the exceedance probabilities over $E_{2,\epsilon}$ and $E_{3,\epsilon}$ . Finally, we combine all the obtained results to show the asymptotics over the whole set.

The asymptotics over $E_{1,\epsilon}$ . To this end, we analyze the variance and correlation of Z. By (3.7), we have

(D.2) \begin{align}\sigma_Z(\boldsymbol{\theta},t)=\frac{1}{1+bt^{\alpha}}, \quad t\in [0,1].\end{align}

Hence, $\sigma_Z(\boldsymbol{\theta},t)$ attains its maximum, equal to 1, on $[0,\pi]^{n-2}\times[0,2\pi)\times\{0\}$ and, since $1-\sigma_Z(\boldsymbol{\theta},t)={bt^{\alpha}}/({1+bt^{\alpha}})\sim bt^{\alpha}$ as $t\downarrow 0$ ,

\begin{align*}\lim_{\delta\downarrow 0}\sup\nolimits_{\boldsymbol{\theta}\in [0,\pi]^{n-2}\times[0,2\pi), 0 < t < \delta }\left|\frac{1-\sigma_Z(\boldsymbol{\theta},t)}{bt^{\alpha}}-1\right|=0.\end{align*}

This implies that assumption (A2) is satisfied. For assumption (A1), by (3.8), we have

\begin{align*}&1-\ \mathrm{corr}(Z(\boldsymbol{\theta},t), Z(\boldsymbol{\theta}',t'))\\&\qquad \sim a{\mathrm{Var}}(Y(t)-Y(t'))+\frac{1}{2}\sum_{i=1}^{n}(v_i(\boldsymbol{\theta})-v_i(\boldsymbol{\theta}'))^2\\& \qquad \sim a{\mathrm{Var}}(Y(t)-Y(t'))+\frac{(\theta_1-\theta_1')^2}{2}+\frac{1}{2}\sum_{i=2}^{n-1}\left(\prod_{j=1}^{i-1}\sin(\theta_j)\right)^2 (\theta_i-\theta_i')^2\end{align*}

as $(\boldsymbol{\theta},t), (\boldsymbol{\theta}',t')\in E$ and $|t-t'|, |\boldsymbol{\theta}-\boldsymbol{\theta}'|\to 0$ . Let

(D.3) \begin{align}W(\boldsymbol{\theta}, t)=\sum_{i=1}^{n-1}B_i^2(\theta_i)+\sqrt{a}Y(t), \quad (\boldsymbol{\theta},t)\in \mathbb{R}^{n-1}\times \mathbb{R}^+,\end{align}

where $B_i^2$ are independent fractional Brownian motions with index 2 and Y is a self-similar Gaussian process, as defined in (3.8), that is independent of $B_i^2$ . Moreover, let $\boldsymbol{a}(\boldsymbol{\varphi})=(a_1(\boldsymbol{\varphi}),\dots, a_{n-1}(\boldsymbol{\varphi})),\boldsymbol{\varphi}\in[0,\pi]^{n-2}\times [0,2\pi)$ with

\begin{align*}a_1(\boldsymbol{\varphi})=\frac{1}{\sqrt{2}} \quad\text{and}\quad a_i(\boldsymbol{\varphi})=\frac{1}{\sqrt{2}}\prod_{j=1}^{i-1}\sin(\varphi_j), \quad i=2,\dots,n-1.\end{align*}

It follows that, for $0<\epsilon<\pi/4$ ,

\begin{align*}\lim_{\delta\downarrow 0}\underset{(\boldsymbol{\theta},t), (\boldsymbol{\theta}',t')\in E, |(\boldsymbol{\theta},t)-(\boldsymbol{\varphi},0)|, |(\boldsymbol{\theta}',t')-(\boldsymbol{\varphi},0)|<\delta}{\sup\nolimits_{\boldsymbol{\varphi}\in [\epsilon,\pi-\epsilon]^{n-2}\times [0,2\pi)}}\left|\frac{1-\ \mathrm{corr}(Z(\boldsymbol{\theta},t), Z(\boldsymbol{\theta}',t'))}{\mathbb{E}\left\{\left(W(\boldsymbol{a}(\boldsymbol{\varphi})\boldsymbol{\theta},t)-W(\boldsymbol{a}(\boldsymbol{\varphi})\boldsymbol{\theta}',t')\right)^2\right\}}-1\right|=0.\end{align*}

By the fact that

(D.4) \begin{align}{\mathrm{Var}}(W(\boldsymbol{\theta},t)-W(\boldsymbol{\theta}',t'))=a{\mathrm{Var}}(Y(t)-Y(t'))+\sum_{i=1}^{n-1}(\theta_i-\theta_i')^2,\end{align}

we know that $W(\boldsymbol{\theta},t)$ is homogeneous with respect to $\boldsymbol{\theta}$ if t is fixed. This implies that (2.2) holds with W defined in (D.3).

Moreover, by self-similarity of Y and (D.4) we have

\begin{align*}{\mathrm{Var}}(W(u^{-1}\boldsymbol{\theta},u^{-2/\alpha}t)-W(u^{-1}\boldsymbol{\theta}', u^{-2/\alpha}t'))=u^{-2}{\mathrm{Var}}(W(\boldsymbol{\theta},t)-W(\boldsymbol{\theta}',t')),\end{align*}

showing that (2.3) holds with $\alpha_i=2, i=1,\dots, n-1$ , and $\alpha_n=\alpha$ . In addition, by (B1) and (B2), there exist positive constants $\mathbb{Q}_1$ , $\mathbb{Q}_2$ , and $\delta$ such that, for $|(\boldsymbol{\theta},t)-(\boldsymbol{\theta}',t')|<\delta$ with $(\boldsymbol{\theta},t), (\boldsymbol{\theta}',t')\in E_{1,\epsilon}$ ,

\begin{align*}\mathbb{Q}_1\sum_{i=1}^{n-1}(\theta_i-\theta_i')^2\leq 1-\mathrm{corr}(Z(\boldsymbol{\theta},t), Z(\boldsymbol{\theta}',t'))\leq \mathbb{Q}_2\left(|t-t'|^{\alpha}+ \sum_{i=1}^{n-1}(\theta_i-\theta_i')^2\right).\end{align*}

Hence, (2.4) is confirmed. Moreover, (2.5) is clearly satisfied over $E_{1,\epsilon}$ . Therefore, (A1) is verified for Z over $E_{1,\epsilon}$ . Note that, for Z over $E_{1,\epsilon}$ , we are in the case of $\Lambda_0=\{1,\dots, n-1\}$ , $\Lambda_1=\emptyset$ , $\Lambda_2=\{n\}$ , and $\Lambda_3=\emptyset$ of Theorem 2.1. Consequently, it follows from Theorem 2.1 that, as $u\to\infty$ ,

\begin{align*}&\mathbb{P}\left(\sup\nolimits_{(\boldsymbol{\theta}, t)\in E_{1,\epsilon}}Z(\boldsymbol{\theta},t)>u\right)\\&\qquad \sim\mathcal{H}_W^{bt^{\alpha}}\int_{\boldsymbol{\theta}\in [\epsilon,\pi-\epsilon]^{n-2}\times [0,2\pi-\epsilon]} \prod_{i\in \Lambda_0} |a_i(\boldsymbol{\theta})| \,{\mathrm{d}} \boldsymbol{\theta} u^{\sum_{i\in\Lambda_0}{2}/{\alpha_i}}\Psi(u)\\&\qquad =\mathcal{H}_W^{bt^{\alpha}}\int_{\boldsymbol{\theta}\in [\epsilon,\pi-\epsilon]^{n-2}\times [0,2\pi-\epsilon]} 2^{-(n-1)/2}\prod_{i=1}^{n-1}|\sin(\theta_i)|^{n-i-1}\,{\mathrm{d}} \theta_1\dots \,{\mathrm{d}}\theta_{n-1}u^{n-1}\Psi(u),\end{align*}

where W is given in (D.3).

Upper bound for the asymptotics over $E_{2,\epsilon}$ . By (D.2), there exists $0<\delta<1$ such that

\begin{align*}\sup\nolimits_{(\boldsymbol{\theta},t)\in E_{2,\epsilon}}{\mathrm{Var}}(Z(\boldsymbol{\theta},t))\leq 1-\delta.\end{align*}

It follows from the Borell-TIS inequality that, as $u\to\infty$ ,

\begin{align*}\mathbb{P}(\sup\nolimits_{(\boldsymbol{\theta}, t)\in E_{2,\epsilon}}Z(\boldsymbol{\theta},t)>u)\leq \exp\bigg(-\frac{(u-\mathbb{E}\{\sup\nolimits_{(\boldsymbol{\theta}, t)\in E_{2,\epsilon}}Z(\boldsymbol{\theta},t)\})^2}{2(1-\delta)}\bigg)=o(u^{n-1}\Psi(u)).\end{align*}

Upper bound for the asymptotics over $E_{3,\epsilon}$ . Direct calculation shows that

\begin{align*}1-\mathrm{corr}(Z(\boldsymbol{\theta},t), Z(\boldsymbol{\theta}',t'))\leq \mathbb{Q}_2\left(|t-t'|^{\alpha}+ \sum_{i=1}^{n-1}(\theta_i-\theta_i')^2\right)\end{align*}

holds for $(\boldsymbol{\theta},t), (\boldsymbol{\theta}',t')\in E_{3,\epsilon}$ . Define $U_3(\boldsymbol{\theta},t), (\boldsymbol{\theta},t)\in \mathbb{R}^n$ to be a centered homogeneous Gaussian field with continuous trajectories, unit variance, and the correlation function $r_{U_3}(\boldsymbol{\theta},t, \boldsymbol{\theta}',t')$ satisfying

\begin{eqnarray*}r_{U_3}(\boldsymbol{\theta},t, \boldsymbol{\theta}',t')=\exp\left(-2\mathbb{Q}_2\left(|t-t'|^{\alpha}+ \sum_{i=1}^{n-1}(\theta_i-\theta_i')^2\right)\right).\end{eqnarray*}

By Slepian’s inequality and [31, Theorem 8.2], we have

\begin{align*}\mathbb{P}\left(\sup\nolimits_{(\boldsymbol{\theta}, t)\in E_{3,\epsilon}}Z(\boldsymbol{\theta},t)>u\right)& \leq \mathbb{P}\left(\sup\nolimits_{(\boldsymbol{\theta}, t)\in E_{3,\epsilon}}\frac{U_3(\boldsymbol{\theta},t)}{1+bt^{\alpha}}>u\right)\\&\leq \mathbb{Q}v_n(E_{3,\epsilon})u^{n-1}\Psi(u), \quad u\to\infty.\end{align*}

Noting that $\lim_{\epsilon\to 0}v_n(E_{3,\epsilon})=0$ , the combination of the above asymptotics and upper bounds leads to

\begin{align*}&\mathbb{P}\left(\sup\nolimits_{(\boldsymbol{\theta}, t)\in E}Z(\boldsymbol{\theta},t)>u\right)\\&\quad \sim \mathcal{H}_W^{bt^{\alpha}}\int_{\boldsymbol{\theta}\in [0,\pi]^{n-2}\times [0,2\pi)} 2^{-(n-1)/2}\prod_{i=1}^{n-1}|\sin(\theta_i)|^{n-i-1}\,{\mathrm{d}} \theta_1\dots \,{\mathrm{d}}\theta_{n-1}u^{n-1}\Psi(u), \quad u\to\infty.\end{align*}

By the fact that

\begin{align*}\int_{\boldsymbol{\theta}\in [0,\pi]^{n-2}\times [0,2\pi)} \prod_{i=1}^{n-1}|\sin(\theta_i)|^{n-i-1}\,{\mathrm{d}} \theta_1\dots \,{\mathrm{d}}\theta_{n-1}=\frac{2\pi^{n/2}}{\Gamma(n/2)},\end{align*}

and $\mathcal{H}_W^{bt^{\alpha}}=\mathcal{P}_{\sqrt{a}Y}^{b}(\mathcal{H}_{B^2})^{n-1}=\mathcal{P}_{Y}^{a^{-1}b}\pi^{-(n-1)/2}$ , where we used that $\mathcal{H}_{B^2}=\pi^{-1/2}$ , we have

\begin{align*}\mathbb{P}\left(\sup\nolimits_{(\boldsymbol{\theta}, t)\in E}Z(\boldsymbol{\theta},t)>u\right)\sim\frac{2^{{(3-n)}/{2}}\sqrt{\pi}}{\Gamma(n/2)}\mathcal{P}_{Y}^{a^{-1}b}u^{n-1}\Psi(u), \quad u\to\infty.\end{align*}
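For the reader's convenience, the constant in the last display is obtained by a direct term-by-term combination of the two preceding facts:

\begin{align*}\mathcal{H}_W^{bt^{\alpha}}\cdot 2^{-(n-1)/2}\cdot\frac{2\pi^{n/2}}{\Gamma(n/2)}=\mathcal{P}_{Y}^{a^{-1}b}\,\pi^{-(n-1)/2}\cdot 2^{-(n-1)/2}\cdot\frac{2\pi^{n/2}}{\Gamma(n/2)}=\frac{2^{{(3-n)}/{2}}\sqrt{\pi}}{\Gamma(n/2)}\,\mathcal{P}_{Y}^{a^{-1}b},\end{align*}

since $\pi^{-(n-1)/2}\cdot\pi^{n/2}=\sqrt{\pi}$ and $2\cdot 2^{-(n-1)/2}=2^{{(3-n)}/{2}}$ .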

Acknowledgements

We sincerely appreciate the anonymous reviewer’s comments, which significantly improved the presentation of the results in this contribution. We also thank Enkelejd Hashorva for his stimulating comments that enhanced the content of this paper. We thank Lanpeng Ji for some useful discussions.

Funding Information

Support from SNSF Grant 200021-175752/1 is kindly acknowledged. Krzysztof Dȩbicki was partially supported by NCN Grant No. 2018/31/B/ST1/00370 (2019-2024). Long Bai is supported by the National Natural Science Foundation of China Grant No. 11901469, Natural Science Foundation of the Jiangsu Higher Education Institutions of China Grant No. 19KJB110022, and University Research Development Fund No. RDF2102071. Peng Liu is the co-corresponding author.

Competing Interests

The authors declare that no competing interests arose during the preparation or publication of this article.

References

Adler, R. and Brown, L. (1986). Tail behavior for suprema of empirical processes. Ann. Probab. 14, 130.CrossRefGoogle Scholar
Adler, R. and Taylor, J. (2007). Random Fields and Geometry . Springer Monographs in Mathematics. Springer, New York.Google Scholar
Azaïs, J. and Wschebor, M. (2009). Level Sets and Extrema of Random Processes and Fields. John Wiley & Sons, Hoboken, NJ.CrossRefGoogle Scholar
Bai, L. and Kalaj, D. (2021). Approximation of Kolmogorov-Smirnov test statistics. Stochastics 93, 9931027.CrossRefGoogle Scholar
Baryshnikov, Y. (2001). GUEs and queues. Probab. Theory Relat. Fields 119, 256274.CrossRefGoogle Scholar
Berman, S. (1992). Sojourns and Extremes of Stochastic Processes. Chapman and Hall/CRC, New York.Google Scholar
Bojdecki, T., Gorostiza, L. and Talarczyk, A. (2004). Sub-fractional Brownian motion and its relation to occupation times. Statist. Probab. Lett. 69, 405419.CrossRefGoogle Scholar
Chan, H. and Lai, T. (2006). Maxima of asymptotically Gaussian random fields and moderate deviation approximations to boundary crossing probabilities of sums of random variables with multidimensional indices. Ann. Probab. 34, 80121.CrossRefGoogle Scholar
Cheng, D. and Liu, P. (2019). Extremes of spherical fractional Brownian motion. Extremes 22, 433457.CrossRefGoogle Scholar
Dębicki, K., Hashorva, E. and Ji, L. (2016). Extremes of a class of nonhomogeneous Gaussian random fields. Ann. Probab. 44, 9841012.CrossRefGoogle Scholar
Dębicki, K., Hashorva, E. and Liu, P. (2017). Uniform tail approximation of homogenous functionals of Gaussian fields. Adv. Appl. Probab. 49, 10371066.CrossRefGoogle Scholar
Dębicki, K. and Tabiś, K. (2020). Pickands-Piterbarg constants for self-similar Gaussian processes. Probab. Math. Statist. 40, 297315.Google Scholar
Dzhaparidze, K. and Zanten, H. (2004). A series expansion of fractional Brownian motion. Probab. Theory Relat. Fields 130, 3955.CrossRefGoogle Scholar
Fatalov, V. (1993). Asymptotics of large deviation probabilities for Gaussian fields: Applications. Izvestiya Natsionalnoi Akademii Nauk Armenii 28, 25–51.
Glynn, P. and Whitt, W. (1991). Departures from many queues in series. Ann. Appl. Probab. 1, 546–572.
Grabiner, D. (1999). Brownian motion in a Weyl chamber, non-colliding particles, and random matrices. Annales de l’IHP Probabilités et Statistiques 35, 177–204.
Gravner, J., Tracy, C. and Widom, H. (2001). Limit theorems for height fluctuations in a class of discrete space and time growth models. J. Stat. Phys. 102, 1085–1132.
Hashorva, E. and Ji, L. (2015). Piterbarg theorems for chi-processes with trend. Extremes 18, 37–64.
Houdré, C. and Villa, J. (2003). An example of infinite dimensional quasi-helix. In Stochastic Models (Mexico City, 2002), Contemporary Mathematics, Vol. 336. American Mathematical Society, Providence, RI, pp. 195–202.
Ledoux, M. (1996). Isoperimetry and Gaussian Analysis. Springer, Berlin, Heidelberg.
Lei, P. and Nualart, D. (2009). A decomposition of the bifractional Brownian motion and some applications. Statist. Probab. Lett. 79, 619–624.
Li, W. and Shao, Q. (2004). Lower tail probabilities for Gaussian processes. Ann. Probab. 32, 216–242.
Lifshits, M. (2013). Gaussian Random Functions, Vol. 322. Springer, Dordrecht.
Lindgren, G. (1980). Extreme values and crossings for the chi-square process and other functions of multidimensional Gaussian processes, with reliability applications. Adv. Appl. Probab. 12, 746–774.
Liu, P. (2016). Extremes of Gaussian random fields with maximum variance attained over smooth curves. arXiv preprint arXiv:1612.07780.
Liu, P. and Ji, L. (2016). Extremes of chi-square processes with trend. Probab. Math. Statist. 36, 1–20.
Liu, P. and Ji, L. (2017). Extremes of locally stationary chi-square processes with trend. Stoch. Process. Appl. 127, 497–525.
Bai, L., Dȩbicki, K., Hashorva, E. and Luo, L. (2018). On generalised Piterbarg constants. Methodol. Comput. Appl. Probab. 20, 137–164.
O’Connell, N. (2002). Random matrices, non-colliding processes and queues. Séminaire de probabilités de Strasbourg 36, 165–182.
Piterbarg, V. (1994). High excursions for nonstationary generalized chi-square processes. Stoch. Process. Appl. 53, 307–337.
Piterbarg, V. (1996). Asymptotic Methods in the Theory of Gaussian Processes and Fields, Translations of Mathematical Monographs, Vol. 148. American Mathematical Society, Providence, RI.
Piterbarg, V. (2024). High excursion probabilities for Gaussian fields on smooth manifolds. Theory Probab. Appl. 69, 294–312.
Piterbarg, V. and Prisyazhnyuk, V. (1981). The exact asymptotics for the probability of large span of a Gaussian stationary process. Teoriya Veroyatnostei i ee Primeneniya 26, 480–495.
Talagrand, M. (2005). The Generic Chaining: Upper and Lower Bounds of Stochastic Processes. Springer, Berlin, Heidelberg.