1. Introduction
Let
$X({\boldsymbol{t}}),\ {\boldsymbol{t}}\in \mathbb{R}^n$
,
$n>1$
be a centered Gaussian field with continuous sample paths. Due to its significance in the extreme value theory of stochastic processes, statistics, and applied probability, the distributional properties of
$$\sup_{\boldsymbol{t}\in \mathcal{A}} X(\boldsymbol{t}), \tag{1.1}$$
with a bounded set
$\mathcal{A}\subset \mathbb{R}^n$
, have been extensively investigated. While the exact distribution of (1.1) is known only for certain specific processes, the asymptotics of
$$\mathbb{P}\Big\{\sup_{\boldsymbol{t}\in \mathcal{A}} X(\boldsymbol{t})>u\Big\} \tag{1.2}$$
as
$u\to\infty$
have been intensively analyzed; see, e.g., the monographs by Adler & Taylor [Reference Adler and Taylor2], Azaïs & Wschebor [Reference Azaïs and Wschebor3], Berman [Reference Berman7], Ledoux [Reference Ledoux21], Lifshits [Reference Lifshits24], Piterbarg [Reference Piterbarg31], Talagrand [Reference Talagrand34], and the references therein. As advocated therein, the set of points that maximize the variance
$\mathcal{M}^\star\,:\!=\,\arg \max_{\boldsymbol{t}\in {\mathcal{A}}} {\mathrm{Var}}(X(\boldsymbol{t}))$
plays a crucial role in determining the exact asymptotics of (1.2). The best-understood cases involve situations where (i)
$v_n(\mathcal{M}^\star) \in (0, \infty)$
, with
$v_n$
representing the Lebesgue measure on
$\mathbb{R}^n$
, and the field
$X(\boldsymbol{t})$
is homogeneous on
$\mathcal{M}^\star$
, or (ii) the set
$\mathcal{M}^\star$
consists of distinct points. In case (i), one can argue that

For an intuitive description of case (ii), suppose that
$\mathcal{M}^\star=\{\boldsymbol{t}^\star\}$
and
${\mathrm{Var}}(X(\boldsymbol{t}^\star))=1$
. Then, the interplay between the local behavior of the standard deviation and the correlation function in the vicinity of
$\mathcal{M}^\star$
affects the asymptotics, which takes the form
$$\mathbb{P}\Big\{\sup_{\boldsymbol{t}\in \mathcal{A}} X(\boldsymbol{t})>u\Big\}=f(u)\,\Psi(u)(1+o(1))\quad \text{as } u\to\infty, \tag{1.3}$$
where f(u) is some power function and $\Psi(\cdot)$ denotes the tail distribution function of the standard normal random variable. An applicable assumption for obtaining the exact asymptotics as described in (1.3) is that, in the neighborhood of
$\boldsymbol{t}^\star$
, both the standard deviation and the correlation function of
$X(\boldsymbol{t})$
factorize according to the additive form

as
$\boldsymbol{s},\boldsymbol{t} \to \boldsymbol{t}^\star$
, where the coordinates of
$\mathbb{R}^n$
are split into disjoint sets
$\Lambda_1,\Lambda_2,\Lambda_3$
with
$\Lambda_1\cup\Lambda_2\cup\Lambda_3=\{1,\ldots ,n\}$
,
$\bar{\boldsymbol{t}}_j=(t_{i})_{i\in \Lambda_j},$
$j=1,2,3$
for
$\boldsymbol{t} \in \mathbb{R}^n$
and
$g_j,h_j$
are some homogeneous functions (see (2.7)) such that

Under conditions (1.4)–(1.5), the function f introduced in (1.3) can be factorized as
$$f(u)=f_1(u)\,f_2(u)\,f_3(u),$$
where
$f_i$
corresponds to
$\Lambda_i$
and we have the following.
• In the direction of the coordinates
$\Lambda_1$ , the standard deviation function is relatively flat compared with the correlation function. Then, for the coordinates
$\Lambda_1$ , a substantial neighborhood of
$\mathcal{M}^*$ contributes to the asymptotics, and
$f_1(u) \to \infty$ as
$u \to \infty$ .
• In the direction of the coordinates
$\Lambda_2$ , the standard deviation function is comparable to the correlation function. Then, with respect to the coordinates
$\Lambda_2$ , some relatively small neighborhood of
$\mathcal{M}^*$ is important for the asymptotics, and
$f_2(u)\to \mathcal{P}\in (1,\infty)$ as
$u\to\infty$ .
• In the direction of the coordinates
$\Lambda_3$ , the standard deviation function decreases relatively fast compared with the correlation function. Then, for the coordinates
$\Lambda_3$ , only the sole optimizer
$\boldsymbol{t}^\star$ is responsible for the asymptotics, and
$f_3(u)\to 1$ as
$u\to\infty$ . We refer the reader to Piterbarg [Reference Piterbarg31, Chapter 8] for more details; a classical one-dimensional illustration of this trichotomy is sketched below.
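For orientation, consider the classical one-dimensional situation: $\mathcal{A}=[0,1]$, a unique maximizer $t^\star=0$ of the variance, and, as $s,t\to 0$, $\sigma(t)=1-b\,t^{\beta}(1+o(1))$ and $r(s,t)=1-a\,\lvert s-t\rvert^{\alpha}(1+o(1))$ with $a,b>0$, $\alpha\in(0,2]$, $\beta>0$. The trichotomy then takes the following form (a sketch of the standard result, with constants as in [Reference Piterbarg31]; $\mathcal{H}_{B^{\alpha}}$ and $\mathcal{P}_{B^{\alpha}}^{b/a}$ denote the Pickands and Piterbarg constants introduced in Section 2, and this display is our illustration rather than a statement of this paper):
$$\mathbb{P}\Big\{\sup_{t\in[0,1]}X(t)>u\Big\}\sim\begin{cases} \mathcal{H}_{B^{\alpha}}\,a^{1/\alpha}\,\Gamma(1/\beta+1)\,b^{-1/\beta}\,u^{2/\alpha-2/\beta}\,\Psi(u) & \text{if } \alpha<\beta \quad (f(u)\to\infty),\\ \mathcal{P}_{B^{\alpha}}^{b/a}\,\Psi(u) & \text{if } \alpha=\beta \quad (f(u)\to\mathcal{P}\in(1,\infty)),\\ \Psi(u) & \text{if } \alpha>\beta \quad (f(u)\to 1). \end{cases}$$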
Much less is known about the mixed cases when the set
$\mathcal{M}^\star$
is a more general subset of
$\mathcal{A}$
and/or when the local dependence structure of the analyzed process does not factorize according to the additive structure as in (1.4)–(1.5).
The exceptions available in the literature have been analyzed separately and address specific cases; see, e.g., [Reference Adler and Brown1, Reference Chan and Lai9–Reference Dębicki, Hashorva and Ji11, Reference Liu26, Reference Piterbarg and Prisyazhnyuk33]. We would like to highlight a significant recent contribution by Piterbarg [Reference Piterbarg32], which focuses on the analysis of high excursion probabilities for centered Gaussian fields defined on a finite-dimensional manifold, where
$\mathcal{M}^\star$
is a smooth submanifold. In this intuitively presented work, under the assumption that the correlation function of X is locally homogeneous, three scenarios for
$\mathcal{M}^\star\varsubsetneq \mathcal{A}$
are examined: (i) the stationary-like case, (ii) the transition case, and (iii) the Talagrand case. Under the notation in (1.4)–(1.5), these scenarios correspond to
$\Lambda_2=\Lambda_3=\emptyset$
for (i),
$\Lambda_1=\Lambda_3=\emptyset$
for (ii), and
$\Lambda_1=\Lambda_2=\emptyset$
for (iii).
The primary finding of this contribution, presented in Theorem 2.1, is a unified result that provides the exact asymptotic behavior of (1.2) for a certain class of centered Gaussian fields for which
$\mathcal{M}^\star$
is a
$k_0$
-dimensional ($k_0\le n$) bounded Jordan set and the dependence structure of the entire field in the vicinity of
$\mathcal{M}^\star$
does not necessarily follow the decompositions outlined in (1.4)–(1.5). In contrast to [Reference Piterbarg32], we allow mixed scenarios where all sets
$\Lambda_1$
,
$\Lambda_2$
, and
$\Lambda_3$
can be nonempty simultaneously. Furthermore, we examine more general local structures of the correlation function than those presented in (1.4). More specifically, we relax the assumption that the correlation function is locally stationary for coordinates in
$\Lambda_2, \Lambda_3$
by replacing
$h_j(\overline{\boldsymbol{s}}_j - \bar{\boldsymbol{t}}_j)$
with
$\tilde{h}_j(\overline{\boldsymbol{s}}_j, \bar{\boldsymbol{t}}_j)$
in (1.4). This generalization, which constitutes the main technical challenge of this contribution, is particularly important for the examples discussed in Sections 3.1 and 3.2.
In Section 3 we present two examples that demonstrate the applicability of Theorem 2.1. Specifically, in Section 3.1 we derive the exact asymptotics of
$$\mathbb{P}\{D_n^{\alpha}>u\}, \tag{1.6}$$
where
$$D_n^{\alpha}\,:\!=\,\sup_{0\leq t_1\leq\cdots\leq t_n\leq 1} Z^{\alpha}(t_1,\ldots,t_n)$$
and
$$Z^{\alpha}(t_1,\ldots,t_n)\,:\!=\,\sum_{i=1}^{n+1} a_i\big(B^{\alpha}_{i}(t_i)-B^{\alpha}_{i}(t_{i-1})\big),$$
with
$t_0=0,t_{n+1}=1$
, constants
$a_i\in(0,1]$
and
$B^{\alpha}_{i},\ i=1,\ldots, n+1$
being mutually independent fractional Brownian motions with Hurst index
$\alpha/2\in(0,1)$
. This random variable plays an important role in many areas of probability theory, and its analysis motivates the development of the theory presented in this paper. Due to its relation with some notions based on the performance table (see Section 3.1), the random variable
$D_n^{1}$
emerges as a limit in several important quantities considered in the modeling of queues in series, totally asymmetric exclusion processes, or oriented percolation [Reference Baryshnikov6, Reference Glynn and Whitt16, Reference O’Connell29]. If
$ a_i \equiv 1 $
then
$ D_n^{1} $
has the same distribution as the largest eigenvalue of an n-dimensional Gaussian unitary ensemble (GUE) matrix [Reference Gravner, Tracy and Widom18]. If
$\alpha = 1$
but the values of
$a_i$
are not all the same, then the size of
$\mathcal{M}^\star$
depends on the number of coordinates for which
$a_i = 1$
(recall that we assume that
$a_i \leq 1$
). In this case, the correlation structure of the entire field is not locally homogeneous. Utilizing Theorem 2.1 allows us to derive the exact asymptotics of (1.6) as
$u\to\infty$
for
$\alpha\in(0,2)$
; see Proposition 3.1.
Another application of Theorem 2.1 addresses the extremes of the class of chi processes
$\chi(t),t\ge0$
, defined as
$$\chi(t)\,:\!=\,\bigg(\sum_{i=1}^{n}X_i^2(t)\bigg)^{1/2},$$
where
$X_i(t)$
,
$i=1,\ldots ,n$
are mutually independent Gaussian processes. Due to their importance in statistics, asymptotic properties of high excursions of chi processes have attracted substantial interest. We refer to the classical work by Lindgren [Reference Lindgren25] and more recent contributions [Reference Bai and Kalaj5, Reference Hashorva and Ji19, Reference Liu and Ji27, Reference Liu and Ji28, Reference Piterbarg30, Reference Piterbarg32], which address nonstationary or noncentered cases. Importantly,
$\sup\nolimits_{t\in [0,1]} \chi(t)$
can be rewritten as a supremum of some Gaussian field:
$$\sup_{t\in [0,1]} \chi(t)=\sup_{(\boldsymbol{v},t)\in S_{n-1}\times[0,1]}\,\sum_{i=1}^{n}v_iX_i(t),\qquad S_{n-1}\,:\!=\,\{\boldsymbol{v}\in\mathbb{R}^n\colon \lvert \boldsymbol{v}\rvert=1\}.$$
However, the common assumption on the models analyzed so far is that
$X_i(t)$
are locally stationary, as in (1.4). In Section 3.2 we use Theorem 2.1 to examine the asymptotics of the probability for high exceedances of
$\chi(t)$
in a model where the covariance structure of
$X_i$
is not locally stationary; see Proposition 3.2 for more details.
The remainder of this paper is organized as follows. The concept and main steps of the proof of Theorem 2.1 are presented in Section 4. Detailed proofs of Theorem 2.1, Propositions 3.1 and 3.2, and several auxiliary results can be found in the appendices.
2. Main Result
Let
$X(\boldsymbol{t}),\ \boldsymbol{t}\in \mathcal{A}$
be an n-dimensional centered Gaussian field with continuous trajectories, variance function
$\sigma^2(\boldsymbol{t})$
, and correlation function
$r(\boldsymbol{s},\boldsymbol{t})$
, where
$\mathcal{A}$
is a bounded set in
$\mathbb{R}^n$
. Suppose that the maximum of the variance function
$\sigma^2(\boldsymbol{t})$
over
$\mathcal{A}$
is attained on a Jordan subset of
$\mathcal{A}$
. Without loss of generality, let us assume that
$\max_{\boldsymbol{t}\in \mathcal{A}} \sigma^2(\boldsymbol{t})=1$
. We denote by
$\mathcal{M}^*$
the set
$\{\boldsymbol{t}\in \mathcal{A}\colon \sigma^2(\boldsymbol{t})=1 \}$
.
Throughout this paper, all the operations on vectors are meant componentwise. For instance, for any given
${\bf{x}} = (x_1,\ldots, x_n)\in\mathbb{R}^n$
and
${\bf{y}} = (y_1,\ldots, y_n)\in \mathbb{R}^n$
, we write
${\bf{x}}{\bf{y}}=(x_1y_1,\ldots, x_ny_n)$
,
$1/{\bf{x}}=(1/x_1,\dots, 1/x_n)$
for
$x_i> 0,\ i=1,\dots, n$
, and
${\bf{x}}^{{\bf{y}}}=(x_1^{y_1},\dots, x_n^{y_n})$
for
$x_i, y_i\geq 0, \ i=1,\dots, n$
. Moreover, we say that
${\bf{x}}\geq {\bf{y}}$
if
$x_i\geq y_i,\ i=1,\dots, n$
.
Suppose that the coordinates of
$\mathbb{R}^n$
are split into four disjoint sets
$\Lambda_i,\ i=0,1,2,3$
with
$k_i=\#\bigcup_{j=0}^i\Lambda_j, i=0,1,2,3$
(implying that
$1\leq k_0\leq k_1\leq k_2\leq k_3 $
with
$k_3=n$
) and

in such a way that
$\mathcal{M}^*=\{\boldsymbol{t}\in\mathcal{A}\colon t_{i}=0, i\in \bigcup_{j=1,2,3}\Lambda_j\}$
Let
$$\mathcal{M}\,:\!=\,\big\{\tilde{\boldsymbol{t}}=(t_{i})_{i\in\Lambda_0}\colon \boldsymbol{t}\in \mathcal{M}^*\big\}$$
denote the projection of
$\mathcal M^*$
onto a
$k_0$
-dimensional space. Note that
$\mathcal{M}^*=\mathcal{A}$
if
$\bigcup_{j=1,2,3}\Lambda_j=\emptyset$
. Sets
$\Lambda_1,\Lambda_2,\Lambda_3$
play roles similar to those described in the introduction (see (A2) below), while
$\Lambda_0$
is related to
$\mathcal{M}^*$
via
$\mathcal{M}$
.
Suppose that
$\mathcal{M}$
is Jordan measurable with
$v_{k_0}(\mathcal{M})\in (0,\infty)$
, where
$v_{k_0}$
denotes the Lebesgue measure on
$\mathbb{R}^{k_0}$
, and
$\{(t_1,\dots, t_n)\colon \tilde{\boldsymbol{t}}\in\mathcal{M},\ t_i\in [0,\epsilon),\ i\in \bigcup_{j=1,2,3}\Lambda_j \}\subseteq \mathcal{A}\subseteq \{(t_1,\dots, t_n)\colon \tilde{\boldsymbol{t}}\in\mathcal{M},\ t_i\in [0,\infty),\ i\in \bigcup_{j=1,2,3}\Lambda_j \}$
for some
$\varepsilon\in(0,1)$
small enough. Furthermore, we impose the following assumptions on the standard deviation and the correlation functions of X.
(A1) There exists a centered Gaussian random field
$W(\boldsymbol{t}),\ \boldsymbol{t}\in[0,\infty)^n$
with continuous sample paths and a positive continuous vector-valued function
$\boldsymbol{a}(\tilde{\boldsymbol{z}})=(a_1(\tilde{\boldsymbol{z}}),\ldots, a_n(\tilde{\boldsymbol{z}})),\ \tilde{\boldsymbol{z}}=(z_i)_{i\in\Lambda_0}\in \mathcal{M}$
satisfying

such that

where the increments of W are homogeneous if we fix both
$\bar{\boldsymbol{t}}_2$
and
$ \bar{\boldsymbol{t}}_3$
, and there exists a vector
$\boldsymbol{\alpha}=(\alpha_1,\dots, \alpha_n)$
with
$\alpha_i\in (0,2],1\leq i\leq n$
such that, for any
$u>0$
,

Moreover, there exist
$d>0$
,
$\mathcal{Q}_i>0$
,
$i=1,2$
such that, for any
$\boldsymbol{s},\boldsymbol{t}\in \mathcal{A}$
and
$|\boldsymbol{s}-\boldsymbol{t}| < d$
,

Furthermore, suppose that, for
$\boldsymbol{s},\boldsymbol{t}\in\mathcal{A}$
and
$\boldsymbol{s}\neq \boldsymbol{t}$
,

(A2) Assume that

where
$p_j(\tilde{\boldsymbol{t}}),\ \tilde{\boldsymbol{t}}\in[0,\infty)^{k_0}, j=1,2,3,$
are positive continuous functions and
$g_j(\bar{\boldsymbol{t}}_j),\bar{\boldsymbol{t}}_j\in\mathbb{R}^{k_j-k_{j-1}}, j=1,2,3$
, are continuous functions satisfying
$g_j(\bar{\boldsymbol{t}}_j)>0$ for $\bar{\boldsymbol{t}}_j\neq \overline{\textbf{0}}_j,\ j=1,2,3.$
Moreover, we assume the following homogeneity property on the
$g_j$
: there exist some
$\boldsymbol{\beta}_j=(\beta_{i})_{i\in \Lambda_j}, j=1,2,3$
with
$\beta_k > 0, k\in \bigcup_{j=1,2,3}\Lambda_j, $
such that, for any
$u>0$
,

Moreover, with
$\boldsymbol{\alpha}_j=(\alpha_{i})_{i\in \Lambda_j}, j=1,2,3$
,

Assumption (A1), which includes (2.1)–(2.5), addresses the local dependence structure of the analyzed Gaussian field in a neighborhood of the set
$\mathcal{M}^*$
of points that maximize the variance of X. The function
$\boldsymbol{a}(\cdot)$
can be modified based on the location where the correlation is being tested. Property (2.3) refers to the self-similarity of
$W(\cdot)$
with respect to each coordinate. In comparison to models previously discussed in the literature, the major novelty of (A1) lies in the fact that we do not assume homogeneity of the increments of
$W(\cdot)$
with respect to the coordinates in
$\Lambda_2\cup \Lambda_3$
. It enables us to examine the dependence structures of
$X(\cdot)$
that extend beyond local stationarity. Assumption (A2), which includes (2.6)–(2.8), addresses the behavior of the variance function of
$X(\cdot)$
in the vicinity of
$\mathcal{M}^*$
. Property (2.8) straightforwardly corresponds to the three scenarios described in (1.5) in the introduction.
We next display the main result of this paper. Throughout the remainder of this paper,
$\Psi(\cdot)$
denotes the tail distribution of the standard normal random variable.
Theorem 2.1. Suppose that
$X(\boldsymbol{t}),\ \boldsymbol{t}\in \mathcal{A}$
is an n-dimensional centered Gaussian random field satisfying (A1) and (A2). Then, as
$u\to\infty$
,

where

with
$\boldsymbol{a}_2(\tilde{\boldsymbol{z}})=(a_i(\tilde{\boldsymbol{z}}))_{i\in \Lambda_2}$
and

Remark 1. The result in Theorem 2.1 is also valid if some
$\Lambda_i,\ i=0,1,2,3$
are empty sets.
Next, let us consider a special case of Theorem 2.1 that focuses on the locally stationary structure of the correlation function of
$X(\cdot)$
in the neighborhood of
${\mathcal{M}^*}$
, which partially generalizes Theorems 7.1 and 8.1 of [Reference Piterbarg31]. Suppose that


These conditions, along with assumptions (A1) and (A2), lead to a natural set of models that satisfy an additive structure as in (1.4) and (1.5) and were considered by Piterbarg [Reference Piterbarg31]. We note that in [Reference Piterbarg31] the special cases of purely homogeneous fields, characterized by a constant variance function where
$\Lambda_1 = \Lambda_2 = \Lambda_3 = \emptyset$
, and fields that have a unique maximizer of the variance function (
$\Lambda_0 = \emptyset$
), are analyzed separately. In the proposition below, we allow mixed scenarios where all sets
$\Lambda_0, \Lambda_1, \Lambda_2, \Lambda_3 \neq \emptyset$
.
Let
$\Gamma(x)=\int_{0}^{\infty} s^{x-1}{\mathrm{e}}^{-s} \,{\mathrm{d}} s$
for
$x>0$
. For
$\alpha\in (0,2]$
,
$\lambda>0$
and
$b>0$
, we define the Pickands and Piterbarg constants as
$$\mathcal{H}_{B^{\alpha}}\,:\!=\,\lim_{\lambda\rightarrow\infty}\frac{1}{\lambda}\,\mathbb{E}\Big\{\exp\Big(\sup_{t\in[0,\lambda]}\big(\sqrt{2}B^{\alpha}(t)-t^{\alpha}\big)\Big)\Big\},\qquad \mathcal{P}_{B^{\alpha}}^{b}\,:\!=\,\lim_{\lambda\rightarrow\infty}\mathbb{E}\Big\{\exp\Big(\sup_{t\in[0,\lambda]}\big(\sqrt{2}B^{\alpha}(t)-(1+b)t^{\alpha}\big)\Big)\Big\}, \tag{2.11}$$
where
$B^{\alpha}$
is a standard fractional Brownian motion with zero mean and covariance
$$\mathrm{Cov}\big(B^{\alpha}(s),B^{\alpha}(t)\big)=\tfrac{1}{2}\big(\lvert s\rvert^{\alpha}+\lvert t\rvert^{\alpha}-\lvert s-t\rvert^{\alpha}\big).$$
For properties of Pickands and Piterbarg constants, we refer the reader to [Reference Piterbarg31] and the references listed therein.
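Since closed forms for $\mathcal{H}_{B^{\alpha}}$ and $\mathcal{P}_{B^{\alpha}}^{b}$ are known only in special cases, a crude Monte Carlo sketch can be useful for intuition. The following Python snippet is our illustration, not part of the paper: the function names are ours, and the truncation level `lam` and grid size `m` make both estimates biased (and the estimators are heavy-tailed). It samples fractional Brownian motion via a Cholesky factorization and plugs it into the defining expectations, using the Brownian closed forms $\mathcal{H}_{B^{1}}=1$ and $\mathcal{P}_{B^{1}}^{b}=1+1/b$ as a sanity check.

```python
import numpy as np

rng = np.random.default_rng(0)

def fbm_paths(alpha, lam, m, n_paths):
    # Fractional Brownian motion B^alpha on the grid t_k = k*lam/m, k = 1..m,
    # sampled via a Cholesky factor of Cov(B(s), B(t)) = (s^a + t^a - |s-t|^a)/2.
    t = lam * np.arange(1, m + 1) / m
    S, T = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (S**alpha + T**alpha - np.abs(S - T)**alpha)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(m))
    return t, L @ rng.standard_normal((m, n_paths))

def constants_estimate(alpha, b, lam=10.0, m=1000, n_paths=4000):
    # Plug-in estimates of H_{B^alpha} and P^b_{B^alpha}; the truncation to
    # [0, lam] and the discretization both introduce bias.
    t, B = fbm_paths(alpha, lam, m, n_paths)
    drift = t[:, None] ** alpha
    sup_h = np.maximum((np.sqrt(2) * B - drift).max(axis=0), 0.0)
    sup_p = np.maximum((np.sqrt(2) * B - (1 + b) * drift).max(axis=0), 0.0)
    return np.exp(sup_h).mean() / lam, np.exp(sup_p).mean()

# Sanity check against the closed forms for Brownian motion (alpha = 1):
# H_{B^1} = 1 and P^b_{B^1} = 1 + 1/b.
print(constants_estimate(alpha=1.0, b=1.0))  # roughly (1, 2), up to bias
```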
The following proposition straightforwardly follows from Theorem 2.1.
Proposition 2.1. Under the assumptions of Theorem 2.1, if (2.9)–(2.10) hold, then

where

3. Applications
In this section we illustrate our main results by applying Theorem 2.1 to two classes of Gaussian fields with nonstandard structures of their correlation function.
3.1. The performance table and the largest eigenvalue of the GUE matrix
Let
$$Z^{\alpha}(\boldsymbol{t})\,:\!=\,\sum_{i=1}^{n+1}a_i\big(B^{\alpha}_{i}(t_i)-B^{\alpha}_{i}(t_{i-1})\big),\qquad \boldsymbol{t}=(t_1,\ldots,t_n), \tag{3.1}$$
where
$t_0=0,t_{n+1}=1$
and
$B^{\alpha}_{i},\ i=1,\ldots, n+1$
are mutually independent fractional Brownian motions with Hurst index
$\alpha/2\in(0,1)$
and
$a_i>0,\ i=1,\ldots, n+1$
We are interested in the asymptotics of
$$\mathbb{P}\{D_n^{\alpha}>u\},\qquad D_n^{\alpha}\,:\!=\,\sup_{\boldsymbol{t}\in \mathcal{S}_{n}}Z^{\alpha}(\boldsymbol{t}), \tag{3.2}$$
for large u, where
$\mathcal{S}_{n}=\{\boldsymbol{t}\in \mathbb{R}^n\colon 0\leq t_1\leq\cdots\leq t_n\leq 1\}$
. Without loss of generality, we assume that
$\max_{i=1,\dots, n+1} a_i=1$
.
The random variable
$D_n^\alpha$
arises in many problems that are important in both theoretical and applied probability. Specifically, it is closely related to the notion of the performance table. More precisely, following [Reference Baryshnikov6], let
$\boldsymbol{w}=(w_{ij}), i,j\geq 1$
be a family of independent random variables indexed by the integer points of the first quadrant of the plane. A monotone path
$\pi$
from (i,j) to
$(i',j'), i\leq i'; j\leq j'; i,j,i',j'\in\mathbb{N}$
is a sequence
$(i,j)=(i_0,j_0), (i_1,j_1),\ldots, (i_l,j_l)=(i',j')$
of length
$l=i'+j'-i-j$
, such that all lattice steps
$(i_m,j_m)\rightarrow (i_{m+1},j_{m+1})$
are of size one and (consequently) go to the north or the east. The weight
$\boldsymbol{w}(\pi)$
of such a path is the sum of all entries of the array
$\boldsymbol{w}$
along the path. We define the performance table
$l(i,j), i,j \in \mathbb{N}$
as the array of largest path weights from (1, 1) to (i, j), that is,
$$l(i,j)\,:\!=\,\max_{\pi\colon (1,1)\rightarrow (i,j)}\boldsymbol{w}(\pi).$$
If
$\text{Var}(w_{ij})\equiv v>0$
and
$\mathbb{E}\left\{w_{ij}\right\}\equiv e$
for all i, j, then

converges in law as
$k\rightarrow\infty$
to
$D_n^1$
with
$a_i\equiv 1$
; see [Reference Baryshnikov6]. Notably,
$D_n^1$
has a queueing interpretation, e.g. in the analysis of departure times from queues in series [Reference Glynn and Whitt16] and plays an important role in the analysis of noncolliding Brownian motions [Reference Grabiner17]. Moreover, as observed in [Reference Baryshnikov6], if
$a_i\equiv 1$
then
$D^1_n$
has the same law as the largest eigenvalue of an n-dimensional GUE random matrix; see [Reference O’Connell29].
Let
$$\mathcal{N}\,:\!=\,\{i\colon a_i=1,\ i=1,\ldots, n+1\},\qquad \mathfrak{m}\,:\!=\,\#\mathcal{N}, \tag{3.3}$$
where
$\#\mathcal{N}$
denotes the cardinality of
$\mathcal{N}$
. For
$k^*=\max\{i\in\mathcal{N}\}$
and
${\bf{x}}=(x_1,\ldots, x_{k^*-1}, x_{k^*+1},\ldots, x_{n+1})$
, we define

where
$B_i, \widetilde{B}_i$
are independent standard Brownian motions and

with the convention that
$\max\emptyset=1$
.
For
$\mathfrak{m}$
given in (3.3), let

It appears that, for
$\alpha=1$
and
$\mathfrak{m} < n+1$
, the field
$Z^1$
satisfies (A1) with W as given in (3.4). Notably, W has stationary increments with respect to the coordinates in
$\mathcal{N}$
, while its increments are not stationary with respect to the coordinates in
$\mathcal{N}^c$
; see (B.11) in the proof of the following proposition. Moreover, we have
$\Lambda_0=\mathcal{N}$
,
$\Lambda_1=\emptyset$
,
$\Lambda_2=\mathcal{N}^c$
,
$\Lambda_3=\emptyset$
.
Proposition 3.1. For
$Z^{\alpha}$
defined in (3.1), we have, as
$u\rightarrow\infty$
,

where
$\sigma_*=(\sum_{i=1}^{n+1}a_i^{{2}/{(1-\alpha)}})^{{(1-\alpha)}/{2}}$
and

Remark 3.1.
(i) If
$1\leq \mathfrak{m}\leq n$ , then
$1 \le \mathcal{H}_W\leq n^{\mathfrak{m}-1}\prod_{i\in\mathcal{N}^c}(1+{2n}/{(1-a_i^2)}).$
(ii) If
$\mathfrak{m}=n+1$ , then
$\mathcal{H}_W=1$ .
To prove Proposition 3.1, we distinguish three scenarios based on the value of
$\alpha$
:
$\alpha \in (0,1)$
,
$\alpha = 1$
, and
$\alpha \in (1,2)$
. The cases of
$\alpha\in(0,1)$
and
$\alpha\in(1,2)$
can be derived from [Reference Piterbarg31, Theorem 8.2], where the maximum of the variance function of
$Z^{\alpha}$
is attained at a finite number of points.
$\alpha=1$
fundamentally differs from the abovementioned cases. This is because, depending on the values of
$a_i$
, the maximum of the variance function of
$Z^1$
is attained at a set
$\mathcal{M}$
that has a positive Lebesgue measure of dimension
$\mathfrak{m}-1$
, with
$\mathfrak{m}$
defined in (3.3), and the corresponding correlation function is not locally stationary in the vicinity of
$\mathcal{M}$
. We apply Theorem 2.1 in this case. The detailed proofs of Proposition 3.1 and Remark 3.1 are postponed to Appendix B and Appendix C, respectively.
3.2. Chi processes
Consider a chi process
$$\chi(t)\,:\!=\,\bigg(\sum_{i=1}^{n}X_i^2(t)\bigg)^{1/2},\qquad t\in[0,1], \tag{3.6}$$
where
$X_i(t)$
,
$i=1,\ldots ,n$
, are independent and identically distributed (i.i.d.) copies of
$\{X(t), t\in[0,1]\}$
, a centered Gaussian process with almost surely (a.s.) continuous trajectories. Suppose that

and

where
$\{Y(t),t\geq 0\}$
is a centered Gaussian process with a.s. continuous trajectories satisfying:
(B1)
$\{Y(t),t\geq 0\}$
is self-similar with index
$\alpha/2\in (0,1)$
(i.e. for all
$r>0$
,
$\{Y(rt),t\geq 0\}\buildrel d \over =\{r^{\alpha/2}Y(t),t\geq 0\},$
where
$\buildrel d \over =$
means the equality of finite dimensional distributions) and
$\sigma_Y(1)=1$
;
(B2) there exist
$c_Y>0$
and
$\gamma \in [\alpha, 2]$
such that
$$\mathbb{E}\big\{(Y(s)-Y(t))^2\big\}\leq c_Y\,\lvert s-t\rvert^{\gamma},\qquad s,t\in[0,1].$$
The class of processes that satisfy conditions (B1) and (B2) includes fractional Brownian motions, bifractional Brownian motions (see, e.g., [Reference Houdré and Villa20, Reference Lei and Nualart22]), subfractional Brownian motions (see, e.g., [Reference Bojdecki, Gorostiza and Talarczyk8, Reference Dzhaparidze and Zanten14]), dual-fractional Brownian motions (see, e.g., [Reference Li and Shao23]) and the time average of fractional Brownian motions (see, e.g., [Reference Dębicki13, Reference Li and Shao23]).
For a Gaussian process Y satisfying (B1) and (B2) and
$b>0$
, we introduce a generalized Piterbarg constant
$$\mathcal{P}_{Y}^{b}\,:\!=\,\lim_{\lambda\rightarrow\infty}\mathbb{E}\Big\{\exp\Big(\sup_{t\in[0,\lambda]}\big(\sqrt{2}\,Y(t)-\sigma_Y^2(t)-b\,t^{\alpha}\big)\Big)\Big\}. \tag{3.9}$$
We refer the reader to [Reference Dębicki13] for the properties of this constant.
The literature on the asymptotics of
$$\mathbb{P}\Big\{\sup_{t\in[0,1]}\chi(t)>u\Big\} \tag{3.10}$$
as
$u\to\infty$
focuses on the scenario where Y in (3.8) is a fractional Brownian motion. Then,
$1-r(s,t)\sim a|t-s|^\alpha$
as
$s,t\to 0$
for some
$\alpha \in (0,2]$
, which implies that the correlation function of X is locally homogeneous at 0; see e.g. [Reference Hashorva and Ji19, Reference Liu and Ji28, Reference Piterbarg30, Reference Piterbarg32]. In the following proposition, Y represents a general self-similar Gaussian process that satisfies conditions (B1) and (B2). This framework allows for locally nonhomogeneous structures of the correlation function of X, which have not been previously explored in the literature.
The idea of deriving the asymptotics of (3.10) is based on transforming it into the supremum of a Gaussian random field over a sphere; see [Reference Fatalov15, Reference Piterbarg30, Reference Piterbarg32]. More specifically, we use the fact that
$$\sup_{t\in[0,1]}\chi(t)=\sup_{(\boldsymbol{v},t)\in S_{n-1}\times[0,1]}\,\sum_{i=1}^{n}v_iX_i(t).$$
Next, we transform the Euclidean coordinates into spherical coordinates,

where
$\boldsymbol{\theta}=(\theta_1,\dots,\theta_{n-1})$
and
$\boldsymbol{\theta}\in [0,\pi]^{n-2}\times [0,2\pi)$
. For

we have

Consequently,

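The exact spherical parametrization behind (3.11)–(3.12) is not reproduced above; the following sketch (ours) implements the standard parametrization of the unit sphere, which we assume here, and checks that the resulting vector has unit norm.

```python
import numpy as np

def spherical_to_cartesian(theta):
    # Standard parametrization of the unit sphere S_{n-1}: for angles
    # theta_1, ..., theta_{n-2} in [0, pi] and theta_{n-1} in [0, 2*pi),
    #   v_k = cos(theta_k) * prod_{j<k} sin(theta_j),  k = 1, ..., n-1,
    #   v_n = prod_{j<=n-1} sin(theta_j).
    theta = np.asarray(theta, dtype=float)
    n = theta.size + 1
    v, sin_prod = np.empty(n), 1.0
    for k in range(n - 1):
        v[k] = np.cos(theta[k]) * sin_prod
        sin_prod *= np.sin(theta[k])
    v[-1] = sin_prod
    return v

v = spherical_to_cartesian([0.3, 1.1, 4.0])  # a point on the unit sphere in R^4
print(v, np.linalg.norm(v))                  # norm equals 1
```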
Then, it appears that the Gaussian field Z satisfies the assumptions of Theorem 2.1 with W in (2.2) and (2.3) given by

where
$B_i^2$
are independent fractional Brownian motions with index 2 and Y is a self-similar Gaussian process as described in (3.8) that is independent of
$B_i^2$
. Importantly, if Y is not a fractional Brownian motion then W, as defined above, does not have stationary increments with respect to the coordinate t. Moreover,
$\Lambda_0=\{1,\dots, n-1\}$
,
$\Lambda_1=\emptyset$
,
$\Lambda_2=\{n\}$
,
$\Lambda_3=\emptyset$
. An application of Theorem 2.1 leads to the following result.
Proposition 3.2. For
$\chi$
defined in (3.6) with X satisfying (3.7) and (3.8), we have

where
$P_Y^{a^{-1}b}$
is defined in (3.9).
4. Proof of Theorem 2.1
The idea of the proof of Theorem 2.1 is based on Piterbarg’s methodology [Reference Piterbarg31] combined with some refinements developed in [Reference Dębicki, Hashorva and Liu12]. The proof is divided into three steps. In the first step, we demonstrate that the supremum of X(t) over
$\mathcal{A}$
is primarily achieved on a specific subset. In the second step, we divide this subset into smaller hyperrectangles with sizes adjusted according to u. Then, we uniformly derive the tail probability asymptotics on each hyperrectangle. This part of the proof utilizes an adapted version of Theorem 2.1 from [Reference Dębicki, Hashorva and Liu12] (see Lemma 4.1 in Section 4.1). We first scale the parameter set appropriately to ensure that the rescaled hyperrectangles are independent of u. As a result, the scaled processes, denoted by
$ X_{u,l}(\cdot) $
, depend on both u and the position of the hyperrectangle l (see (4.5) in conjunction with (4.6)). Then we apply Lemma 4.1 for
$X_{u,l}(\cdot)$
. The upper bound for the analyzed asymptotic probability is the summation of the asymptotics over the corresponding hyperrectangles. For the lower bound, we apply the Bonferroni inequality, where the additional summation of the double high exceedance probabilities of X over all pairs of the hyperrectangles is tightly bounded. Finally, the third step focuses on summing the asymptotics from the second step to obtain the overall asymptotics.
We denote by
$\mathbb{Q}$
and
$\mathbb{Q}_i$
, for
$i=1,2,3,\dots$
, positive constants that may vary from line to line.
4.1. An adapted version of Theorem 2.1 in [Reference Dębicki, Hashorva and Liu12]
In this subsection we present a modified version of Theorem 2.1 from [Reference Dębicki, Hashorva and Liu12], which is crucial for proving Theorem 2.1. Let
$X_{u,\boldsymbol{l}}(\boldsymbol{t}),\, \boldsymbol{t}\in E\subset \mathbb{R}^n, \boldsymbol{l}\in K_u \subset \mathbb{R}^m, m\geq 1$
be a family of Gaussian random fields with variance 1, where
$E\subset \mathbb{R}^n$
is a compact set containing
$\textbf{0}$
and
$K_u\neq \emptyset$
. Moreover, assume that
$g_{u,\boldsymbol{l}}, \boldsymbol{l}\in K_u$
is a family of functions on E and
$u_{\boldsymbol{l}}, \boldsymbol{l}\in K_u$
are positive functions of u satisfying
$\lim_{u\rightarrow\infty}\inf_{\boldsymbol{l}\in K_u}u_{\boldsymbol{l}}=\infty$
. To obtain the uniform asymptotics of

with respect to
$\boldsymbol{l}\in K_u$
, we impose the following assumptions.
(C1) There exists a function g such that

(C2) There exists a centered Gaussian random field
$V(\boldsymbol{t}), \boldsymbol{t}\in E$
with
$V(\textbf{0})=0$
such that

(C3) There exist
$\gamma\in (0,2]$
and
$\mathcal{C}>0$
such that, for sufficiently large u,

At the beginning of Section 4, we noted that in the proof of Theorem 2.1 we would determine the precise asymptotics of the suprema for a collection of appropriately scaled Gaussian fields
$ X_{u,l} $
. The set of assumptions (C1)–(C3) is tailored to these scaled processes. In Section 4.2 we demonstrate that (A1) for X guarantees that (C2) and (C3) are uniformly satisfied for all
$X_{u,l}$
. In addition, (A2) ensures that (C1) holds.
Lemma 4.1. Let
$X_{u,\boldsymbol{l}}(\boldsymbol{t}), \boldsymbol{t}\in E\subset \mathbb{R}^n, \boldsymbol{l}\in K_u$
be a family of Gaussian random fields with variance 1,
$g_{u,\boldsymbol{l}}, \boldsymbol{l}\in K_u$
be functions defined on E and
$u_{\boldsymbol{l}}, \boldsymbol{l}\in K_u$
be positive constants. If (C1)–(C3) are satisfied then

where

4.2. Proof of Theorem 2.1
To simplify notation, we assume, without loss of generality, that
$\Lambda_0=\{1,\dots,k_0\}$
and
$\Lambda_i=\{k_{i-1}+1,\dots, k_i\}$
for
$i=1,2,3$
. Thus, we have
$\mathcal{M}^*=\{\boldsymbol{t}\in\mathcal{A}\colon t_{i}=0,\ i=k_0+1,\ldots, n\}$
and
$\mathcal{M}=\{\tilde{\boldsymbol{t}}\colon \boldsymbol{t}\in\mathcal{A}, t_{i}=0,\ i=k_0+1,\ldots, n\}$
. In the following, we present the proof of Theorem 2.1, postponing some tedious calculations to Appendix A.
4.2.1. Step 1
We divide
$\mathcal{A}$
into two sets, i.e.

a neighborhood of
$\mathcal{M}^*$
, which maximizes the variance of X(t) (with high probability the supremum is realized in
$E_2(u)$
) and the set
$\mathcal{A}\setminus E_2(u)$
, over which the probability associated with the supremum is asymptotically negligible. For the lower bound, we only consider the process over

a neighborhood of
$\mathcal{M}^*$
.
To simplify notation, for
$\Delta_1, \Delta_2 \subseteq\mathbb{R}^{n}$
, let

For any
$u>0$
, we have

Note that, in light of [Reference Piterbarg31, Theorem 8.1], by (2.4) in assumption (A1) and (2.7) in assumption (A2), for sufficiently large u,

4.2.2. Step 2
We divide
$\mathcal{M}$
into small hypercubes such that

where

and

For fixed
${\boldsymbol{r}}$
, we analyze the supremum of X over a set related to
$\mathcal{M}_{{\boldsymbol{r}}}$
. For this, let

Moreover, define an auxiliary set

We next focus on
$\mathbf{P}_u(E_{1,{\boldsymbol{r}}}(u))$
and
$\mathbf{P}_u(E_{2,{\boldsymbol{r}}}(u))$
. The idea of the proof of this step is first to split
$E_{1,{\boldsymbol{r}}}(u)$
and
$E_{2,{\boldsymbol{r}}}(u)$
into tiny hyperrectangles and uniformly derive the tail probability asymptotics on each hyperrectangle. Then, we apply the Bonferroni inequality to demonstrate that the asymptotics over
$E_{i,{\boldsymbol{r}}}(u)$
for
$i=1,2$
are the sum of the asymptotics over the corresponding hyperrectangles, respectively.
To this end, we introduce the following notation. For some
$\lambda>0$
, let

with
$\overline{\boldsymbol{0}}_3=(0,\dots,0)\in\mathbb{R}^{n-k_2}$
and

In order to derive an upper bound for
$\mathbf{P}_u(E_{2,{\boldsymbol{r}}}(u))$
and a lower bound for
$\mathbf{P}_u(E_{1,{\boldsymbol{r}}}(u))$
, we introduce the following notation for some
$\epsilon\in(0,1)$
:

The Bonferroni inequality gives, for sufficiently large u,


where

We first derive the upper bound of
$\mathbf{P}_u\left(E_{2,{\boldsymbol{r}}}(u)\right)$
as
$u\to\infty$
. To this end, we need to find the upper bounds of
$\sum_{\boldsymbol{l}\in\mathcal{L}_{j}(u)}\mathbf{P}_u\left(\mathcal{D}_{u}(\boldsymbol{l})\right), j=2,3$
, separately.
Upper bound for
$\sum_{\boldsymbol{l}\in\mathcal{L}_{2}(u)}\mathbf{P}_u\left(\mathcal{D}_{u}(\boldsymbol{l})\right)$
. By (2.6) in assumption (A2), we have, for sufficiently large u,

where

with

and

Note that by (2.7) in assumption (A2),

where

Moreover,

where

with

Hence,

Applying Lemma 4.1, we obtain

We refer to Appendix A.1 for the detailed calculations proving (4.7).
Upper bound for
$\sum_{\boldsymbol{l}\in\mathcal{L}_{3}(u)}\mathbf{P}_u\left(\mathcal{D}_{u}(\boldsymbol{l})\right)$
. We find a tight asymptotic upper bound for the second term displayed on the right-hand side of (4.4) using an approach similar to that used in deriving (4.7). For
$\lambda>1$
, we get

where
$\beta^*=\min_{i=k_1+1}^{k_2}(\beta_i).$
The detailed derivation of inequality (4.8) can be found in Appendix A.2.
Upper bound for
$\mathbf{P}_u(E_{2,{\boldsymbol{r}}}(u))$
. The combination of (4.7) and (4.8) yields, for
$\lambda>1$
and
$u\rightarrow\infty$
,

Next, we find a lower bound for
$\mathbf{P}_u(E_{1,{\boldsymbol{r}}}(u))$
as
$u\to\infty$
. To do this, we need to derive a lower bound for
$\sum_{\boldsymbol{l}\in\mathcal{L}_{1}(u)}\mathbf{P}_u\left(\mathcal{C}_u(\boldsymbol{l})\right)$
and upper bounds for
$\Gamma_i(u)$
, where
$i=1,2$
.
Lower bound for
$\sum_{\boldsymbol{l}\in\mathcal{L}_{1}(u)}\mathbf{P}_u\left(\mathcal{C}_u(\boldsymbol{l})\right)$
. Analogously to (4.7), we derive, as
$u\rightarrow\infty, \epsilon\rightarrow 0$
,

Upper bound for
$\Gamma_i(u),\ i=1,2$
. Applying an approach analogous to that of the proof of Theorem 8.2 in [Reference Piterbarg31], we have, for
$\lambda>1$
, as
$u\to\infty$
,


where
$\alpha^*=\max(\alpha_1,\dots, \alpha_{k_1})$
and
$\mathbb{Q}_i,\ i=4,5,6$
are some positive constants.
Lower bound for
$\mathbf{P}_u(E_{1,{\boldsymbol{r}}}(u))$
. Inserting (4.10), (4.11), and (4.12) into (4.3), we obtain, for
$\lambda>1$
, as
$u\rightarrow\infty$
,

4.2.3. Step 3
In this step of the proof, we sum up the asymptotics derived in step 2. Set

Letting
$\lambda\rightarrow \infty$
in (4.9) and (4.13), it follows that

We sum
$\mathbf{P}_u(E_{1,{\boldsymbol{r}}}(u))$
(and
$\mathbf{P}_u(E_{2,{\boldsymbol{r}}}(u))$
) with respect to
${\boldsymbol{r}}$
to obtain a lower bound for
$\mathbf{P}_u(E_1(u))$
(and an upper bound for
$\mathbf{P}_u(E_2(u))$
). Observe that

By applying (4.14) and demonstrating that the double-sum term in (4.15) is asymptotically negligible, we obtain

and

as
$v\rightarrow 0$
. The detailed derivation of (4.16) and (4.17) is delegated to Appendix A.3.
The proof is completed by combining (4.16) and (4.17) with (4.1) and (4.2).
Appendix A. Complementary derivations for the proof of Theorem 2.1
In this section we provide detailed derivations of (4.7), (4.8), (4.16), and (4.17), and we prove the positivity of
$\mathcal{H}_W^{g_{2}(\bar{\boldsymbol{t}}_2)}$
.
A.1. Proof of (4.7)
We begin with aligning the notation used in Lemma 4.1 with that used in Theorem 2.1. Let
$X_{u,\boldsymbol{l}}$
be as in (4.5), and let

We note that
$\lim_{u\rightarrow\infty}\inf_{\boldsymbol{l}\in \mathcal{L}_2(u)}u_{\boldsymbol{l}_1}^{-\epsilon}=\infty$
, which combined with continuity of
$g_2$
implies that

Therefore, (C1) holds with
$g(\bar{\boldsymbol{t}})=(1-\epsilon)p_{2,{\boldsymbol{r}}}^-g_{2,{\boldsymbol{r}}}^-(\bar{\boldsymbol{t}}_2)$
. By (2.2) and (2.3) in assumption (A1), using the homogeneity of the increments of W for fixed
$\bar{\boldsymbol{t}}_2$
and
$ \bar{\boldsymbol{t}}_3$
, we have

Hence, (C2) is satisfied with the limiting stochastic process W defined in (A1). Assumption (C3) follows directly from (2.4) in assumption (A1). Therefore, we conclude that

where

Therefore, we have, as
$u\rightarrow\infty$
,

Note that

and by the dominated convergence theorem, it follows that

Hence, letting
$\epsilon\rightarrow 0$
in (A.2), we have

where

A.2. Proof of (4.8)
For sufficiently large u,

where


Let
$Z_u(\boldsymbol{t})$
be a homogeneous Gaussian random field with variance 1 and the correlation function satisfying

According to (2.4) in assumption (A1) and applying Slepian's inequality (see [Reference Adler and Taylor2, Theorem 2.2.1]), we find that, for sufficiently large u,

Similarly as in the proof of (A.1), we have

where

Hence, using the above asymptotics and (2.7) in assumption (A2),

Moreover, a direct calculation shows that

Given the assumption (2.7) and the fact that
$\boldsymbol{\alpha}_2 = \boldsymbol{\beta}_2$
, we find that, for
$\lambda > 1$
,

where
$\beta^*=\min_{i=k_1+1}^{k_2}(\beta_i).$
In addition,

and, for
$\lambda>1$
,

Thus, for
$\lambda>1$
,

A.3. Proof of (4.16) and (4.17)
Note that
$g_{2,{\boldsymbol{r}}}^+(\bar{\boldsymbol{t}}_2)\in \mathcal{G}, {\boldsymbol{r}}\in V^+$
and
$p_2(\tilde{\boldsymbol{z}})g_2(\boldsymbol{a}_2^{-1}(\tilde{\boldsymbol{z}})\bar{\boldsymbol{t}}_2)\in \mathcal{G},\tilde{\boldsymbol{z}}\in \mathcal{M}$
with fixed c and
$\boldsymbol{\beta}_{2}$
. Thus, (A.11) implies that, for any
$\epsilon>0$
, there exists
$\lambda_0>0$
such that, for any
$\lambda>\lambda_0>0$
and
${\boldsymbol{r}}\in V^+$
and
$\tilde{\boldsymbol{z}}\in\mathcal{M}$
,

Hence, it follows that, as
$u\rightarrow\infty$
and
$\lambda>\lambda_0$
,

Note that, for any fixed
$\tilde{\boldsymbol{z}}\in\mathcal{M}^o$
, where
$\mathcal{M}^o\subset \mathcal{M}$
is the interior of
$\mathcal{M}$
,

Moreover, it is clear that there exists
$\mathbb{Q}<\infty$
such that, for any
$\lambda>1$
and
$v>0$
,

Consequently, the dominated convergence theorem gives

Next, we focus on the double-sum term in (4.15). For
${\boldsymbol{r}}\in V^-, {\boldsymbol{r}}'\in V^-, M_{{\boldsymbol{r}}}\cap M_{{\boldsymbol{r}}'}=\emptyset$
, we have

By (2.5) in assumption (A1), there exists
$0<\delta<1$
such that, for all
${\boldsymbol{r}}\in V^-, {\boldsymbol{r}}'\in V^-, M_{{\boldsymbol{r}}}\cap M_{{\boldsymbol{r}}'}=\emptyset$
,

According to the Borell-TIS inequality (see, for example, [Reference Adler and Taylor2, Theorem 2.1.1]), for
$u>a$
, we have

where
$a={\mathbb{E}(\!\sup\nolimits_{\boldsymbol{s}\in \mathcal{A}, \boldsymbol{t}\in \mathcal{A}} X(\boldsymbol{s})+X(\boldsymbol{t}))}/{2}=\mathbb{E}(\sup\nolimits_{\boldsymbol{t}\in \mathcal{A}}X(\boldsymbol{t}))$
. Consequently,

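For reference, the Borell-TIS inequality used here reads as follows in generic form: if X is a centered Gaussian field with a.s. bounded sample paths on $\mathcal{A}$, $\sigma_{\mathcal{A}}^2\,:\!=\,\sup_{\boldsymbol{t}\in \mathcal{A}}{\mathrm{Var}}(X(\boldsymbol{t}))$ and $a\,:\!=\,\mathbb{E}(\sup_{\boldsymbol{t}\in \mathcal{A}}X(\boldsymbol{t}))<\infty$, then, for all $u>a$,
$$\mathbb{P}\Big\{\sup_{\boldsymbol{t}\in \mathcal{A}}X(\boldsymbol{t})>u\Big\}\ \leq\ \exp\bigg({-}\frac{(u-a)^2}{2\sigma_{\mathcal{A}}^2}\bigg).$$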
For
${\boldsymbol{r}}, {\boldsymbol{r}}'\in V^-, {\boldsymbol{r}}\neq {\boldsymbol{r}}', M_{{\boldsymbol{r}}}\cap M_{{\boldsymbol{r}}'}\neq\emptyset$
,

In light of (A.7) and (A.8), we have

Therefore, we have

implying that

Similarly, we can obtain, as
$v\rightarrow 0$
,

A.4. Existence of
$\mathcal{H}_W^{g_{2}(\bar{\boldsymbol{t}}_2)}$
We follow an idea similar to that used in the proof of Lemmas 7.1 and 8.3 in [Reference Piterbarg31]. Thus, we present only the main steps of the argument. We assume that

Dividing (4.9) and (4.13) by
$v^{k_0}\Theta^-(u)$
and letting
$u\rightarrow\infty$
, we derive that

The positivity of the above limit follows from the same arguments as in [Reference Piterbarg31]. Therefore,

Moreover, using (4.9) and (4.13), we have, for
$\lambda>1$
,

Let
$\mathcal{G}\,:\!=\,\big\{g_2\colon g_2 \text{ is continuous},\ u\,g_2(\bar{\boldsymbol{t}}_2)=g_2\big(u^{1/\boldsymbol{\beta}_{2}}\bar{\boldsymbol{t}}_{2}\big)\ \text{for all } u>0,\ \inf_{\sum_{i=k_{1}+1}^{k_2}|t_i|^{\beta_i}=1}g_2(\bar{\boldsymbol{t}}_2)>c>0\big\},$
where c and
$\boldsymbol{\beta}_{2}$
are fixed. For any
$g_2\in\mathcal{G}$
, (4.7) and (4.8)–(4.13) are still valid. Hence, (A.10) also holds. This implies that, for any
$\lambda>1$
,

Appendix B. Proof of Proposition 3.1
For
$Z^{\alpha}(\boldsymbol{t})$
introduced in (3.1), we write
$\sigma_Z^2$
for the variance of
$Z^{\alpha}$
and
$r_Z$
for its correlation function. Moreover, let
$\sigma_*= \max_{\boldsymbol{t}\in \mathcal{S}_{n}} \sigma_Z(\boldsymbol{t})$
and recall that
$\mathcal{S}_{n}=\{0=t_0\leq t_1\leq\cdots\leq t_n\leq t_{n+1}=1\}$
. The expansions of
$\sigma_Z$
and
$r_Z$
are displayed in the following lemma, which is crucial for the proof of Proposition 3.1. We omit its proof, as it requires only standard but tedious calculations.
Lemma B.1. (i) For
$\alpha \in (0,1)$
, the standard deviation
$\sigma_Z$
attains its maximum on
$\mathcal{S}_{n}$
at only one point
$\boldsymbol{z}_0= (z_1,\ldots, z_{n})\in \mathcal{S}_{n}$
with
$z_i={\sum_{j=1}^ia_j^{{2}/{(1-\alpha)}}}/{\sum_{j=1}^{n+1}a_j^{{2}/{(1-\alpha)}}},\ i=1,\ldots, n,$
and its maximum value is
$\sigma_*=(\sum_{i=1}^{n+1}a_i^{{2}/{(1-\alpha)}})^{{(1-\alpha)}/{2}}.$
Moreover,

with
$z_0\,:\!=\,0, z_{n+1}\,:\!=\,1$
, and

(ii) For
$\alpha=1$
and
$\mathfrak{m}$
defined in (3.3), if
$\mathfrak{m}=n+1$
,
$\sigma_Z(\boldsymbol{t})\equiv1,\ \boldsymbol{t} \in \mathcal{S}_{n}$
, and if
$\mathfrak{m} < n+1$
, the function
$\sigma_Z$
attains its maximum equal to 1 on
$\mathcal{S}_{n}$
at
$\mathcal{M}=\{\boldsymbol{t}\in\mathcal{S}_{n}\colon \sum_{j\in \mathcal{N}}\lvert t_j-t_{j-1} \rvert=1\}$
and satisfies

In addition, for
$1\leq \mathfrak{m}\leq n+1$
, we have

(iii) For
$\alpha\in(1,2)$
, function
$\sigma_Z $
attains it maximum on
$\mathcal{S}_{n}$
at
$\mathfrak{m}$
points
$\boldsymbol{z}^{(j)},\ j\in\mathcal{N}=\{i\colon a_i=1,\ i=1,\ldots, n+1\}$
, where
$\boldsymbol{z}^{(j)}=(0,\ldots, 0, 1, 1,\ldots, 1)$
(the first 1 stands at the jth coordinate) if
$j\in\mathcal{N}$
and
$j<n+1$
, and
$\boldsymbol{z}^{(n+1)}=(0,\ldots, 0)$
if
$n+1\in\mathcal{N}$
. We further have
$\sigma_*=1$
and, as
$\boldsymbol{t} \to \boldsymbol{z}^{(j)}$
,

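Part (i) of Lemma B.1 can be checked numerically. Since the $B^{\alpha}_i$ are independent, ${\mathrm{Var}}(Z^{\alpha}(\boldsymbol{t}))=\sum_{i=1}^{n+1}a_i^2(t_i-t_{i-1})^{\alpha}$, and the sketch below (ours; it relies on this variance formula) compares a crude random search over $\mathcal{S}_{n}$ with the closed-form maximizer and maximum.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigma2(t_free, a, alpha):
    # Var(Z^alpha(t)) = sum_i a_i^2 (t_i - t_{i-1})^alpha with t_0 = 0,
    # t_{n+1} = 1, which follows from the independence of the B_i^alpha.
    t = np.concatenate(([0.0], np.sort(t_free), [1.0]))
    return float((a**2 * np.diff(t) ** alpha).sum())

n, alpha = 2, 0.5
a = np.array([1.0, 0.7, 0.4])

# Closed form from Lemma B.1(i).
s = a ** (2.0 / (1.0 - alpha))
z0 = np.cumsum(s / s.sum())[:-1]               # maximizer (z_1, ..., z_n)
sigma_star = s.sum() ** ((1.0 - alpha) / 2.0)  # maximal standard deviation

# Crude random search over the simplex {0 <= t_1 <= t_2 <= 1}.
best = max(sigma2(rng.uniform(0.0, 1.0, n), a, alpha) for _ in range(200_000))
print(np.sqrt(best), sigma_star)  # the two values should nearly coincide
print(z0)                         # compare with an argmax search if desired
```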
Case 1:
$\alpha\in(0,1)$
. From Lemma B.1(i), it follows that
$\sigma_Z$
on
$\mathcal{S}_{n}$
attains its maximum
$\sigma_*$
at the unique point
$\boldsymbol{z}_0= (z_1,\ldots, z_{n})$
with

Moreover, from (B.1) we have, for
$\boldsymbol{t}\in\mathcal{S}_{n}$
,

as
$\left\lvert \boldsymbol{t}-\boldsymbol{z}_0 \right\rvert\rightarrow 0$
and from (B.2), for
$\boldsymbol{t}, \boldsymbol{s}\in\mathcal{S}_{n}$
,

as
$\left\lvert \boldsymbol{s}-\boldsymbol{z}_0 \right\rvert,\left\lvert \boldsymbol{t}-\boldsymbol{z}_0 \right\rvert \rightarrow 0$
. Furthermore, we have

Therefore, by [Reference Piterbarg31, Theorem 8.2] we obtain, as
$u\rightarrow\infty$
,

where

A direct calculation demonstrates that

This completes the proof of this case.
Case 2:
$\alpha=1$
. First, we consider the case
$\mathfrak{m} < n+1$
. Let
$k^*=\max\{i\in\mathcal{N}\}$
and denote

To facilitate our analysis, we make the transformation

which implies that
${\bf{x}}=(x_1,\ldots, x_{k^*-1},x_{k^*+1},\ldots, x_{n+1})\in[0,1]^n$
and

with the convention that
$\max\emptyset=0$
. Define
$Y({\bf{x}})=Z(\boldsymbol{t}({\bf{x}}))$
and
$\widetilde{\mathcal{S}}_n=\{{\bf{x}}\colon \boldsymbol{t}({\bf{x}})\in\mathcal{S}_n\}$
, with
$\boldsymbol{t}({\bf{x}})$
given in (B.7). By Lemma B.1(ii) it follows that
$\sigma_{Y}({\bf{x}})$
, the standard deviation of
$Y({\bf{x}})$
, attains its maximum equal to 1 at

Moreover, let
$\widetilde{{\bf{x}}}=(x_i)_{i\in \mathcal{N}_0 }$
,
$\bar{{\bf{x}}}=(x_i)_{i\in\mathcal{N}^c}$
and denote, for any
$\delta\in (0, {1}/{(n+1)^2})$
,

We note that

and

By applying Theorem 2.1, we derive the asymptotics of
$\mathbb{P} \{\sup\nolimits_{{\bf{x}}\in\widetilde{\mathcal{S}}_n(\delta)}Y({\bf{x}})>u \} $
as
$u\to\infty$
. Subsequently, we demonstrate that the other two terms in (B.9) are asymptotically negligible. We begin with finding the asymptotics of
$\mathbb{P} \{\sup\nolimits_{{\bf{x}}\in\widetilde{\mathcal{S}}_n(\delta)}Y({\bf{x}})>u \} $
. First, observe

which is a set satisfying the assumption in Theorem 2.1. Moreover, it follows from (B.3) that

Taking
$\tilde{\boldsymbol{t}}=\widetilde{{\bf{x}}}$
and
$\bar{\boldsymbol{t}}_2=\bar{{\bf{x}}}$
in Theorem 2.1, (B.10) implies that (A2) holds with
$g_2(\bar{{\bf{x}}})=\tfrac{1}{2}\sum_{i\in\mathcal{N}^c}(1-a_i^2)x_i$
and
$p_2(\widetilde{{\bf{x}}})=1$
for
$\widetilde{{\bf{x}}}\in\widetilde{\mathcal{S}}^*_n(\delta)$
. We note that
$\Lambda_1=\Lambda_3=\emptyset$
in this case.
We next check assumption (A1). To compute the correlation structure, we note that, for
${\bf{x}},{\bf{y}}\in \widetilde{\mathcal{S}}_n(\delta)$
and
$\lvert {\bf{x}}-{\bf{y}} \rvert<{\delta}/{(n+1)^2}$
, if
$i\in\mathcal{N}_0$
then

and

while, if
$i=k^*$
then we have

and, for
$ k^*-1\in\mathcal{N}_0$
,

and, for
$ k^*-1\in\mathcal{N}^c$
,

Hence, for
$r_Y({\bf{x}},{\bf{y}})$
, the correlation function of
$Y({\bf{x}})$
, we derive from Lemma B.1(ii) that, for
${\bf{x}},{\bf{y}}\in \widetilde{\mathcal{S}}_n(\delta)$
and
$\left\lvert {\bf{x}}-{\bf{y}} \right\rvert<{\delta}/{(n+1)^2}$
, as
$\delta\rightarrow 0$
,

By (B.7), we have, for any
$i=1,\ldots,n+1$
,

Then, for
${\bf{x}},{\bf{y}}\in \widetilde{\mathcal{S}}_n(\delta)$
and
$|{\bf{x}}-{\bf{y}}|<{\delta}/{(n+1)^2}$
with
$\delta>0$
sufficiently small,

implying that (2.4) holds.
Recall that

where
$B_i, \widetilde{B}_i$
are i.i.d. standard Brownian motions and

Direct calculation gives us that
$\mathbb{E}\{(W({\bf{x}})-W({\bf{y}}))^2\}$
coincides with (B.11) for any
${\bf{x}},{\bf{y}}\in[0,\infty)^n$
. This implies that (2.2) holds with W given in (B.12) and
$\boldsymbol{a}(\tilde{x})\equiv 1$
for
$\tilde{x}\in\widetilde{\mathcal{M}}(\delta)$
.
Using (B.11) and the fact that, for any
$i=1,\ldots,n$
,
$s_i({\bf{x}})-s_i({\bf{y}})$
is the absolute value of the combination of
$x_j-y_j, \ j\in\{1,\ldots,k^*-1,k^*+1,\ldots, n+1\}$
, we derive that, for a fixed
$\bar{{\bf{x}}}$
, the increments of
$W({\bf{x}})=W(\widetilde{{\bf{x}}},\bar{{\bf{x}}})$
are homogeneous with respect to
$\widetilde{{\bf{x}}}$
. In addition, it is easy to check that (2.5) also holds. Hence, (A1) is satisfied.
Consequently, by Theorem 2.1, as
$u\rightarrow\infty$
, we have

where

We now proceed to the negligibility of the other two terms in (B.9). In light of the Borell-TIS inequality, we have, as
$u\to\infty$
,

where
$\varepsilon=1-\sup\nolimits_{{\bf{x}}\in \widetilde{\mathcal{S}}_n\setminus\widetilde{\mathcal{S}}^*_n(\delta)}\sigma_Y({\bf{x}}).$
By Slepian’s inequality and Theorem 2.1, we have

A combination of the fact that

with (B.8), (B.9), and (B.13)–(B.15) leads to

Case
$\mathfrak{m}=n+1$
: for some small
$\varepsilon\in(0,1)$
, define
$E(\varepsilon)=\{\boldsymbol{t}\in \mathcal{S}_{n}\colon t_i-t_{i-1}\geq \varepsilon,\ i=1,\ldots, n+1\}$
. Thus, we have

Let us first derive the asymptotics of Z over
$E(\varepsilon)$
. For
$\boldsymbol{s}, \boldsymbol{t}\in E(\varepsilon)$
, by (B.4) we have

Moreover, it follows straightforwardly that
${\mathrm{Var}}(Z(\boldsymbol{t}))=1$
for
$\boldsymbol{t}\in E(\varepsilon)$
and
$\ \mathrm{corr}(Z(\boldsymbol{t}), Z(\boldsymbol{s}))<1$
for any
$\boldsymbol{s}\neq \boldsymbol{t}$
and
$\boldsymbol{s},\boldsymbol{t}\in E(\varepsilon)$
. Hence, by [Reference Piterbarg31, Lemma 7.1] we have

Moreover, by Slepian’s inequality and [Reference Piterbarg31, Lemma 7.1], as
$u\rightarrow\infty, \varepsilon\rightarrow 0$
,

Inserting (B.17) and (B.18) into (B.16), we obtain

The claim is established by Remark 3.1(ii).
Case 3:
$\alpha\in(1,2)$
. For
$\boldsymbol{s},\boldsymbol{t}\in\mathcal{S}_{n}$
, one can easily check that

if
$\boldsymbol{s}\neq\boldsymbol{t}$
. In light of Lemma B.1(iii),
$\sigma_Z$
attains its maximum at
$\mathfrak{m}$
distinct points
$\boldsymbol{z}^{(j)},j\in\mathcal{N}$
. Consequently, by [Reference Piterbarg31, Corollary 8.2], we have

where
$\Pi_{\delta, j}=\{\boldsymbol{t}\in\mathcal{S}_{n}\colon \lvert \boldsymbol{t}-\boldsymbol{z}^{(j)} \rvert\leq \tfrac{1}{3}\}.$
Define
$E_j(u)\,:\!=\,\{\boldsymbol{t}\in\Pi_{\delta, j}\colon 1-({\ln u}/{u})^2\leq t_j-t_{j-1}\leq 1\}\ni \boldsymbol{z}^{(j)}$
. Observe that

We first find the exact asymptotics of
$\mathbb{P} \{\sup\nolimits_{\boldsymbol{t}\in E_j(u)}Z^\alpha(\boldsymbol{t})>u \} $
as
$u\to\infty$
. Clearly, for any
$u\in\mathbb{R}$
,

Moreover, for
$\boldsymbol{s},\boldsymbol{t}\in \mathcal{S}_{n} $
, there exists a constant
$c>0$
such that
$\inf_{\boldsymbol{t}\in\mathcal{S}_{n}}\sigma_Z(\boldsymbol{t})\geq {1}/{\sqrt{2c}}$
. Hence, in light of (B.6) we have

Let
$U_2(\boldsymbol{t}), \boldsymbol{t}\in \mathbb{R}^n$
be a centered homogeneous Gaussian field with continuous trajectories, unit variance, and the correlation function
$r_{U_2}(\boldsymbol{s},\boldsymbol{t})$
satisfying

Set
$\widetilde{E}_j(u)=[0,\varepsilon_1 u^{-2/\alpha}]^{j-1}\times[1-\varepsilon_1 u^{-2/\alpha},1]^{n-j+1}$
for some constant
$\varepsilon_1\in(0,1)$
. Then it follows that
$E_j(u)\subset \widetilde{E}_j(u)$
for sufficiently large u. By Slepian’s inequality and [Reference Piterbarg31, Lemma 6.1],

as
$u\rightarrow\infty, \varepsilon_1\rightarrow 0$
, where

Consequently,

Note that, for
$\boldsymbol{t}\in\mathcal{S}_{n}$
,

Hence, by (B.5), for sufficiently large u,

where
$\varepsilon\in(0,1)$
is a constant. In light of (B.19) and (B.21), by [Reference Piterbarg31, Theorem 8.1] we have, for sufficiently large u,

which combined with (B.20) leads to

Consequently, with
$\mathfrak{m}=\#\mathcal{N}$
given in (3.3), we obtain

This completes the proof.
Appendix C. Proof of Remark 3.1
(i) For the
$1\leq \mathfrak{m}\leq n$
case, we first show that
$\mathcal{H}_W\geq 1$
. Recall that
$\mathcal{N}_0=\{i\in\mathcal{N}, i < k^*\}$
,
$\mathcal{N}^c=\{i\colon a_i<1,\ i=1,\ldots, n+1\}$
and
$\widetilde{{\bf{x}}}=(x_i)_{i\in \mathcal{N}_0 }$
.
For
$x_i=0, i\in \mathcal{N}^c$
, by the definition of W in (3.4), we have

Hence,

where
$\mathcal{H}_{B_i}$
is defined in (2.11). Note that
$\mathcal{H}_{B_i}=1$
; see e.g. [Reference Piterbarg31] (or [Reference Long, Dȩbicki, Hashorva and Luo4]). Therefore,
$\mathcal{H}_W\geq 1.$
We next derive the upper bound of
$\mathcal{H}_W$
for
$1\leq \mathfrak{m}\leq n$
. We use the notation introduced in Case 2 of the proof of Proposition 3.1 (specifically, Y and
$\widetilde{\mathcal{S}}_n(\delta)$
). For
$\delta\in (0, {1}/{(n+1)^2})$
, let

where
$B(\delta)=\prod_{i=1}^{\mathfrak{m}-1}[2i\delta, (2i+1)\delta]$
. Clearly,
$A(\delta)\subset\widetilde{\mathcal{S}}_n(\delta)$
. Moreover, by (B.11) it follows that, for any
$\epsilon>0$
, there exists
$\delta \in (0, {1}/{(n+1)^2})$
such that, for any
${\bf{x}},{\bf{y}}\in A(\delta)$
,

Let us introduce a centered homogeneous Gaussian field
$U_4({\bf{x}})$
,
${\bf{x}}\in[0,\infty)^{n}$
with continuous trajectories, unit variance, and the correlation function

where
$B_i,\ i=1,\dots, k^*-1, k^*+1,\dots, n+1$
are i.i.d. standard Brownian motions. By (B.10) and Slepian’s inequality, we have, for
$0<\epsilon<1$
,

Analogously to (B.13), we have

and

where

with
$\mathcal{P}_{B_i}^c$
for
$c>0$
being defined in (2.11). Using the fact that
$\mathcal{H}_{B_i}=1$
and, for
$c>0$
,
$\mathcal{P}_{B_i}^c=1+{1}/{c}$
(see, e.g., [Reference Long, Dȩbicki, Hashorva and Luo4]), we have

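The closed form $\mathcal{P}_{B_i}^{c}=1+{1}/{c}$ can also be recovered directly (a standard computation, assuming the definition in (2.11)): since $\sqrt{2}B(t)$ has variance 2t, the random variable $M\,:\!=\,\sup_{t\geq 0}(\sqrt{2}B(t)-(1+c)t)$ is exponentially distributed with rate $2(1+c)/2=1+c$, so that
$$\mathcal{P}_{B_i}^{c}=\mathbb{E}\big\{{\mathrm{e}}^{M}\big\}=\int_{0}^{\infty}{\mathrm{e}}^{x}\,(1+c)\,{\mathrm{e}}^{-(1+c)x}\,{\mathrm{d}} x=\frac{1+c}{c}=1+\frac{1}{c}.$$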
Hence,

We establish the claim by letting
$\epsilon\to 0$
.
(ii) If
$\mathfrak{m}=n+1$
, we have
$\mathcal{N}_0=\{1,\dots, n\}$
and

This completes the proof.
Appendix D. Proof of Proposition 3.2
Let us recall that by (3.12)

with
$Z(\boldsymbol{\theta},t)$
defined in (3.11).
Observe that, for
$0<\epsilon<\pi/4$
,

where

In the rest of the proof, we apply Theorem 2.1 to obtain the asymptotics over
$E_{1,\epsilon}$
. Then, using the Borell-TIS inequality and Slepian’s inequality respectively, we find tight upper bounds of the exceedance probabilities over
$E_{2,\epsilon}$
and
$E_{3,\epsilon}$
. Finally, we combine all the obtained results to show the asymptotics over the whole set.
The asymptotics over
$E_{1,\epsilon}$
. To this end, we analyze the variance and correlation of Z. By (3.7), we have

Hence,
$\sigma_Z(\boldsymbol{\theta},t)$
attains its maximum equal to 1 at
$[0,\pi]^{n-2}\times[0,2\pi)\times\{0\}$
and

This implies that assumption (A2) is satisfied. For assumption (A1), by (3.8), we have

as
$(\boldsymbol{\theta},t), (\boldsymbol{\theta}',t')\in E$
and
$|t-t'|, |\boldsymbol{\theta}-\boldsymbol{\theta}'|\to 0$
. Let

where
$B_i^2$
are independent fractional Brownian motions with index 2 and Y is a self-similar Gaussian process, as defined in (3.8), that is independent of
$B_i^2$
. Moreover, let
$\boldsymbol{a}(\boldsymbol{\varphi})=(a_1(\boldsymbol{\varphi}),\dots, a_{n-1}(\boldsymbol{\varphi})),\boldsymbol{\varphi}\in[0,\pi]^{n-2}\times [0,2\pi)$
with

It follows that, for
$0<\epsilon<\pi/4$
,

By the fact that

we know that
$W(\boldsymbol{\theta},t)$
is homogeneous with respect to
$\boldsymbol{\theta}$
if t is fixed. This implies that (2.2) holds with W defined in (D.3).
Moreover, by self-similarity of Y and (D.4) we have

showing that (2.3) holds with
$\alpha_i=2, i=1,\dots, n-1$
, and
$\alpha_n=\alpha$
. In addition, by (B1) and (B2), there exists
$d>0$
such that, for
$\lvert(\boldsymbol{\theta},t)-(\boldsymbol{\theta}',t')\rvert<d$
with
$(\boldsymbol{\theta},t), (\boldsymbol{\theta}',t')\in E_{1,\epsilon}$
,

Hence, (2.4) is confirmed. Moreover, (2.5) is clearly satisfied over
$E_{1,\epsilon}$
. Therefore, (A1) is verified for Z over
$E_{1,\epsilon}$
. Note that, for Z over
$E_{1,\epsilon}$
, we are in the case of
$\Lambda_0=\{1,\dots, n-1\}$
,
$\Lambda_1=\emptyset$
,
$\Lambda_2=\{n\}$
, and
$\Lambda_3=\emptyset$
of Theorem 2.1. Consequently, it follows from Theorem 2.1 that, as
$u\to\infty$
,

where W is given in (D.3).
Upper bound for the asymptotics over
$E_{2,\epsilon}$
. By (D.2), there exists
$0<\delta<1$
such that

It follows from the Borell-TIS inequality that, as
$u\to\infty$
,

Upper bound for the asymptotics over
$E_{3,\epsilon}$
. Direct calculation shows that

holds for
$(\boldsymbol{\theta},t), (\boldsymbol{\theta}',t')\in E_{3,\epsilon}$
. Define
$U_3(\boldsymbol{\theta},t), (\boldsymbol{\theta},t)\in \mathbb{R}^n$
to be a centered homogeneous Gaussian field with continuous trajectories, unit variance, and the correlation function
$r_{U_3}(\boldsymbol{\theta},t, \boldsymbol{\theta}',t')$
satisfying

By Slepian’s inequality and Theorem 8.2 in [Reference Piterbarg31], we have

Noting that
$\lim_{\epsilon\to 0}v_n(E_{3,\epsilon})=0$
, the combination of the above asymptotics and upper bounds leads to

By the fact that

and
$\mathcal{H}_W^{bt^{\alpha}}=\mathcal{P}_{\sqrt{a}Y}^{b}(\mathcal{H}_{B^2})^{n-1}=\mathcal{P}_{Y}^{a^{-1}b}\pi^{-(n-1)/2},$
where we used the fact that
$\mathcal{H}_{B^2}=\pi^{-1/2},$
we have

Acknowledgements
We sincerely appreciate the anonymous reviewer’s comments, which significantly improved the presentation of the results in this contribution. We also thank Enkelejd Hashorva for his stimulating comments that enhanced the content of this paper. We thank Lanpeng Ji for some useful discussions.
Funding Information
Support from SNSF Grant 200021-175752/1 is kindly acknowledged. Krzysztof Dȩbicki was partially supported by NCN Grant No. 2018/31/B/ST1/00370 (2019-2024). Long Bai is supported by the National Natural Science Foundation of China Grant No. 11901469, Natural Science Foundation of the Jiangsu Higher Education Institutions of China Grant No. 19KJB110022, and University Research Development Fund No. RDF2102071. Peng Liu is the co-corresponding author.
Competing Interests
No competing interests arose during the preparation or publication of this article.