
THE LOCAL PROJECTION RESIDUAL BOOTSTRAP FOR AR(1) MODELS

Published online by Cambridge University Press:  19 November 2025

Amilcar Velez*
Affiliation:
Cornell University
*
Address correspondence to Amilcar Velez, Department of Economics, Cornell University, USA, e-mail: amilcare@cornell.edu

Abstract

This article proposes a local projection (LP) residual bootstrap method to construct confidence intervals for impulse response coefficients of AR(1) models. Our bootstrap method is based on the LP approach and involves a residual bootstrap procedure applied to AR(1) models. We present theoretical results for our bootstrap method and proposed confidence intervals. First, we prove the uniform consistency of the LP-residual bootstrap over a large class of AR(1) models that allow for a unit root, conditional heteroskedasticity of unknown form, and martingale difference shocks. Then, we prove the asymptotic validity of our confidence intervals over the same class of AR(1) models. Finally, we show that the LP-residual bootstrap provides asymptotic refinements for confidence intervals on a restricted class of AR(1) models relative to those required for the uniform consistency of our bootstrap.

Information

Type
ARTICLES
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

1 Introduction

This article contributes to a growing literature on confidence interval construction for impulse response coefficients based on the local projection (LP) approach (Jorda, 2005). In this literature, the LP approach estimates an impulse response coefficient as one of the slope coefficients in a linear regression of a future outcome on current or lag-augmented covariates (Ramey, 2016; Nakamura and Steinsson, 2018; Montiel Olea and Plagborg-Møller, 2021). Recent theoretical results establish the asymptotic validity of the confidence intervals constructed around the LP estimator, which holds over a large class of vector autoregressive (VAR) models (Xu, 2023). Since these confidence intervals have small-sample coverage distortions (e.g., coverage probability is lower than expected), their bootstrap versions are recommended for practical use. However, theoretical results for these bootstrap versions are unknown, even for the AR(1) model. This article proposes a different bootstrap method to construct LP confidence intervals with theoretical guarantees for a class of AR(1) models that allow for a unit root, conditional heteroskedasticity of unknown form, and martingale difference shocks.

We propose an LP-residual bootstrap method to construct confidence intervals for impulse response coefficients of AR(1) models. Our bootstrap method is based on the LP approach and involves a residual bootstrap procedure applied specifically to AR(1) models. Our bootstrap confidence intervals are centered at the LP estimator and use heteroskedasticity-consistent (HC) standard errors and a bootstrap critical value. Section 3 presents the details.

We rely on the asymptotic distribution theory initially developed in Montiel Olea and Plagborg-Møller (2021) and generalized in Xu (2023). In their framework, a root $R_n(h)$ based on the LP approach can be defined for a given horizon h and a sample size n. Here, by a root, we refer to a real-valued function depending on the data and an impulse response coefficient. Their results guarantee that the root $R_n(h)$ is asymptotically distributed as a standard normal distribution for a class of VAR models that allow for multiple unit roots and conditional heteroskedasticity of unknown form, even at intermediate horizons, i.e., horizons h that are allowed to grow with n, e.g., $h = h_n \propto n^{\zeta }$ , $\zeta \in [0,1)$ . As a result, the root $R_n(h)$ can be used to construct a confidence interval $C_n(h, 1-\alpha )$ for an impulse response coefficient using a normal critical value (quantile of the asymptotic distribution). Furthermore, $C_n(h, 1-\alpha )$ has asymptotic coverage equal to the nominal level $1-\alpha $ uniformly over the parameter space (VAR model coefficients) and a wide range of intermediate horizons (e.g., uniform over $h \le h_n$ , where $h_n$ is any fixed sequence such that $h_n = o(n)$ ). Nevertheless, Monte Carlo simulations report that $C_n(h, 1-\alpha )$ has a lower coverage probability than expected.

We propose the LP-residual bootstrap method to approximate the distribution of the root $R_n(h)$ as an alternative to the asymptotic distribution. We use our approximation to calculate bootstrap-based critical values (see Section 3.1 for the step-by-step procedure). Specifically, we construct a confidence interval ${C_n^*(h,1-\alpha )}$ for an impulse response coefficient using the root $R_n(h)$ and a bootstrap critical value (see Section 3 for details).

Our first result proves the uniform consistency of the LP-residual bootstrap. More concretely, we demonstrate in Section 4 that the distribution of the root $R_n(h)$ can be approximated by its bootstrap version uniformly over the parameter space (e.g., $\rho \in [-1,1]$ ) and a wide range of intermediate horizons (e.g., uniform over $h \le h_n$ , where $h_n$ is any fixed sequence such that $h_n = o(n)$ ). Our result applies to a large class of AR(1) models that allow for a unit root, conditional heteroskedasticity of unknown form as in Gonçalves and Kilian (2004), which includes ARCH and GARCH shocks, and a sequence of shocks that satisfies the martingale difference assumption. To obtain this result, we prove the root $R_n(h)$ is asymptotically distributed as a standard normal distribution for sequences of AR(1) models with i.i.d. shocks (Theorem B.1). In particular, we prove that a high-level assumption (Assumption 3 in Montiel Olea and Plagborg-Møller (2022) and Assumption 4 in Xu (2023)) necessary for the theoretical properties of $C_n(h,1-\alpha )$ can be verified for sequences of AR(1) models with i.i.d. shocks (Proposition B.1).

Our first result implies that the LP-residual bootstrap method provides asymptotically valid confidence intervals over a large class of AR(1) models that allow for a unit root, conditional heteroskedasticity of unknown form (e.g., GARCH shocks), and martingale difference shocks. Moreover, our confidence interval $C_n^*(h,1-\alpha )$ has an asymptotic coverage equal to the nominal level $1-\alpha $ uniformly over $\rho \in [-1,1]$ and a wide range of intermediate horizons.

Our second set of results shows that the LP-residual bootstrap provides asymptotic refinements to the confidence intervals on a more restricted class of AR(1) models (e.g., $|\rho |\le 1-a$ , where $a \in (0,1)$ , and i.i.d. shocks with positive continuous density), that is, the size of the error in coverage probability (ECP) of $C_n^*(h,1-\alpha )$ is $o(n^{-1})$ , whereas the size of the ECP of $C_n(h,1-\alpha )$ is $O(n^{-1})$ . More concretely, Theorem 5.2 shows the ECP of $C_n^*(h,1-\alpha )$ is $o(n^{-(1+\epsilon )})$ for some $\epsilon \in (0,1/2)$ . To obtain these results, we derive Edgeworth expansions for the distribution of the root $R_n(h)$ and its bootstrap version for a fixed h and $|\rho | \le 1-a$ , where $a \in (0,1)$ ; that is, the Edgeworth expansions are obtained for stationary AR(1) models and fixed horizons. An informal discussion to calculate the size of the ECP using Edgeworth expansions appears in Section 5.1, while the formal results are established in Section 5.2.

Other bootstrap methods to construct confidence intervals for the impulse response coefficients have been considered and recommended based on simulation studies in the growing literature on LP inference. Montiel Olea and Plagborg-Møller (2021) use a wild bootstrap procedure to generate new samples and compute critical values, but the theoretical results for their bootstrap method are unknown. Kilian and Kim (2011) present a simulation study including a block-bootstrap method to construct confidence intervals based on the LP approach, but the theory of their block-bootstrap method is unknown (see Remarks 5.1 and 5.2 for alternative block-bootstrap procedures with theoretical guarantees). Recently, Lusompa (2023) proposed a block wild bootstrap method for confidence interval construction that is point-wise valid for a class of stationary data-generating processes; however, his bootstrap method is not applicable to an AR(1) model with a unit root. In contrast, we present a bootstrap method based on the LP approach with theoretical guarantees for a class of AR(1) models that allow for a unit root, conditional heteroskedasticity of unknown form, and martingale difference shocks.

More broadly, we contribute to the literature on confidence interval construction for impulse response coefficients. For short horizons (fixed h), the problem of confidence interval construction has been studied by Andrews (1993), Hansen (1999), Inoue and Kilian (2002), Jorda (2005), and Mikusheva (2007, 2015), among others. For long horizons ( $h = h_n \propto (1-b)n$ , $b \in (0,1)$ ), the problem of confidence interval construction was discussed and revisited by Phillips (1998), Gospodinov (2004), Pesavento and Rossi (2006), and Mikusheva (2012), since the standard methods for short horizons may produce invalid confidence intervals when the data-generating process allows for unit roots. Recently, the problem of confidence interval construction for intermediate horizons ( $h_n = o\left (n\right )$ ), a case not covered in the earlier literature, was addressed in Montiel Olea and Plagborg-Møller (2021) and Xu (2023). In this article, we propose bootstrap confidence intervals that are asymptotically valid at short and intermediate horizons.

We also contribute to the literature on uniform inference in autoregressive models, where the confidence intervals for impulse response coefficients are uniformly valid, that is, they have an asymptotic coverage equal to the nominal level uniformly over the parameter space (e.g., uniformly over $ \rho \in [-1,1]$ for the AR(1) model). Mikusheva (2007, 2012) shows that the grid bootstrap proposed by Hansen (1999) provides confidence sets that are uniformly valid for the impulse responses when the sequence of shocks is a martingale difference sequence with constant conditional variance. However, it is unknown if the grid bootstrap is uniformly valid for AR(1) models with GARCH shocks; we report simulations for the grid bootstrap in Section 6 and Appendix E of the Supplementary Material. Inoue and Kilian (2020) show that confidence intervals based on a lag-augmented autoregressive method are uniformly valid for impulse response coefficients when the sequence of shocks is i.i.d. It is unknown if their results hold for martingale difference shocks. Montiel Olea and Plagborg-Møller (2021) and Xu (2023) show that confidence intervals based on (lag-augmented) LPs are uniformly valid for impulse response coefficients; nevertheless, Monte Carlo simulations report lower coverage probability than expected. In contrast, our bootstrap method produces confidence intervals that are uniformly valid for a larger class of martingale difference shocks with conditional heteroskedasticity of unknown form (allowing for GARCH shocks).

The remainder of the article is organized as follows. In Section 2, we describe the setup and previous results. In Section 3, we introduce our bootstrap confidence interval and the LP-residual bootstrap. In Sections 4 and 5, we study the theoretical properties of the LP-residual bootstrap: uniform consistency and asymptotic refinements. In Section 6, we investigate the numerical performance of the LP-residual bootstrap using a small simulation study. In Section 7, we describe how to implement the LP-residual bootstrap for VAR models. Finally, in Section 8, we present concluding remarks. All the proofs are presented in Appendixes A and B, and Appendixes C and D of the Supplementary Material. Additional simulation results appear in Appendix E of the Supplementary Material.

2 SETUP AND PREVIOUS RESULTS ON LOCAL PROJECTION

Consider an AR(1) model,

(1) $$ \begin{align} y_t = \rho y_{t-1} + u_t, \quad y_0 = 0, \quad \rho \in [-1,1]. \end{align} $$

Denote the impulse response coefficient at horizon $h \in \mathbf {N}$ by

(2) $$ \begin{align} \beta(\rho,h) \equiv \rho^h. \end{align} $$

An estimator for $\beta (\rho ,h)$ based on the LP approach is obtained as the slope coefficient of $y_t$ in the linear regression of $y_{t+h}$ on $y_t$ and $y_{t-1}$ ,

(3) $$ \begin{align} y_{t+h} = \hat{\beta}_n(h) y_t + \hat{\gamma}_n(h) y_{t-1} + \hat{\xi}_t(h), \quad t=1,\ldots,n-h, \end{align} $$

where $(\hat {\beta }_n(h),~\hat {\gamma }_n(h))$ and $\{\hat {\xi }_t(h): 1 \le t \le n-h\}$ are the coefficient vector and residuals of the linear regression (3), respectively. This lag-augmented LP approach was developed in Montiel Olea and Plagborg-Møller (2021), where they give conditions under which the coefficient $\hat {\beta }_n(h)$ consistently estimates $\beta (\rho ,h)$ . Equation (3) is a lag-augmented LP regression since the coefficient on $y_{t-1}$ is known to be zero under (1) (see Remark 2.2 for additional details on this LP approach).

Let $\hat {s}_n(h)$ be the HC standard error of $\hat {\beta }_n(h)$ in the lag-augmented LP regression (3), which can be computed as follows:

(4) $$ \begin{align} \hat{s}_n(h) \equiv \left( \sum_{t=1}^{n-h} \hat{u}_t(h)^2 \right)^{-1/2} \left(\sum_{t=1}^{n-h} \hat{\xi}_t(h)^2 \hat{u}_t(h)^2 \right)^{1/2} \left( \sum_{t=1}^{n-h} \hat{u}_t(h)^2 \right)^{-1/2}, \end{align} $$

where $\hat {u}_t(h) \equiv y_t- \hat {\rho }_n(h) y_{t-1}$ and

(5) $$ \begin{align} \hat{\rho}_n(h) \equiv \left(\sum_{t=1}^{n-h} y_{t-1}^2\right)^{-1} \left( \sum_{t=1}^{n-h} y_t y_{t-1} \right)\!. \end{align} $$
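For illustration, the following is a minimal numpy sketch (ours, not the article's replication code) of the lag-augmented LP regression (3), the AR(1) residuals based on (5), and the HC standard error (4); the function name lp_estimate and the indexing conventions are our own assumptions.

```python
import numpy as np

def lp_estimate(y, h):
    """Lag-augmented LP regression (3) with HC standard error (4).

    y : array (y_0, ..., y_n), so len(y) = n + 1, with y[0] playing y_0 = 0.
    Returns (beta_hat, s_hat) for horizon h.
    """
    n = len(y) - 1
    t = np.arange(1, n - h + 1)                # t = 1, ..., n - h
    X = np.column_stack([y[t], y[t - 1]])      # regressors (y_t, y_{t-1}), no intercept
    Y = y[t + h]                               # outcome y_{t+h}
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    beta_hat = coef[0]                         # slope on y_t, as in (3)
    xi_hat = Y - X @ coef                      # LP residuals xi_hat_t(h)

    # AR(1) residuals via rho_hat_n(h) in (5), over the same t = 1, ..., n - h
    rho_hat_h = np.sum(y[t] * y[t - 1]) / np.sum(y[t - 1] ** 2)
    u_hat = y[t] - rho_hat_h * y[t - 1]

    # HC standard error (4): sqrt(sum xi^2 u^2) / sum u^2
    s_hat = np.sqrt(np.sum(xi_hat ** 2 * u_hat ** 2)) / np.sum(u_hat ** 2)
    return beta_hat, s_hat
```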

For a given $h \in \textbf {N}$ , we consider the following real-valued root for the parameter $\beta (\rho ,h)$ :

(6) $$ \begin{align} R_n(h) \equiv \frac{\hat{\beta}_n(h) - \beta(\rho,h)}{\hat{s}_n(h)}, \end{align} $$

where $\beta (\rho ,h)$ is as in (2), $\hat {\beta }_n(h)$ is computed as in (3), and $\hat {s}_n(h)$ is as in (4). We denote the distribution of the root $R_n(h)$ by

(7) $$ \begin{align} J_n(x,h,P,\rho) \equiv P_{\rho} \left( R_n(h) \le x \right)\!, \end{align} $$

where $x \in \mathbf {R}$ , $h \in \textbf {N}$ , P is the distribution of the shocks $\{u_t: t \ge 1 \}$ , $\rho \in \textbf {R}$ , and $P_{\rho }$ denotes the probability distribution of the sequence $\{y_t: t \ge 1\}$ , which is defined jointly by the distribution P and the parameter $\rho $ in (1).

Let $c_n(h,1-\alpha )$ be the $1-\alpha $ quantile of $|R_n(h)|$ under the distribution $P_{\rho }$ ,

(8) $$ \begin{align} c_n(h,1-\alpha) \equiv \inf \left \{ u \in \textbf{R} : P_{\rho} \left( |R_n(h)| \le u \right) \ge 1-\alpha \right\}\!. \end{align} $$

Ideally, we would use the root $R_n(h)$ and the critical value $c_n(h,1-\alpha )$ to construct confidence sets for $\beta (\rho ,h)$ with a coverage probability of $1-\alpha $ . That is, we would collect all the parameters $\beta (\rho ,h)$ such that $|R_n(h)| \le c_n(h,1-\alpha )$ , which is equivalent to defining the following confidence interval:

$$ \begin{align*} \tilde{C}_n(h,1-\alpha) \equiv \left[ \hat{\beta}_n(h) - c_n(h,1-\alpha)~\hat{s}_n(h),~\hat{\beta}_n(h) + c_n(h,1-\alpha)~ \hat{s}_n(h)\right]\!. \end{align*} $$

However, the critical value $c_n(h,1-\alpha )$ is unknown since the distribution of the root is unknown in general. As a result, the confidence interval $\tilde {C}_n(h,1-\alpha ) $ is infeasible. For this reason, it is common to rely on asymptotic distribution theory or bootstrap methods to approximate the infeasible critical value $c_n(h,1-\alpha )$ .

2.1 Previous Results

The asymptotic distribution theory developed in Montiel Olea and Plagborg-Møller (2021) and Xu (2023) implies that the distribution $J_n(x,h,P,\rho )$ converges to the standard normal distribution $\Phi (x)$ whenever certain assumptions on the distribution of the shocks P hold. Moreover, this convergence is uniform over the values of $\rho \in [-1,1]$ and a wide range of intermediate horizons, that is,

(9) $$ \begin{align} \sup_{|\rho|\le 1} ~ \sup_{h \le h_n} ~ \sup_{x\in \mathbf{R}} |J_n(x,h,P,\rho) - \Phi(x)| \to 0 \quad \text{as} \quad n \to \infty, \end{align} $$

where $h_n$ is any fixed sequence such that $h_n \le n$ and $h_n = o\left (n\right )$ . Assumptions 4.1 and 4.2 in Section 4 are sufficient conditions on the distribution P to obtain (9) due to Theorem 2 in Xu (2023).

The confidence interval for $\beta (\rho ,h)$ based on asymptotic distribution theory is defined as

(10) $$ \begin{align} C_n(h,1-\alpha) \equiv \left[ \hat{\beta}_n(h) - z_{1-\alpha/2}~\hat{s}_n(h),~\hat{\beta}_n(h) + z_{1-\alpha/2}~ \hat{s}_n(h)\right]\!, \end{align} $$

where $z_{1-\alpha /2} \equiv \Phi ^{-1}(1-\alpha /2)$ is the $1-\alpha /2$ quantile of the standard normal distribution. The result in (9) implies that the confidence interval $C_n(h,1-\alpha )$ is uniformly asymptotically valid in the sense that its asymptotic coverage probability is equal to the nominal level $1-\alpha $ uniformly over $\rho $ and a wide range of intermediate horizons h,

$$ \begin{align*} \sup_{|\rho|\le 1} ~ \sup_{h \le h_n} \left| P_{\rho} \left( \beta(\rho,h) \in C_n(h,1-\alpha) \right) - (1-\alpha) \right| \to 0 \quad \text{as} \quad n \to \infty, \end{align*} $$

where $h_n$ is any fixed sequence such that $h_n \le n$ and $h_n = o\left (n\right )$ . Three features of $C_n(h,1-\alpha )$ deserve further discussion. First, it is simpler to compute than the available alternatives in the sense that it does not require any tuning parameters. It is common to use heteroskedasticity- and autocorrelation-robust (HAR) standard errors for inference whenever we have dependent data. The major complication of HAR standard errors is the choice of the (truncation) tuning parameter (see Lazarus et al., 2018). In contrast, the HC standard errors $\hat {s}_n(h)$ defined in (4) are simple to compute and sufficient for inference under certain conditions on the distribution P (see Remark 2.1 for further explanation). Second, the uniform asymptotic validity of the confidence interval $C_n(h,1-\alpha )$ avoids pre-testing procedures about the nature of the data-generating process ( $|\rho |<1$ vs. $\rho =1$ ) that can distort inference (see Mikusheva, 2007). In particular, inference using $C_n(h,1-\alpha )$ holds regardless of the value of $\rho \in [-1,1]$ . Third, the confidence interval $C_n(h,1-\alpha )$ has theoretical guarantees at intermediate horizons (e.g., $h=h_n \propto n^{\zeta }$ , $\zeta \in (0,1)$ ). This is an important feature for inference on impulse response coefficients at intermediate horizons. Other methods to construct confidence intervals that work at short horizons (h fixed) may have problems at long and intermediate horizons (see Phillips (1998), Gospodinov (2004), Pesavento and Rossi (2006), Mikusheva (2012), and Montiel Olea and Plagborg-Møller (2021) for additional discussion).
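As a concrete illustration, a minimal sketch of $C_n(h,1-\alpha )$ in (10), reusing the lp_estimate helper from the sketch above; the function name ci_normal is our own.

```python
from scipy.stats import norm

def ci_normal(y, h, alpha=0.05):
    """Confidence interval C_n(h, 1-alpha) in (10): LP estimate plus/minus
    the normal critical value z_{1-alpha/2} times the HC standard error."""
    beta_hat, s_hat = lp_estimate(y, h)
    z = norm.ppf(1 - alpha / 2)      # z_{1-alpha/2}
    return beta_hat - z * s_hat, beta_hat + z * s_hat
```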

Remark 2.1. The HC standard errors $\hat {s}_n(h)$ defined in (4) are sufficient for the construction of valid confidence intervals under certain conditions on the distribution P. In particular, as pointed out by Xu (2023), it is necessary and sufficient that the scores $\{ \xi _t(\rho ,h) u_t : 1 \le t \le n-h \}$ be serially uncorrelated, where $\xi _t(\rho ,h) \equiv \sum _{\ell =1}^{h} \rho ^{h-\ell } u_{t+\ell }$ . To explain the sufficiency of this condition, we use the derivations presented on page 1811 in Montiel Olea and Plagborg-Møller (2021), which imply that the root $R_n(h)$ defined in (6) can be written as follows:

$$ \begin{align*}\frac{\left((n-h)^{-1/2} \sum_{t=1}^{n-h} \xi_t(\rho,h) u_t \right)}{ E\left[ \xi_t(\rho,h)^2 u_t^2\right]^{1/2}} \times \frac{ \left[(n-h)^{-1} \sum_{t=1}^{n-h} \hat{\xi}_t(h)^2 \hat{u}_t(h)^2\right]^{-1/2} }{E\left[ \xi_t(\rho,h)^2 u_t^2\right]^{-1/2}} + \varepsilon_n(\rho,h),\end{align*} $$

where $\varepsilon _n(\rho ,h)$ is a remainder error term. We derive three implications under Assumptions 4.1 and 4.2, presented in Section 4. First, the term in parentheses converges to a normal distribution with variance correctly scaled by the denominator when the scores are serially uncorrelated. This condition is guaranteed by part (ii) of Assumption 4.1. Second, the term between brackets converges in probability to its denominator due to serially uncorrelated scores. Third, the remainder error term $\varepsilon _n(\rho ,h)$ converges in probability to zero. Importantly, Xu (2023) proposed alternative standard errors for the construction of confidence intervals under serially correlated scores.

Remark 2.2. The lag-augmented LP regression has the purpose of making the effective regressor of interest stationary. To see this, let us use the AR(1) model in (1) to obtain $ y_{t+h} = \beta (\rho ,h) y_t + \xi _t(\rho ,h)$ , where $\xi _t(\rho ,h) = \sum _{\ell =1}^h \rho ^{h-\ell } u_{t+\ell }$ , which can be rewritten as

$$ \begin{align*}y_{t+h} = \beta(\rho,h) u_t + \rho \beta(\rho,h) y_{t-1} + \xi_t(\rho,h).\end{align*} $$

Based on the previous equality, an estimator for $\beta (\rho ,h)$ is defined as the slope coefficient of $u_t$ in the linear regression of $y_{t+h}$ on $u_t$ and $y_{t-1}$ . This estimator is ideal since the effective regressor is stationary (by assumption). However, this regression is infeasible since $u_t$ is not observed. Nevertheless, the estimator can also be obtained from the lag-augmented LP regression of $y_{t+h}$ on $y_t$ and $y_{t-1}$ since $y_t$ is a linear combination of $u_t$ and $y_{t-1}$ due to (1).

3 THE LP-RESIDUAL BOOTSTRAP

This article proposes an LP-residual bootstrap for confidence interval construction. Our confidence interval for the impulse response coefficient $\beta (\rho ,h)$ is defined as

(11) $$ \begin{align} C_n^{*}(h,1-\alpha) \equiv \left[ \hat{\beta}_n(h) - c_n^*(h,1-\alpha)~\hat{s}_n(h),~\hat{\beta}_n(h) + c_n^*(h,1-\alpha)~ \hat{s}_n(h)\right]\!, \end{align} $$

where $\hat {\beta }_n(h)$ is an estimator for $\beta (\rho ,h)$ defined in (3), $\hat {s}_n(h)$ is its HC standard error defined in (4), and $c_n^*(h, 1-\alpha )$ is a bootstrap critical value defined in (15).

3.1 Bootstrap Critical Value

Let $Y^{(n)} \equiv \{y_t : 1 \le t \le n\}$ be data generated by (1). The bootstrap critical value $c_n^*(h, 1-\alpha )$ is computed via the following steps (a code sketch of the full procedure appears after the list):

  1. Step 1: Estimate $\rho $ in the AR(1) model defined in (1) with the data $Y^{(n)}$ using linear regression, denoted by

    (12) $$ \begin{align} \hat{\rho}_n \equiv \left(\sum_{t=1}^n y_{t-1}^2\right)^{-1}\left(\sum_{t=1}^n y_{t-1}y_t\right)\!, \end{align} $$

    and compute the centered residuals

    (13) $$ \begin{align} \{ \tilde{u}_t \equiv \hat{u}_t - n^{-1}\sum_{t=1}^n \hat{u}_t : 1 \le t \le n \}, \end{align} $$

    where $\hat {u}_t \equiv y_{t} - \hat {\rho }_n y_{t-1}$ .

  2. Step 2: Generate a new sample of size n using (1), (12), and (13). Define the sample as

    $$ \begin{align*} y_{b,t}^* = \hat{\rho}_n y_{b,t-1}^* + u_{b,t}^*, \quad y_{b,0}^* = 0,~ t=1,\ldots,n, \end{align*} $$

    where $\{u_{b,t}^* : 1 \le t \le n\}$ is a random sample from the empirical distribution of the centered residuals defined in (13). The new sample $\{ y_{b,t}^* : 1 \le t \le n \} $ is called the bootstrap sample.

  3. Step 3: Compute $\hat {\beta }_{b,n}^*(h)$ and $\hat {s}_{b,n}^*(h)$ as in (3) and (4) using the lag-augmented LP regression and the bootstrap sample $\{ y_{b,t}^* : 1 \le t \le n \} $ . Define the bootstrap version of the root

    (14) $$ \begin{align} R_{b,n}^*(h) = \frac{\hat{\beta}_{b,n}^*(h) - \beta(\hat{\rho}_n,h)}{\hat{s}_{b,n}^*(h)}, \end{align} $$

    where $\beta (\rho ,h)$ and $\hat {\rho }_n$ are as in (2) and (12), respectively.

  4. Step 4: Define the bootstrap critical value as the $1-\alpha $ quantile of $|R_{b,n}^*(h)|$ conditional on the data $Y^{(n)}$ , denoted by

    (15) $$ \begin{align} c_n^{*}(h, 1-\alpha) \equiv \inf \left \{ u \in \textbf{R} : P_{\rho}\left ( |R_{b,n}^*(h)| \le u \mid Y^{(n)} \right) \ge 1-\alpha \right\}\!. \end{align} $$
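The following sketch (our illustration, reusing the lp_estimate helper from Section 2) implements Steps 1-4, with the quantile in (15) approximated over B bootstrap draws in the spirit of Remark 3.1 below:

```python
def lp_residual_bootstrap_ci(y, h, alpha=0.05, B=1000, rng=None):
    """LP-residual bootstrap interval C_n^*(h, 1-alpha) in (11); the critical
    value (15) is approximated over B draws (Remark 3.1)."""
    rng = np.random.default_rng(rng)
    n = len(y) - 1

    # Step 1: estimate rho as in (12) and center the residuals as in (13).
    rho_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
    u_tilde = y[1:] - rho_hat * y[:-1]
    u_tilde = u_tilde - u_tilde.mean()

    beta_hat, s_hat = lp_estimate(y, h)
    abs_roots = np.empty(B)
    for b in range(B):
        # Step 2: regenerate the AR(1) with resampled centered residuals.
        u_star = rng.choice(u_tilde, size=n, replace=True)
        y_star = np.zeros(n + 1)
        for t in range(1, n + 1):
            y_star[t] = rho_hat * y_star[t - 1] + u_star[t - 1]
        # Step 3: bootstrap root (14), centered at beta(rho_hat, h) = rho_hat**h.
        beta_star, s_star = lp_estimate(y_star, h)
        abs_roots[b] = abs((beta_star - rho_hat ** h) / s_star)

    # Step 4: the empirical 1-alpha quantile of the |roots| approximates (15).
    c_star = np.quantile(abs_roots, 1 - alpha)
    return beta_hat - c_star * s_hat, beta_hat + c_star * s_hat
```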

We name this procedure the LP-residual bootstrap due to Steps 2 and 3. Step 2 generates bootstrap samples based on the estimated model and a residual bootstrap procedure. Step 3 computes the bootstrap version of the root based on the lag-augmented LP regression. To our knowledge, this bootstrap procedure is new (see Remarks 3.2 and 5.1 for other bootstrap procedures involving roots based on LP estimators).

We use the bootstrap critical value $c_n^{*}(h, 1-\alpha )$ in the construction of the confidence interval defined in (11). The explicit formula in (15) has two implications. First, the bootstrap critical value $c_n^{*}(h, 1-\alpha )$ depends on the data, the sample size n, and the horizon h. Second, we can compute $c_n^{*}(h, 1-\alpha )$ with perfect accuracy whenever we use the exact empirical distribution of the centered residuals defined in (13). However, the computation of an exact distribution can be computationally demanding; therefore, it is common to approximate it using Monte Carlo procedures as we describe in Remark 3.1, which has a theoretical justification due to the Glivenko–Cantelli theorem.

Remark 3.1. It is a common practice to approximate the bootstrap critical value $c_n^{*}(h, 1-\alpha )$ using a Monte Carlo procedure (Horowitz, 2001, 2019). We generate B bootstrap samples of size n, where each b-th bootstrap sample ${\{y_{b,t}^*: 1 \le t \le n\}}$ is generated as in step 2. We then obtain $\{ |R_{b,n}^*(h)| : 1 \le b \le B \}$ , where each $R_{b,n}^*(h)$ is computed as in step 3. Finally, we approximate the bootstrap critical value ${c_n^{*}(h, 1-\alpha )}$ by the $1-\alpha $ quantile of $\{ |R_{b,n}^*(h)| : 1 \le b \le B\}$ , denoted by

$$ \begin{align*} c_{b,n}^*(h, 1-\alpha) \equiv \inf \left \{ u \in \boldsymbol{R} : \frac{1}{B} \sum_{b=1}^B I\left\{ |R_{b,n}^*(h)| \le u \right\} \ge 1-\alpha \right\}\!. \end{align*} $$

The accuracy of the approximation improves as the number of bootstrap samples B increases. We use $B=1,000$ in our simulation study presented in Section 6.

Remark 3.2. Another bootstrap procedure to approximate the infeasible critical value $c_n(h,1-\alpha )$ is presented in Section 5 of Montiel Olea and Plagborg-Møller (2021). They use the wild bootstrap procedure described in Gonçalves and Kilian (2004). For this reason, we name their procedure the LP-wild bootstrap. The only difference with respect to the LP-residual bootstrap is in Step 2. The LP-wild bootstrap defines the shocks as follows: $ u_{b,t}^* = \tilde {u}_t z_{b,t}$ for all $t = 1, \ldots , n$ , where $\{\tilde {u}_t: 1 \le t \le n\}$ are the centered residuals defined in (13) and $\{z_{b,t} : 1 \le t \le n \}$ is an i.i.d. sequence of standard normal random variables independent of the data $Y^{(n)}$ . To our knowledge, the theoretical properties of the LP-wild bootstrap are unknown. We include the LP-wild bootstrap in our simulation study presented in Section 6.
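In code, the only change relative to the sketch above is how Step 2 draws the shocks; a sketch under the same conventions (the function name wild_shocks is our own):

```python
def wild_shocks(u_tilde, rng):
    """LP-wild bootstrap shocks (Remark 3.2): u*_t = u~_t * z_t with z_t
    i.i.d. standard normal, in place of resampling the centered residuals."""
    return u_tilde * rng.standard_normal(len(u_tilde))
```

Replacing `rng.choice(u_tilde, size=n, replace=True)` with `wild_shocks(u_tilde, rng)` in the earlier sketch yields the LP-wild bootstrap.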

Remark 3.3. An alternative to the symmetric percentile-t confidence interval defined in (11) is the equal-tailed percentile-t confidence interval. The latter is proposed and recommended in Section 5 of Montiel Olea and Plagborg-Møller (2021), while the former has been found to perform better in simulations reported by Gonçalves and Kilian (2004). Furthermore, symmetric confidence intervals are known to perform better asymptotically in terms of coverage error in the case of i.i.d. data (see Hall, 1992, Sect. 3.6). For these reasons, we focus on and study the properties of the symmetric percentile-t confidence interval in the next sections. Remark 4.1 presents additional discussion of the equal-tailed percentile-t confidence interval based on the LP-residual bootstrap. We include equal-tailed percentile-t confidence intervals based on both the LP-residual and LP-wild bootstrap in our simulation study in Section 6.

Remark 3.4. We propose the LP-residual bootstrap method for constructing confidence intervals, aiming to provide a more accurate asymptotic approximation than the first-order asymptotic distribution for conducting inference. In Sections 4 and 5, we study the validity of this bootstrap method and its theoretical properties under assumptions on the distribution of the shocks and under the assumption of correct specification, i.e., the data are generated from the AR(1) model in (1). To our knowledge, the theoretical properties of the root $R_n(h)$ for general forms of misspecification are unknown. Recent work by Montiel Olea et al. (2024) implies that $R_n(h)$ is still asymptotically pivotal under a specific form of local misspecification. The analysis of the theoretical properties of the LP-residual bootstrap under misspecification is outside the scope of this article.

4 UNIFORM CONSISTENCY

We show the uniform consistency of the LP-residual bootstrap (Theorem 4.1) and that our proposed bootstrap confidence interval $C_n^*(h,1-\alpha )$ defined in (11) is uniformly asymptotically valid (Theorem 4.2). In what follows, we first present and discuss the assumptions, and we then establish the results.

The following assumption imposes restrictions on the distribution of the shocks P. It is based on the general framework developed by Xu (2023), which generalized the work of Montiel Olea and Plagborg-Møller (2021).

Assumption 4.1.

  1. i) $\{u_t: 1 \le t \le n\}$ is covariance-stationary and satisfies $E[u_t \mid \{u_s \}_{s < t}] = 0$ almost surely.

  2. ii) $E[u_t^2u_{t-s}u_{t-r}] = 0$ for all $s \neq r$ , for all $t,r,s \ge 1$ .

  3. iii) $\{u_t: 1 \le t \le n\}$ is strong mixing with mixing numbers $\{ \alpha (j): j \ge 1\}$ . There exist $\zeta> 2$ , $\epsilon>1$ , and $C_{\alpha }<\infty $ such that $\alpha (j) \le C_{\alpha }j^{-2\zeta \epsilon /(\zeta -2)}$ for all j.

  4. iv) For $\zeta $ defined in (iii), $E[u_t^{8\zeta }] \le C_8 < \infty $ , and $E[u_t^2 \mid \{ u_s \}_{s<t}] \ge C_\sigma $ almost surely.

Part (i) of Assumption 4.1 assumes that the shocks are a martingale difference sequence. This assumption allows for uncorrelated dependent shocks and implies that the shock $u_t$ is uncorrelated with $y_{t-1}$ . Part (ii) of Assumption 4.1 accommodates a large class of conditionally heteroskedastic autoregressive models (e.g., ARCH and GARCH shocks) and is common in the literature; for instance, Gonçalves and Kilian (2004) use a similar assumption (Assumption A') to prove the asymptotic consistency of the wild bootstrap for autoregressive processes. Moreover, this assumption implies that the process $\{ \xi _t(\rho ,h) u_t : 1 \le t \le n-h \}$ is serially uncorrelated, where $\xi _t(\rho ,h) \equiv \sum _{\ell =1}^{h} \rho ^{h-\ell } u_{t+\ell }$ , which is important for the use of HC standard errors as discussed in Remark 2.1. Parts (iii) and (iv) of Assumption 4.1 are mild regularity conditions on the distribution of the shocks P to establish uniform bounds on approximation errors, which can be relaxed if stronger assumptions are imposed on the serial dependence of the shocks (see Assumption B.1 in Appendix B).
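To make the class of shocks concrete, a short sketch generating GARCH(1,1) shocks, which are conditionally heteroskedastic martingale difference shocks of the kind covered by Assumption 4.1; the parameter values are illustrative assumptions, not taken from the article.

```python
def garch_shocks(n, omega=0.05, a1=0.10, b1=0.85, rng=None):
    """GARCH(1,1) shocks: u_t = sigma_t * e_t with e_t i.i.d. N(0,1) and
    sigma_t^2 = omega + a1 * u_{t-1}^2 + b1 * sigma_{t-1}^2.
    Parameters are illustrative; a1 + b1 < 1 keeps the process stationary."""
    rng = np.random.default_rng(rng)
    u = np.zeros(n)
    sigma2 = omega / (1 - a1 - b1)   # start at the unconditional variance
    for t in range(n):
        u[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + a1 * u[t] ** 2 + b1 * sigma2
    return u
```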

The next assumption is a high-level assumption and imposes additional restrictions on the distribution of the shocks P.

Assumption 4.2.

$$ \begin{align*} \lim_{M \to \infty} ~ \lim_{n \to \infty} ~ \inf_{ |\rho| \le 1 } ~ P_{\rho} \left(~ g(\rho,n)^{-2} ~n^{-1}\sum_{t=1}^{n}y_{t-1}^2 \ge 1/M ~\right) = 1, \end{align*} $$

where $ g(\rho ,k) = \left (\sum _{\ell =0}^{k-1} ~ \rho ^{2 \ell }\right )^{1/2}.$

This assumption implies that the estimator $\hat {\rho }_n(h)$ defined in (5) is well-behaved in the sense that its denominator, after being scaled by the factor $g(\rho ,n-h)$ , converges to a strictly positive limit. As a result, we can replace the residual $\hat {u}_t(h) \equiv y_t - \hat {\rho }_n(h)y_{t-1}$ by the shock $u_t$ , which yields the second and third implications discussed in Remark 2.1. We show in Proposition B.1 that Assumption 4.2 can be verified if the shocks are i.i.d. and satisfy mild regularity conditions (Assumption B.1). In Appendix C of Montiel Olea and Plagborg-Møller (2021), this assumption is verified for AR(1) models whenever a contiguity condition holds.

Assumptions 4.1 and 4.2 guarantee that the distribution $J_n(\cdot ,h,P,\rho )$ defined in (7) can be approximated by the standard normal distribution $\Phi (\cdot )$ uniformly on $\rho \in [-1,1]$ and a wide range of horizons h as in (9). Let $\hat {P}_n$ be the empirical distribution of the centered residuals defined in (13) and let $\hat {\rho }_n$ be the estimator of $\rho $ defined in (12). Using this notation $J_n(\cdot ,h,\hat {P}_n,\hat {\rho }_n)$ is the distribution of the bootstrap root $R_{b,n}^*(h)$ defined in (14) conditional on the data $Y^{(n)}$ . The next theorem shows that the distribution $J_n(\cdot ,h,P,\rho )$ can be approximated by the bootstrap distribution $J_n(\cdot ,h,\hat {P}_n,\hat {\rho }_n)$ uniformly on $\rho \in [-1,1]$ and a wide range of intermediate horizons (e.g., uniform over $h \le h_n$ , where $h_n$ is any fixed sequence such that $h_n = o(n)$ ), i.e., the LP-residual bootstrap is uniformly consistent.

Theorem 4.1. Suppose Assumptions 4.1 and 4.2 hold. Then, for any $\epsilon>0$ and for any sequence $h_n$ such that $h_n \le n$ and $h_n = o\left (n\right )$ , we have

(16) $$ \begin{align} \sup_{|\rho|\le 1} P_{\rho}\left( \sup_{h \le h_n} ~ \sup_{x\in \mathbf{R}} | J_n(x,h,P,\rho) - J_n(x,h,\hat{P}_n,\hat{\rho}_n) |> \epsilon \right) \to 0 \quad \text{as} \quad n \to \infty, \end{align} $$

where $J_n(x,h,\cdot ,\cdot )$ is as in (7), $\hat {P}_n$ is the empirical distribution of the centered residuals defined in (13), and $\hat {\rho }_n$ is as in (12).

Theorem 4.1 shows that the LP-residual bootstrap is uniformly consistent, i.e., the bootstrap distribution $J_n(\cdot ,h,\hat {P}_n,\hat {\rho }_n)$ approximates the distribution $J_n(\cdot ,h,P,\rho )$ uniformly over the parameter space ( $\rho \in [-1,1]$ ) and a wide range of intermediate horizons ( $h \le h_n)$ . Two features of this uniform approximation result deserve further discussion. First, uniform consistency of bootstrap methods over the parameter spaces of autoregressive models is not just a technical detail but a crucial property to guarantee reliable inference methods (see Mikusheva, 2007). Otherwise, for each sample size n it would be possible to find a parameter $\rho _n$ such that the distance between the distributions $J_n(\cdot ,h,\hat {P}_n,\hat {\rho }_n)$ and $J_n(\cdot ,h,P,\rho )$ is far from zero. Second, the uniform approximation over the horizons is necessary for inference purposes at intermediate horizons. Other methods that are valid for a fixed h do not necessarily work for h growing with the sample size.

The proof of Theorem 4.1 is presented in Appendix A.1. It has two main ideas. First, we show that the approximation result presented in (9) also holds for sequences of AR(1) models with i.i.d. shocks (Theorem B.1),

$$ \begin{align*} \sup_{P \in \mathbf{P}_{n,0}} ~ \sup_{h \le h_n} ~ \sup_{|\rho| \le 1} ~ \sup_{x \in \mathbf{R}} ~ \left| J_n(x,h,P,\rho) - \Phi(x) \right| \to 0 \quad \text{as} \quad n \to \infty, \end{align*} $$

where $\mathbf {P}_{n,0}$ denotes the set of all distributions that satisfy Assumption B.1 in Appendix B.2, $h_n$ is as in Theorem 4.1, $J_n(\cdot ,h,P,\rho )$ is as in (7), and $\Phi (\cdot )$ is the standard normal distribution. Assumption B.1 imposes stronger restrictions on the dependence of the shocks (i.i.d.) and some mild regularity conditions. The formal result is presented in Appendix B.2 as Theorem B.1. Second, we show that Assumptions 4.1 and 4.2 imply the existence of a sequence of events $E_n$ with probability approaching 1 such that the empirical distributions $\hat {P}_n $ conditional on the event $E_n$ verify Assumption B.1. In other words, we show that $\hat {P}_n \in \mathbf {P}_{n,0}$ holds with a probability approaching 1. The construction of the events $E_n$ relies on Lemma B.1 in Appendix B.1. We use the previous two ideas to approximate the distribution $J_n(\cdot ,h,\hat {P}_n,\hat {\rho }_n)$ by the standard normal distribution $\Phi (\cdot )$ conditional on the event $E_n$ . Finally, we conclude that the distributions $J_n(\cdot ,h,\hat {P}_n,\hat {\rho }_n)$ and $J_n(\cdot ,h,P,\rho )$ are asymptotically close since both have the same asymptotic limit.

The next result shows that the confidence interval $C_n^*(h,1-\alpha )$ defined in (11) is uniformly asymptotically valid in the sense that its asymptotic coverage probability is equal to $1-\alpha $ uniformly over $\rho $ and a wide range of horizons h.

Theorem 4.2. Suppose Assumptions 4.1 and 4.2 hold. Then, for any sequence $h_n$ such that $h_n \le n$ and $h_n = o\left (n\right )$ , we have

(17) $$ \begin{align} \sup_{|\rho|\le 1} ~ \sup_{h \le h_n} \left| P_{\rho} \left( \beta(\rho,h) \in C_n^*(h,1-\alpha) \right) - (1-\alpha) \right| \to 0 \quad \text{as} \quad n \to \infty, \end{align} $$

where $\beta (\rho ,h)$ and $C_n^*(h,1-\alpha )$ are as in (2) and (11), respectively.

Theorem 4.2 provides the theoretical justification to conduct inference on the impulse response coefficient $\beta (\rho ,h)$ using our bootstrap confidence interval $C_n^*(h,1-\alpha )$ . Note that the only difference with respect to the confidence interval $C_n(h,1-\alpha )$ defined in (10) is the critical value, which there equals $z_{1-\alpha /2}$ and is the same across sample sizes n and horizons h. Instead, we now use a critical value $c_n^*(h,1-\alpha )$ that depends on the data, the sample size, and the horizon. We evaluate the difference in coverage probability between the confidence intervals $C_n(h,1-\alpha )$ and $C_n^*(h,1-\alpha )$ using simulations in Section 6. The simulation results provide evidence that the coverage probability of our proposed confidence interval $C_n^*(h,1-\alpha )$ is closer to $1-\alpha $ than that of $C_n(h,1-\alpha )$ .
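A small-scale Monte Carlo sketch of such a comparison (ours; the design below is an illustrative assumption and does not reproduce the Section 6 study), reusing the helpers defined earlier:

```python
def coverage_experiment(rho=0.95, n=100, h=6, alpha=0.05, reps=500, seed=0):
    """Empirical coverage of C_n (normal critical value) vs C_n^* (LP-residual
    bootstrap) under i.i.d. N(0,1) shocks; all design parameters illustrative."""
    rng = np.random.default_rng(seed)
    beta_true = rho ** h
    hits_normal = hits_boot = 0
    for _ in range(reps):
        u = rng.standard_normal(n)
        y = np.zeros(n + 1)                     # y_0 = 0 as in (1)
        for t in range(1, n + 1):
            y[t] = rho * y[t - 1] + u[t - 1]
        lo, hi = ci_normal(y, h, alpha)
        hits_normal += (lo <= beta_true <= hi)
        lo, hi = lp_residual_bootstrap_ci(y, h, alpha, B=200, rng=rng)
        hits_boot += (lo <= beta_true <= hi)
    return hits_normal / reps, hits_boot / reps
```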

The proof of Theorem 4.2 is presented in Appendix A.2. It only relies on the uniform consistency of the bootstrap procedure. We next sketch the main arguments of the proof. We first note that (17) is equivalent to

$$ \begin{align*}\sup_{|\rho|\le 1} ~ \sup_{h \le h_n} \left| P_{\rho} \left( |R_n(h)| \le c_n^*(h,1-\alpha) \right) - (1-\alpha) \right| \to 0 \quad \text{as} \quad n \to \infty.\end{align*} $$

We then use that the bootstrap critical value $c_n^*(h,1-\alpha )$ lies in $[z_{1-\alpha /2 -\epsilon },~z_{1-\alpha /2 +\epsilon }]$ with probability approaching 1 for arbitrary $\epsilon>0$ (see Lemma B.3 in Appendix B.1). This result is possible because the root $R_n(h)$ is asymptotically normal and the LP-residual bootstrap is uniformly consistent. Third, using algebraic manipulation and the asymptotic normality of the root $R_n(h)$ , we can conclude that

$$ \begin{align*}\limsup_{n \to \infty} \sup_{|\rho|\le 1} \sup_{h \le h_n} \left| P_{\rho} \left( |R_n(h)| \le c_n^*(h,1-\alpha) \right) - (1-\alpha) \right| \le 2\epsilon,\end{align*} $$

which implies (17) since $\epsilon>0$ was arbitrary.

Remark 4.1. We can use the LP-residual bootstrap to construct equal-tailed percentile-t confidence intervals denoted by $C_{per-t,n}^{*}(h,1-\alpha )$ . That is,

(18) $$ \begin{align} C_{per-t,n}^{*}(h,1-\alpha) \equiv \left[ \hat{\beta}_n(h) - q_n^*(h,1-\alpha/2)~\hat{s}_n(h),~\hat{\beta}_n(h) - q_n^*(h,\alpha/2)~ \hat{s}_n(h)\right]\!, \end{align} $$

where $\hat {\beta }_n(h)$ is as in (3), $\hat {s}_n(h)$ is as in (4), and $q_n^*(h,\alpha _0)$ is the $\alpha _0$ -quantile of the bootstrap root $R_{b,n}^*(h)$ defined in (14). Three features of $C_{per-t,n}^{*}(h,1-\alpha )$ deserve further discussion. First, the bootstrap quantiles $q_n^*(h,\alpha _0)$ can be approximated using Monte Carlo procedures in a similar way as we discussed in Remark 3.1. Second, the confidence interval $C_{per-t,n}^{*}(h,1-\alpha )$ can be asymmetric around $\hat {\beta }_n(h)$ by construction, unlike $C_n^{*}(h,1-\alpha )$ , which is symmetric. Third, $C_{per-t,n}^{*}(h,1-\alpha )$ is uniformly asymptotically valid,

$$ \begin{align*}\sup_{|\rho|\le 1} \sup_{h \le h_n} \left| P_{\rho} \left( \beta(\rho,h) \in C_{per-t,n}^{*}(h,1-\alpha) \right) - (1-\alpha) \right| \to 0 \quad \text{as} \quad n \to \infty,\end{align*} $$

where $h_n$ is any fixed sequence such that $h_n \le n$ and $h_n = o\left (n\right )$ . The proof of this claim follows directly from Theorem 4.1, Lemma B.3, and the proof of Theorem 4.2. We include $C_{per-t,n}^{*}(h,1-\alpha )$ in our simulation study in Section 6.
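A sketch of the equal-tailed interval (18), under the same conventions as the earlier sketches (function names are our own):

```python
def lp_equal_tailed_ci(y, h, alpha=0.05, B=1000, rng=None):
    """Equal-tailed percentile-t interval (18): uses the alpha/2 and 1-alpha/2
    quantiles of the signed bootstrap roots rather than the |root| quantile."""
    rng = np.random.default_rng(rng)
    n = len(y) - 1
    rho_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)   # (12)
    u_tilde = y[1:] - rho_hat * y[:-1]
    u_tilde = u_tilde - u_tilde.mean()                       # (13)
    beta_hat, s_hat = lp_estimate(y, h)
    roots = np.empty(B)
    for b in range(B):
        u_star = rng.choice(u_tilde, size=n, replace=True)
        y_star = np.zeros(n + 1)
        for t in range(1, n + 1):
            y_star[t] = rho_hat * y_star[t - 1] + u_star[t - 1]
        beta_star, s_star = lp_estimate(y_star, h)
        roots[b] = (beta_star - rho_hat ** h) / s_star       # signed root (14)
    q_lo, q_hi = np.quantile(roots, [alpha / 2, 1 - alpha / 2])
    # Sign convention of (18): [beta - q_{1-a/2} s, beta - q_{a/2} s].
    return beta_hat - q_hi * s_hat, beta_hat - q_lo * s_hat
```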

Remark 4.2. For short horizons (fixed h), the available grid bootstrap (Hansen, 1999; Mikusheva, 2012) is a valid alternative to our bootstrap confidence interval $C_n^*(h,1-\alpha )$ when the conditional variance of the shocks is constant. The grid bootstrap is a method to construct confidence intervals for the parameter $\beta (\rho ,h)$ defined in (2) based on test inversion. Mikusheva (2007, 2012) shows that the grid bootstrap provides confidence intervals that are uniformly asymptotically valid in the sense that their asymptotic coverage probability is equal to $1-\alpha $ uniformly on $\rho \in [-1,1]$ . Nevertheless, when the conditional variance of the shocks is not constant (e.g., GARCH shocks), it is unknown if the confidence intervals based on the grid bootstrap are valid. In contrast, $C_n^*(h,1-\alpha )$ remains valid for a larger class of AR(1) models. We include the grid bootstrap presented in Mikusheva (2012, Sect. 3.3) in our simulation study presented in Section 6.

Remark 4.3. If we restrict our analysis to data-generating processes with weak dependence (e.g., $|\rho | \le 1-a$ for some $a \in (0,1)$ ) and consider stronger assumptions on the distribution of the shocks $\{u_t: 1 \le t \le n\}$ , then both claims in (16) and (17) can hold for long horizons (e.g., $h_n \le (1-b)n$ for some $b \in (0,1)$ ). In other words, the confidence interval $C_n^*(h,1-\alpha )$ has theoretical guarantees for long horizons under certain conditions. Assumptions 1 and 2 in Montiel Olea and Plagborg-Møller (2021) are sufficient to guarantee this claim; a formal proof can be derived following the same strategy presented in Appendix A to prove Theorems 4.1 and 4.2. In particular, the proof of Theorem B.1 can be adapted for long horizons since $|\rho | \le 1-a$ implies that $g(\rho ,h_n)^2/(n-h_n) \to 0$ as $n \to \infty $ for any $h_n \le (1-b) n$ , where $g(\rho ,h) = \{ \sum _{\ell =1}^{h} \rho ^{2(\ell -1)} \}^{1/2}$ . This technical condition is satisfied when $|\rho | \le 1$ and $h_n = o(n)$ .

Remark 4.4. For strictly stationary data, the results in Theorems 4.1 and 4.2 can be extended to the VAR models considered in Montiel Olea and Plagborg-Møller (2021) that satisfy their Assumptions 1 and 2. A proof of these extensions may be done using the finite sample inequalities presented in their online appendix and following the approach we presented in Appendixes A and B. We leave the details of a formal proof to future research. For non-stationary data, it is an open question whether the LP-residual bootstrap is consistent for VAR models. Our approach relies on verifying Assumption 4.2 for an appropriate sequence of AR(1) models; therefore, an analogous approach may require a similar step for VAR models, which is outside the scope of this article.

Remark 4.5. We can use Theorem 4.2 to show the uniform validity of alternative methods to construct confidence intervals for $\beta (\rho ,h)$ ; however, some alternative confidence intervals can be impractical at intermediate horizons. For instance, a confidence interval $C_{la-ar}^*(h,1-\alpha )$ for $\beta (\rho ,h)$ can be obtained by first constructing a confidence interval for $\rho $ using Theorem 4.2 (taking $h=1$ ) and then using $\beta (\rho ,h) = \rho ^h$ (a monotone transformation). Unfortunately, the confidence interval $C_{la-ar}^*(h,1-\alpha )$ can be very wide asymptotically for certain data-generating processes and intermediate horizons. More concretely, for any $L>1$ , it can be shown that $P_{\rho }\left ( [1/L,L] \subseteq C_{la-ar}^*(h,1-\alpha ) \right ) \to 1 $ as $n \to \infty $ when $\rho = 1-c_1/n$ (local-to-unit models) and $h \sim \sqrt {n}$ . We formally establish this result in Proposition B.2 in Appendix B. This result is similar to the ones presented in Appendix B.2.2 of Montiel Olea and Plagborg-Møller (2021) for the lag-augmented AR bootstrap confidence interval of Inoue and Kilian (2020), which is a bootstrap confidence interval related to but different from $C_{la-ar}^*(h,1-\alpha )$ .

5 ASYMPTOTIC REFINEMENTS

This section imposes conditions on the data-generating process that further restrict the class of AR(1) models relative to the one considered in Section 4, ruling out local-to-unity and unit-root models. These conditions are explicit in Theorems 5.1 and 5.2, where we calculate the sizes of the ECP for the confidence intervals $C_n(h,1-\alpha )$ and $C_n^*(h,1-\alpha )$ defined in (10) and (11), respectively. The results of these theorems show that the LP-residual bootstrap can provide asymptotic refinements for confidence intervals, that is, the ECP of $C_n^{*}(h,1-\alpha )$ is $o(n^{-1})$ , whereas the ECP of $C_n(h,1-\alpha )$ is $O(n^{-1})$ .

Section 5.1 first provides an informal discussion of the elements and challenges involved in obtaining asymptotic refinements for confidence intervals with the LP-residual bootstrap. Section 5.2 then formalizes the discussion by giving conditions on the data-generating process (Assumption 5.1 and $\rho \in [-1+a, 1-a] $ for a given $a \in (0,1)$ ) that are sufficient to establish these asymptotic refinements (Theorems 5.1 and 5.2).

5.1 Informal Discussion on Asymptotic Refinements

This section gives an informal exposition of how a bootstrap method can provide asymptotic refinements for confidence intervals when the root is asymptotically pivotal, i.e., the asymptotic distribution of the root does not depend on any unknown parameters. The explanation below is not new (see Hall and Horowitz, 1996; Horowitz, 2001, 2019; Lahiri, 2003). It has the purpose of introducing the main elements and challenges that arise in obtaining asymptotic refinements in the context of dependent data generated from an AR(1) model. It also describes the approach considered in this article (see Remark 5.6 for alternative methods).

Main Elements: For the sake of exposition, suppose the root $R_n(h)$ has an Edgeworth expansion up to an error of size $o(n^{-1})$ , that is, the distribution of the root $R_n(h)$ has an asymptotic expansion,

(19) $$ \begin{align} J_n(x,h,P,\rho) = \Phi(x) + \sum_{j=1}^2 n^{-j/2} q_j(x,h,P,\rho)\phi(x) + o\left(n^{-1}\right)\!, \end{align} $$

where $q_j(x,h,P,\rho )$ are polynomials in $x \in \textbf {R}$ such that (i) their coefficients are continuous functions of moments of P and $\rho $ and (ii) $q_j(x,h,P,\rho ) = (-1)^{j+1} q_j(-x,h,P,\rho )$ for $j=1,2$ . Similarly, suppose the bootstrap root $R_n^*(h)$ has an Edgeworth expansion,

(20) $$ \begin{align} J_n(x,h,\hat{P}_n,\hat{\rho}_n) = \Phi(x) + \sum_{j=1}^2 n^{-j/2} q_j(x,h,\hat{P}_n,\hat{\rho}_n)\phi(x) + o_p\left(n^{-1}\right)\!, \end{align} $$

where $J_n(x,h,\cdot ,\cdot )$ is as in (7), $\hat {P}_n$ is the empirical distribution of the centered residuals defined in (13), and $\hat {\rho }_n$ is the estimator of $\rho $ defined in (12).

The approximations in (19) and (20) are commonly used to show that bootstrap methods provide more accurate approximations than asymptotic distribution theory (see Hall (1992) for a textbook reference for the case of i.i.d. data). We next sketch an informal calculation of the sizes of the ECP of the confidence intervals $C_n(h,1-\alpha )$ and $C_n^{*}(h,1-\alpha )$ .

The coverage probability of $C_n(h,1-\alpha )$ is equal to $P_{\rho } \left ( |R_n(h)| \le z_{1-\alpha /2} \right )$ by the definitions of $C_n(h,1-\alpha )$ and $R_n(h)$ in (10) and (6), respectively. Note that (19) and the properties of $q_j(\cdot ,h,P,\rho )$ imply that for any $x>0$ , we have

(21) $$ \begin{align} P_{\rho} \left( |R_n(h)| \le x \right) = 2\Phi(x) - 1 + n^{-1} 2q_2(x,h,P,\rho) \phi(x) + o\left(n^{-1}\right)\!. \end{align} $$

Taking $x = z_{1-\alpha /2}$ , we conclude the size of the ECP of $C_n(h,1-\alpha )$ is $O(n^{-1})$ .

Similarly, the coverage probability of $C_n^*(h,1-\alpha )$ is equal to $P_{\rho } \left ( |R_n(h)| \le c_n^*(h,1-\alpha ) \right )$ by the definitions in (11) and (6). Now, we will argue that

(22) $$ \begin{align} P_{\rho} \left( |R_n(h)| \le c_n^*(h,1-\alpha) \right) = P_{\rho} \left( |R_n(h)| \le c_n(h,1-\alpha)\right) + o\left(n^{-1}\right)\!, \end{align} $$

where $c_n(h,1-\alpha )$ is as in (8). This is sufficient to conclude that the size of the ECP of $C_n^*(h,1-\alpha )$ is $o(n^{-1})$ since $P_{\rho } \left ( |R_n(h)| \le c_n(h,1-\alpha )\right ) = 1-\alpha $ by definition. Using the properties of $q_j(\cdot ,h,\hat {P}_n,\hat {\rho }_n)$ and (20), we obtain

(23) $$ \begin{align} P_{\rho} \left( |R_{b,n}^*(h)| \le x \mid Y^{(n)}\right) &= 2\Phi(x) - 1 + n^{-1} 2q_2(x,h,\hat{P}_n,\hat{\rho}_n) \phi(x) + o_p\left(n^{-1}\right)~\notag\\ &= 2\Phi(x) - 1 + n^{-1} 2q_2(x,h,P,\rho) \phi(x) + o_p\left(n^{-1}\right)\!, \end{align} $$

where the last equality uses $q_2(x,h,\hat {P}_n,\hat {\rho }_n) = q_2(x,h,P,\rho ) + o_p\left (1\right ) $ . Note that (23) and (21) look similar. Taking $x = c_n(h,1-\alpha )$ in (21) and $x = c_n^*(h,1-\alpha )$ in (23), it can be concluded that $c_n^*(h,1-\alpha ) = c_n(h,1-\alpha ) + o_p\left (n^{-1}\right )$ , which implies (22).

The informal explanation presented above suggests that the LP-residual bootstrap can provide asymptotic refinements when there exist valid Edgeworth expansions as in (19) and (20). We present in Section 5.2 conditions (Assumption 5.1 and $\rho \in [-1+a, 1-a] $ , where $a \in (0,1)$ ) under which the previous informal discussion can be formalized.

The Challenges: Edgeworth expansions as in (19) and (20) are not always available or valid in the context of AR(1) models. For instance, in the case of the local-to-unity and unit-root models, the Edgeworth expansion for the least-squares estimate of the AR(1) model defined in (12) is no longer valid (see Phillips, 2023). In this case, alternative asymptotic approximations were developed to prove asymptotic refinements of the bootstrap, e.g., Park (2003, 2006) and Mikusheva (2015). To our knowledge, there are no available theoretical results about valid Edgeworth expansions for the root $R_n(h)$ defined in (6) that can be applied directly.

Nevertheless, for stationary AR(1) models (when $\rho \in [-1+a,1-a]$ , $a \in (0,1)$ ), asymptotically valid Edgeworth expansions were obtained (see Phillips, 1977a, 1977b; Bose, 1988, among others). Therefore, we will restrict our analysis to stationary AR(1) models to obtain valid Edgeworth expansions for the root $R_n(h)$ and its bootstrap version $R_n^*(h)$ when $\rho \in [-1+a,1-a]$ , $a \in (0,1)$ , and h is fixed.

5.2 Formal Conditions and Results

This section presents conditions under which the LP-residual bootstrap provides asymptotic refinements to the confidence interval. Under these conditions, we calculate the sizes of the ECP for $C_n(h,1-\alpha )$ and $C_n^*(h,1-\alpha )$ in Theorems 5.1 and 5.2, respectively.

The following assumption imposes stronger conditions on the distribution of the shocks P than the ones presented in Assumption 4.1. We use this assumption and $\rho \in [-1+a, 1-a]$ for some $a \in (0,1)$ to formalize the informal explanation about asymptotic refinements presented in Section 5.1.

Assumption 5.1.

  1. i) $\{u_t: 1 \le t \le n\}$ is a sequence of i.i.d. random variables with $E[u_t] =0$ .

  2. ii) $u_t$ has a positive continuous density.

  3. iii) $E[e^{x u_t}] \le e^{x^2 c_u^2}$ for all $|x| \le 1/c_u$ and $E[u_t^2] \ge C_{\sigma } $ for some constants $c_u, C_\sigma>0$ .

Part (i) of Assumption 5.1 imposes stronger conditions on the serial dependence of the shocks. This assumption is common for the theoretical analysis of the asymptotic refinements of bootstrap methods in autoregressive models. An incomplete list of previous research that uses this assumption includes Bose (1988), Park (2003, 2006), and Mikusheva (2015). Parts (ii) and (iii) of Assumption 5.1 are sufficient technical conditions on the distribution of the shocks P to establish the existence of the Edgeworth expansions presented in (19) and (20). Part (ii) implies that the distribution $J_n(\cdot ,h,P,\rho )$ defined in (7) is continuous and guarantees that a data-dependent version of the Cramér condition holds, which is a common condition to guarantee the existence of Edgeworth expansions (see Remark 5.4 for further discussion). Part (iii) implies that moments of any sufficiently large order exist and are uniformly bounded by a function of the constant $c_u$ , which is important to guarantee the Edgeworth expansion for the bootstrap distribution $J_n(\cdot ,h,\hat {P}_n,\hat {\rho }_n)$ . Although this condition is strong, it is not atypical in the literature on asymptotic refinements of bootstrap methods with dependent data; for instance, Hall and Horowitz (1996) and Inoue and Shintani (2006) assume the existence of 33rd and 36th moments, respectively, while Andrews (2002) assumes that all moments exist.

We rely on Assumption 5.1, the approach and results presented in Bhattacharya and Ghosh (1978) and Bhattacharya (1987), and the general framework developed by Götze and Hipp (1983) to prove the existence of Edgeworth expansions with dependent data. The framework of Götze and Hipp (1983) requires weakly dependent data and the verification of stronger regularity conditions than the ones needed in the case of i.i.d. data (see Hall (1992) and Lahiri (2003) for textbook references). Therefore, we restrict our analysis to data-generating processes with weak dependence (e.g., $|\rho | \le 1-a$ for some $a \in (0,1)$ ), in a similar way to previous research on asymptotic refinements involving dependent data, which includes Bose (1988), Hall and Horowitz (1996), Lahiri (1996), Andrews (2002, 2004), and Inoue and Shintani (2006). It is an open question whether there exist Edgeworth expansions as in (19) and (20) for the case of local-to-unity or unit-root models. See Remark 5.6 for further discussion on alternative methods and available results.

Theorem 5.1. Suppose Assumption 5.1 holds. Fix a given $h \in \textbf {N}$ and $a \in (0,1)$ . Then, for any $\rho \in [-1+a,1-a]$ , we have

(24) $$ \begin{align} | P_{\rho} \left( \beta(\rho,h) \in C_n(h,1-\alpha) \right) - (1-\alpha) | = O(n^{-1}), \end{align} $$

where $\beta (\rho ,h)$ is as in (2) and $C_n(h,1-\alpha )$ is as in (10).

The ECP of $C_n(h,1-\alpha )$ is of similar size to the one derived in our informal explanation in Section 5.1. Similar ECP sizes were obtained for symmetric confidence intervals in the i.i.d. data case (see Hall, Reference Hall1992; Horowitz, Reference Horowitz2001, Reference Horowitz2019).
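
To see where the $O(n^{-1})$ rate in (24) comes from, the following display sketches the calculation, anticipating the expansion (A.4) established in the proof. Since $\phi $ is even and $q_1(\cdot ,h,P,\rho )$ is symmetric, the $n^{-1/2}$ terms cancel when the two tails are combined:

$$ \begin{align*} P_{\rho}\left( |R_n(h)| \le z_{1-\alpha/2} \right) &= J_n(z_{1-\alpha/2},h,P,\rho) - J_n(-z_{1-\alpha/2},h,P,\rho) + r_n \\ &= (1-\alpha) + 2n^{-1} q_2(z_{1-\alpha/2},h,P,\rho)\,\phi(z_{1-\alpha/2}) + O\left(n^{-1-\epsilon}\right)\!, \end{align*} $$

where $r_n = P_{\rho}\left( R_n(h) = -z_{1-\alpha/2} \right)$ is of order $O(n^{-1-\epsilon})$. The surviving $n^{-1}$ term determines the size of the ECP; this calculation is formalized in (A.5) in Appendix A.3.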

The proof of Theorem 5.1 is presented in Appendix A.3. It uses two main ideas developed previously in the literature. First, we approximate the distribution $J_n(\cdot ,h,P,\rho )$ by another distribution $\tilde {J}_n(\cdot ,h,P,\rho )$ up to an error of size $O\left (n^{-1-\epsilon }\right )$ for a fixed $\epsilon \in (0,1/2)$; a similar approach has been used in Hall and Horowitz (Reference Hall and Horowitz1996) and Andrews (Reference Andrews2002, Reference Andrews2004). Second, we use the fact that the distribution $\tilde {J}_n(\cdot ,h,P,\rho )$ admits an Edgeworth expansion up to an error of size $O\left (n^{-3/2}\right )$ based on the results of Bhattacharya and Ghosh (Reference Bhattacharya and Ghosh1978) and Götze and Hipp (Reference Götze and Hipp1983, Reference Götze and Hipp1994) (see Theorem B.2 in Appendix B.3). These two ideas guarantee the existence of the Edgeworth expansion presented in (19). We then conclude the proof by standard derivations similar to those in our informal explanation in Section 5.1.

The next theorem shows that the LP-residual bootstrap provides asymptotic refinements to the confidence intervals. In other words, the size of the ECP of our bootstrap confidence interval defined in (11) for $\beta (\rho ,h)$ is $o(n^{-1})$ .

Theorem 5.2. Suppose Assumption 5.1 holds. Fix a given $h \in \textbf {N}$ and $a \in (0,1)$ . Then, for any $\rho \in [-1+a,1-a]$ and $\epsilon \in (0,1/2)$ , we have

(25) $$ \begin{align} | P_{\rho} \left( \beta(\rho,h) \in C_n^*(h,1-\alpha) \right) - (1-\alpha) | = o \left( n^{-(1+\epsilon)} \right)\!, \end{align} $$

where $\beta (\rho ,h)$ is as in (2) and $C_n^*(h,1-\alpha )$ is as in (11).

Theorem 5.2 presents the size of the ECP of the confidence interval $C_n^*(h,1-\alpha )$ in (25). It is similar to the size derived in our informal explanation in Section 5.1, but it is typically larger than the ECP sizes obtained for symmetric bootstrap confidence intervals in the i.i.d. data case (see Hall, Reference Hall1992; Horowitz, Reference Horowitz2001, Reference Horowitz2019).

The proof of Theorem 5.2 is presented in Appendix A.4. It relies on two claims: the existence of the Edgeworth expansion for the distribution $J_n(\cdot ,h,P,\rho )$ and the existence of constants $C_1$ and $C_2$ such that $P_{\rho }(|\Delta _n|> C_1 n^{-(1+\epsilon )} ) \le C_2 n^{-(1+\epsilon )} $ , where $\Delta _n = c_n^*(h,1-\alpha )-c_n(h,1-\alpha )$ , and $c_n(h,1-\alpha )$ and $c_n^*(h,1-\alpha )$ are defined in (8) and (15), respectively. We next sketch the proof based on those two claims. We can derive

$$ \begin{align*} P_{\rho} \left( \beta(\rho,h) \in C_n^*(h,1-\alpha) \right) &= P_{\rho} \left( |R_n(h)| \le c_n^*(h,1-\alpha) \right) \\&= P_{\rho} \left( |R_n(h)| \le c_n(h,1-\alpha) + \Delta_n, \left| \Delta_n \right|\le C_1 n^{-(1+\epsilon)} \right)\\& \quad + O\left( n^{-(1+\epsilon)} \right)\\&= 1-\alpha + O\left( n^{-(1+\epsilon)} \right)\!, \end{align*} $$

where the last equality follows from the existence of the Edgeworth expansion for the distribution $J_n(\cdot ,h,P,\rho )$ (our first claim), which implies

$$ \begin{align*}P_{\rho} \left( |R_n(h)| \le c_n(h,1-\alpha) + O\left( n^{-(1+\epsilon)} \right) \right) = 1-\alpha + O\left( n^{-(1+\epsilon)} \right)\!.\end{align*} $$

Note that the first claim follows from Theorem 5.1. To prove our second claim, we first show that there is an event $E_n$ such that (i) $J_n(\cdot ,h,\hat {P}_n,\hat {\rho }_n)$ has an Edgeworth expansion as in (20) conditional on $E_n$ and (ii) the probability of the complement of $E_n$ is equal to $O\left (n^{-(1+\epsilon )}\right )$ for any $\epsilon \in (0,1/2)$ (see Lemma B.5 in Appendix B.1). We then follow standard arguments in the literature to prove this claim. Finally, note that $O(n^{-(1+\epsilon )})$ for any $\epsilon \in (0,1/2)$ is equivalent to $o(n^{-(1+\epsilon )})$ for any ${\epsilon \in (0,1/2)}$ , which is the error stated in Theorem 5.2.

Remark 5.1. The bootstrap methods proposed in Hall and Horowitz (Reference Hall and Horowitz1996) and Andrews (Reference Andrews2002) can be adapted to construct confidence intervals for the impulse response $\beta (\rho ,h)$ defined in (2). Four points based on their framework and results deserve further discussion. First, their bootstrap methods rely on the nonoverlapping block bootstrap (Carlstein (Reference Carlstein1986)) and the overlapping block bootstrap (Kunsch (Reference Kunsch1989)). Second, they show that their bootstrap methods provide asymptotic refinements to the critical values of t-tests based on generalized method of moments (GMM) estimators $\hat {\theta }_T$ and weakly dependent data ${\{Z_t:1 \le t \le n\}}$. One of their main conditions is that the sequence of moment functions $\{g(Z_t,\theta ): t \ge 1 \}$ is uncorrelated beyond some finite lag, i.e., for some $\kappa>0$, we have $E[g(Z_t,\theta )g(Z_s,\theta )'] = 0$ for any $t,s\ge 1$ such that $|t-s|>\kappa $. Third, the LP estimator $\hat {\beta }_n(h)$ defined in (3) can be presented as a GMM estimator using the dependent data $\{Z_t = (y_{t-1}, y_t, y_{t+h}) : 1 \le t \le n\}$ and the moment function $g(y_{t+h},x_t,\theta )=(y_{t+h}-\theta ' x_t)x_t$, where $x_t = (y_t, y_{t-1})'$. We can then invoke their results and use their bootstrap methods, but only for the case $|\rho |<1$ and under additional assumptions; note that their main condition can be verified with $\kappa = h$. Fourth, we can construct confidence intervals for $\beta (\rho ,h)$ based on their asymptotic distribution theory.

Remark 5.2. As we mentioned in Remark 5.1, we can use the bootstrap methods presented in Hall and Horowitz (Reference Hall and Horowitz1996) and Andrews (Reference Andrews2002, Reference Andrews2004) to construct confidence intervals for $\beta (\rho ,h)$ since the LP estimator $\hat {\beta }_n(h)$ defined in (3) can be presented as a GMM estimator. Their results provide sizes of the ECP of these confidence intervals that are qualitatively similar to the one found in Theorem 5.2.

Remark 5.3. The size of the ECP of $C_{per-t,n}^{*}(h,1-\alpha )$ is $O(n^{-1})$ . We presented and discussed the equal-tailed percentile-t confidence interval $C_{per-t,n}^{*}(h,1-\alpha )$ in Remark 4.1. To compute the size of its ECP, we can use the existence of the Edgeworth expansions presented in (19) and (20) and Theorem 5.2 in Hall (Reference Hall1992). The size of the ECP of $C_{per-t,n}^{*}(h,1-\alpha )$ is similar to the one obtained in (24) for the ECP of $C_n(h,1-\alpha )$ ; therefore, the LP-residual bootstrap does not provide asymptotic refinement for equal-tailed percentile-t confidence intervals. Similar conclusions were obtained for the case of i.i.d. data (see Hall, Reference Hall1992; Horowitz, Reference Horowitz2001, Reference Horowitz2019).

Remark 5.4. We use part (ii) of Assumption 5.1 to verify that a dependent-data version of the Cramér condition required in Götze and Hipp (Reference Götze and Hipp1983) holds, which is an important condition for the existence of Edgeworth expansions in the dependent-data case. However, verifying that condition is quite difficult in general, as pointed out by Hall and Horowitz (Reference Hall and Horowitz1996) and Götze and Hipp (Reference Götze and Hipp1994), among others. We therefore proceed in two steps based on the results of Götze and Hipp (Reference Götze and Hipp1994), which propose simple and verifiable conditions that guarantee the conditions required by Götze and Hipp (Reference Götze and Hipp1983), including the dependent-data version of the Cramér condition. We first approximate the distribution $J_n(\cdot ,h,P,\rho )$ by a distribution $\tilde {J}_n(\cdot ,h,P,\rho )$. We then use part (ii) of Assumption 5.1 to verify the conditions required in Theorem 1.2 of Götze and Hipp (Reference Götze and Hipp1994), which guarantee the existence of an Edgeworth expansion for the distribution $\tilde {J}_n(\cdot ,h,P,\rho )$.

Remark 5.5. For strictly stationary data-generating processes, the results in Theorems 5.1 and 5.2 can be extended to the family of VAR models that satisfy similar assumptions to the ones presented in Assumption 5.1, which are stronger than Assumptions 1 and 2 in Montiel Olea and Plagborg-Møller (Reference Montiel Olea and Plagborg-Møller2021). These extensions can be shown by verifying the conditions required in Götze and Hipp (Reference Götze and Hipp1994). We leave the details of a formal proof for the VAR models for future research.

Remark 5.6. An alternative method for asymptotically approximating a finite-sample distribution is the stochastic embedding and strong approximation principle used in Park (Reference Park2003, Reference Park2006) and Mikusheva (Reference Mikusheva2015). Using this method in the local-to-unity asymptotic framework for the AR(1) model, Mikusheva (Reference Mikusheva2015) showed that the grid bootstrap version of the t-statistic approximates its finite-sample distribution up to an error of size $o(n^{-1/2})$. It is an open question whether these techniques can be adapted to show that the LP-residual bootstrap provides asymptotic refinements to the confidence intervals when $\rho =1$.

6 SIMULATION STUDY

We examine the finite sample performance of $C_n^*(h,1-\alpha )$ defined in (11) using different data-generating processes. We consider a sample size $n = 95$ , which is the median sample size based on 71 papers that have utilized the LP approach (see Herbst and Johannsen, Reference Herbst and Johannsen2024). Additionally, we examine other confidence intervals presented in the article.

6.1 Monte Carlo Design

We use four designs for the distribution of the shocks $\{u_t:1 \le t \le n\}$ and two values for the parameter $\rho \in \{0.95, 1\}$ in our Monte Carlo simulation. The shocks are defined according to the GARCH(1,1) model:

$$ \begin{align*} u_t = \tau_t v_t, \quad \tau_t^2 = \omega_0 + \omega_1 u_{t-1}^2 + \omega_2 \tau_{t-1}^2, \quad v_t \text{ are } i.i.d., \end{align*} $$

where the distribution of $v_t$ and the parameter vector $(\omega _0,\omega _1,\omega _2)$ are specified as follows:

Design 1: $v_t \sim N(0,1)$, $\omega _0=1$, and $\omega _1=\omega _2=0$.

Design 2: $v_t \sim N(0,1)$, $\omega _0=0.05$, $\omega _1= 0.3$, and $\omega _2=0.65$.

Design 3: $v_t \sim t_4/\sqrt {2}$, $\omega _0=1$, and $\omega _1=\omega _2=0$.

Design 4: $v_t|B_t=j \sim N(m_j,\sigma _j^2)$, where $B_t \in \{0,1\}$, $B_t = 1$ with probability ${p = 0.25}$, $m_0 = 2/\sigma _2$, $m_1 =-6/\sigma _2$, $\sigma _0 = 0.5/\sigma _2$, $\sigma _1 = 2/\sigma _2$, and $\sigma _2^2 = p(m_1^2+\sigma _1)+(1-p)(m_0^2+\sigma _0)$, $\omega _0=0.05$, $\omega _1= 0.3$, and $\omega _2=0.65$.
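
To fix ideas, the following minimal sketch simulates data from the AR(1) model (1) with the GARCH(1,1) shocks of design 2; the function name and structure are ours and purely illustrative, not the code behind the reported simulations. Replacing the standard normal draws for $v_t$ with scaled $t_4$ draws or the two-component mixture yields designs 3 and 4.

```python
import numpy as np

def simulate_design2(n=95, rho=0.95, burn=200, seed=0):
    """Simulate y_t = rho * y_{t-1} + u_t (y_0 = 0) with GARCH(1,1) shocks:
    u_t = tau_t * v_t, tau_t^2 = 0.05 + 0.3 * u_{t-1}^2 + 0.65 * tau_{t-1}^2."""
    rng = np.random.default_rng(seed)
    w0, w1, w2 = 0.05, 0.30, 0.65
    v = rng.standard_normal(n + burn)
    u = np.empty(n + burn)
    tau2 = w0 / (1.0 - w1 - w2)      # unconditional variance (= 1 here)
    for t in range(n + burn):
        u[t] = np.sqrt(tau2) * v[t]
        tau2 = w0 + w1 * u[t] ** 2 + w2 * tau2
    u = u[burn:]                     # discard the GARCH burn-in
    y = np.zeros(n + 1)              # y[0] = 0, as in model (1)
    for t in range(1, n + 1):
        y[t] = rho * y[t - 1] + u[t - 1]
    return y, u
```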

We consider nine different confidence intervals for each design and each value of $\rho $ . All our confidence intervals use the HC standard errors $\hat {s}_n(h)$ defined in (4). Additionally, we consider alternative HC standard errors $\hat {s}_{j,n}(h)$ defined as

$$ \begin{align*} \hat{s}_{j,n}(h) \equiv \left( \sum_{t=1}^{n-h} \hat{u}_t(h)^2 \right)^{-1/2} \left(\sum_{t=1}^{n-h} \hat{\xi}_{j,t}(h)^2 \hat{u}_t(h)^2 \right)^{1/2} \left( \sum_{t=1}^{n-h} \hat{u}_t(h)^2 \right)^{-1/2}, \end{align*} $$

for $j=2,3$ , where $\hat {\xi }_{2,t}(h)^2 = \hat {\xi }_{t}(h)^2/(1-\mathbb {P}_{h,tt})$ and $\hat {\xi }_{3,t}(h)^2 = \hat {\xi }_{t}(h)^2/(1-\mathbb {P}_{h,tt})^2$ . We use the projection matrix $\mathbb {P}_h = \mathbb {X}_h (\mathbb {X}_h'\mathbb {X}_h)^{-1}\mathbb {X}_h'$ , where $\mathbb {X}_h$ is a matrix with row elements equal to $(\hat {u}_t(h),~ y_{t-1})$ for $t=1,\ldots ,n-h$ . The confidence intervals that we use are listed below.

1. RB: confidence interval as in (11) based on the LP-residual bootstrap.

2. RB ${}_{\boldsymbol {per-t}}$: equal-tailed percentile-t confidence interval as in (18). It is based on the LP-residual bootstrap and discussed in Remark 4.1.

3. RB ${}_{\boldsymbol {hc3}}$: confidence interval as in (11) but using $\hat {s}_{3,n}(h)$ and $c_{3,n}^*(h,1-\alpha )$ instead of $\hat {s}_{n}(h)$ and $c_n^*(h,1-\alpha )$, where $c_{3,n}^*(h,1-\alpha )$ is computed as in Section 3.1 but using $\hat {s}_{3,n}^*(h)$ instead of $\hat {s}_{n}^*(h)$.

4. WB: confidence interval as in (11) but using $c_n^{wb,*}(h,1-\alpha )$ instead of ${c_n^*(h,1-\alpha )}$, where $c_n^{wb,*}(h,1-\alpha )$ is based on the LP-wild bootstrap (see Remark 3.2).

5. WB ${}_{\boldsymbol {per-t}}$: equal-tailed percentile-t confidence interval as in (18) but using $q_n^{wb,*}(h,\alpha _0)$ instead of $q_n^*(h,\alpha _0)$, where $q_n^{wb,*}(h,\alpha _0)$ is based on the LP-wild bootstrap discussed in Remark 3.2.

6. GB ${}_{\boldsymbol {LR}}$: confidence interval based on the grid bootstrap presented in Section 3.3 in Mikusheva (Reference Mikusheva2012). It uses the LR statistic.

7. AA: standard confidence interval as in (10).

8. AA ${}_{\boldsymbol {hc2}}$: standard confidence interval as in (10) but using $\hat {s}_{2,n}(h)$ instead of $\hat {s}_{n}(h)$.

9. AA ${}_{\boldsymbol {hc3}}$: standard confidence interval as in (10) but using $\hat {s}_{3,n}(h)$ instead of $\hat {s}_{n}(h)$.
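
To make the leverage corrections in $\hat {s}_{2,n}(h)$ and $\hat {s}_{3,n}(h)$ concrete, the following sketch computes them from the LP residuals $\hat {\xi }_t(h)$, the AR residuals $\hat {u}_t(h)$, and the lagged outcome; it is our own illustrative code for the formulas above, with hc = 1 corresponding to the uncorrected standard error $\hat {s}_{n}(h)$.

```python
import numpy as np

def leverage_corrected_se(xi, u, y_lag, hc=3):
    """HC standard error with leverage correction: divide xi_t^2 by
    (1 - P_tt)^(hc - 1), where P_tt is the diagonal of the projection
    matrix P_h built from rows (u_t(h), y_{t-1}), as in Section 6.1."""
    Z = np.column_stack([u, y_lag])                  # rows of X_h
    Ptt = np.einsum('ij,ij->i', Z @ np.linalg.inv(Z.T @ Z), Z)
    xi2 = xi ** 2 / (1.0 - Ptt) ** (hc - 1)          # hc = 1: no correction
    return np.sqrt((xi2 * u ** 2).sum()) / (u @ u)
```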

6.2 Discussion and Results

In all the designs, the shocks have zero mean and variance one; for instance, in design 2, the unconditional variance of the GARCH(1,1) shocks is $\omega _0/(1-\omega _1-\omega _2) = 0.05/(1-0.3-0.65) = 1$. Designs 1 and 2 verify Assumption 4.1 presented in Section 4. Design 1 also verifies Assumption 4.2 due to Proposition B.1 in Appendix B.2. Assumption 4.2 can be tedious to verify in general since it involves computing a probability for all the parameters $\rho $ in the parameter space and taking their infimum. In contrast, designs 3 and 4 do not verify all the parts of Assumption 4.1. Design 3 considers shocks without a fourth moment, i.e., it does not verify part (iv) of Assumption 4.1, a regularity condition. Design 4 considers a distribution of the shocks (GARCH errors with asymmetric $v_t$ and nonzero skewness) that lies outside the class of conditionally heteroskedastic processes that we consider in this article, i.e., it does not verify part (ii) of Assumption 4.1. As we discussed in Remark 2.1, part (ii) of Assumption 4.1 is a sufficient condition for the validity of the HC standard errors $\hat {s}_n(h)$ in the construction of confidence intervals.

Tables 1 and 2 report the coverage probabilities (in %) of our simulations. Columns are labeled according to the confidence intervals specified in Section 6.1. For all designs of the shock distribution and values of $\rho $, we use 5,000 simulations to generate data with a sample size $n = 95$ based on the AR(1) model (1). In each simulation, we compute the nine confidence intervals described above for horizons $h \in \{1,6,12,18\}$. The confidence intervals have a nominal level equal to $1-\alpha = 90\%$. The bootstrap critical values are computed using $B=1,000$ as described in Remark 3.1. We summarize our findings from the simulations below.

Table 1 Coverage probability (in %) of confidence intervals for $\beta (\rho ,h)$ with a nominal level of 90% and $n = 95$

Note: In total, 5,000 simulations and 1,000 bootstrap iterations.

Table 2 Coverage probability (in %) of confidence intervals for $\beta (\rho ,h)$ with a nominal level of 90% and $n = 95$

Note: In total, 5,000 simulations and 1,000 bootstrap iterations.

Five features of Table 1 deserve discussion. First, it shows that our recommended confidence interval RB has a coverage probability closer to 90% than the confidence intervals AA, AA ${}_{\boldsymbol {hc2}}$, and AA ${}_{\boldsymbol {hc3}}$ for designs 1 and 2, all values of $\rho $, and all horizons h, with a few exceptions. The lowest coverage probabilities of RB, AA, AA ${}_{\boldsymbol {hc2}}$, and AA ${}_{\boldsymbol {hc3}}$ are $85\%$, $77\%$, $78\%$, and $79\%$, respectively, and occur when $\rho =1$ and $h=18$. Second, RB and RB ${}_{\boldsymbol {hc3}}$ perform better than RB ${}_{\boldsymbol {per-t}}$, especially when $\rho = 1$ and the horizon is a significant fraction of the sample size ($h \in \{12,18\}$). Third, WB and WB ${}_{\boldsymbol {per-t}}$ have larger coverage probability than RB for designs 1 and 2, all values of $\rho $, and all horizons h, with a few exceptions. The larger coverage of WB and WB ${}_{per-t}$ is associated with a larger median length of their confidence intervals, as reported in Table E.1 in the Supplementary Material. Fourth, AA ${}_{hc3}$ has a coverage probability closer to $90\%$ and larger than that of AA and AA ${}_{hc2}$ for designs 1 and 2, all values of $\rho $, and all horizons h. This finding suggests that using $\hat {s}_{3,n}(h)$ instead of $\hat {s}_n(h)$ can improve the coverage probability of the confidence interval; however, confidence intervals based on bootstrap methods (e.g., RB and WB ${}_{per-t}$) have coverage probabilities even closer to $90\%$. Fifth, GB ${}_{LR}$ has a coverage probability close to 90% in design 1 (i.i.d. shocks), while it has some distortions in design 2 that are larger when $\rho = 0.95$. As we mentioned in Remark 4.2, it is unknown whether the grid bootstrap is valid for design 2. The coverage probability of GB ${}_{LR}$ is constant across horizons because the LR statistic is invariant to monotonic transformations (see Section 4.3 and footnote 6 in Mikusheva (Reference Mikusheva2012) for more details).

Table 2 presents results for designs 3 and 4. Our findings for design 3 are qualitatively similar to those in Table 1, discussed above. This suggests that the failure of part (iv) of Assumption 4.1 (a regularity condition) does not have a major effect on the coverage probability of the confidence intervals that we considered. In contrast, design 4 shows that some of our qualitative findings can change if part (ii) of Assumption 4.1 fails. This result is consistent with existing theory, since this assumption is a sufficient condition for the validity of confidence intervals that use the HC standard errors $\hat {s}_n(h)$ (see Remark 2.1). In particular, RB ${}_{per-t}$ has a coverage probability closer to $90\%$ and larger than that of RB and RB ${}_{hc3}$. The small sample size ($n=95$) does not explain the findings for design 4: we obtain similar results for a sample size $n=240$ in Table E.3 in the Supplementary Material.

Finally, Table E.2 in the Supplementary Material reports the statistical power of the confidence intervals specified in Section 6.1. Here, statistical power refers to the coverage probability (in %) of (size-adjusted) confidence intervals evaluated at parameter values different from the true one; in this sense, a low coverage probability is desirable. We find that all the confidence intervals have coverage probability around $80\%$ at horizon $h=1$ in designs 1–3, which suggests that they have statistical power at $h=1$. We also notice that RB ${}_{per-t}$, WB ${}_{per-t}$, and GB ${}_{LR}$ have a coverage probability strictly lower than $90\%$ at horizon $h=6$ in designs 1–3; moreover, their coverage probability is lower than that of all the other confidence intervals. Finally, all the confidence intervals have coverage probability above $90\%$ in design 4, with the exception of GB ${}_{LR}$ at horizon $h=1$.

7 LP-RESIDUAL BOOTSTRAP FOR VAR MODELS

This section describes the LP-residual bootstrap method to construct confidence intervals for a scalar function of impulse responses of VAR(p) models, where p denotes the number of lags. More concretely, we propose the confidence interval in (26) for $\nu ' \beta _{h,i}$, where $\beta _{h,i} \in \mathbf {R}^k$ is the vector containing the impulse response coefficients of all the reduced-form shocks on variable i at h periods in the future. Here, $\nu \in \mathbf {R}^k\setminus \{ 0 \}$ is a user-specified vector; e.g., $\nu = e_j$ (the j-th unit vector) implies that $\nu ' \beta _{h,i}$ is the impact of the j-th reduced-form shock on variable i at h periods in the future.

The confidence interval for $\nu ' \beta _{h,i}$ is defined as

(26) $$ \begin{align} C_n^{*}(h,1-\alpha) \equiv \left[ \nu'\hat{\beta}_{i,n}(h) - c_n^*(h,1-\alpha)~\hat{s}_{i,n}(h,\nu),~\nu' \hat{\beta}_{i,n}(h) + c_n^*(h,1-\alpha)~ \hat{s}_{i,n}(h,\nu)\right]\!, \end{align} $$

where $\hat {\beta }_{i,n}(h)$ , $\hat {s}_{i,n}(h,\nu )$ , and $c_n^*(h,1-\alpha )$ are defined in (27), (28), and (31), respectively.

Let $\{y_t \in \mathbf {R}^k: 1 \le t \le n\}$ be the available time-series data. Suppose the data have been demeaned. Denote $X_t = (y_{t-1}',\ldots ,y_{t-p}')'$ for all $t=p+1,\ldots , n$. Let $\hat {\beta }_{i,n}(h)$ be obtained from an OLS regression of $y_{i,t+h}$ on $(y_t',X_t')$,

(27) $$ \begin{align} y_{i,t+h} = \hat{\beta}_{i,n}(h)' y_t + \hat{\gamma}_{i,n}(h)' X_t + \hat{\xi}_{i,t}(h). \end{align} $$

Let $ \hat {s}_{i,n}(h,\nu )$ be the standard error for $\nu '\hat {\beta }_{i,n}(h)$ defined by

(28) $$ \begin{align} \hat{s}_{i,n}(h,\nu) = \frac{1}{n-h-p} \left \{ \nu' \hat{\Sigma}(h)^{-1} \left( \sum_{t=p+1}^{n-h} \hat{\xi}_{i,t}(h)^2 \hat{u}_t(h) \hat{u}_t(h)' \right) \hat{\Sigma}(h)^{-1} \nu \right\}^{1/2}, \end{align} $$

where

$$ \begin{align*}\hat{u}_t(h) = y_t - \hat{A}(h) X_t, \quad \hat{A}(h) = \left( \sum_{t=p+1}^{n-h} y_t X_t' \right) \left( \sum_{t=p+1}^{n-h} X_t X_t' \right)^{-1}\end{align*} $$

and

$$ \begin{align*}\hat{\Sigma}(h) = \frac{1}{n-h-p} \sum_{t=p+1}^{n-h} \hat{u}_t(h) \hat{u}_t(h)'. \end{align*} $$

Finally, let $c_n^*(h, 1-\alpha )$ be the bootstrap critical value, computed via the following steps:

Step 1: Estimate a VAR(p) model with the data $Y^{(n)}$ using linear regression,

    $$ \begin{align*}y_t = \hat{A}_n X_t + \hat{u}_t, ~ t = p+1,\ldots, n,\end{align*} $$

    where

    (29) $$ \begin{align} \hat{A}_n = \left( \sum_{t=p+1}^{n} y_t X_t' \right) \left( \sum_{t=p+1}^{n} X_t X_t' \right)^{-1}, \end{align} $$

    and compute the centered residuals

    (30) $$ \begin{align} \left\{ \tilde{u}_t \equiv \hat{u}_t - \frac{1}{n-p}\sum_{t=p+1}^n \hat{u}_t : p+1 \le t \le n \right \}\!. \end{align} $$
Step 2: Generate B new samples of size n using (29) and (30). Each sample is generated as

    $$ \begin{align*} y_{b,t}^* = \sum_{\ell=1}^p \hat{A}_{n,\ell} ~ y_{b,t-\ell}^* + u_{b,t}^*, \quad t= p + 1,\ldots,n, \end{align*} $$

    where the initial p observations $(y_{b,1}^*,\ldots ,y_{b,p}^*)$ are drawn at random from the $n-p+1$ blocks of p consecutive observations in the original data. Here, $\hat {A}_n = (\hat {A}_{n,1},\ldots , \hat {A}_{n,p})$ are matrices estimated in (29) and $\{u_{b,t}^* : 1 \le t \le n\}$ is a random sample from the empirical distribution of the centered residuals defined in (30). The new sample $\{ y_{b,t}^* : 1 \le t \le n \} $ is called the bootstrap sample.

Step 3: Compute $\hat {\beta }_{b,i,n}^*(h)$ and $\hat {s}_{b,i,n}^*(h,\nu )$ as in (27) and (28) using the lag-augmented LP regression and the bootstrap sample $\{ y_{b,t}^* : 1 \le t \le n \} $ for each ${b=1,\ldots , B}$. Define

    $$ \begin{align*} R_{b,n}^*(h,\nu) = \frac{\nu'\hat{\beta}_{b,i,n}^*(h) - \nu'\beta_i(\hat{A}_n,h)}{\hat{s}_{b,i,n}^*(h,\nu)},~ b=1,\ldots, B, \end{align*} $$

where $\beta _i(A,h) \in \mathbf {R}^k$ is the impulse response of all reduced-form shocks on variable i at horizon h implied by the VAR(p) model with coefficients $A= (A_1,\ldots ,A_p)$. Here, $\hat {A}_n$ is as in (29).

Step 4: Compute the $1-\alpha $ quantile of the B draws of $|R_{b,n}^*(h,\nu )|$. Denote this by

    (31) $$ \begin{align} c_n^{*}(h, 1-\alpha) \equiv \inf \left \{ u \in \textbf{R} : \frac{1}{B} \sum_{b=1}^B I\{ |R_{b,n}^*(h,\nu)| \le u \} \ge 1-\alpha \right\}\!. \end{align} $$
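
To illustrate the four steps, the following sketch specializes them to a single variable and one lag ($k = p = 1$), in which case the procedure reduces to the LP-residual bootstrap for the AR(1) model of Section 3. All function names are ours, the initial-observation convention is one plausible choice, and the code is illustrative rather than the authors' implementation.

```python
import numpy as np

def lp_slope_and_se(y, h):
    """LP slope of y_{t+h} on (y_t, y_{t-1}) and an HC standard error,
    mirroring the AR(1) formulas; illustrative only."""
    n = len(y) - 1
    T = n - h                                    # usable sample t = 1, ..., n - h
    Y = y[1 + h:n + 1]                           # y_{t+h}
    X = np.column_stack([y[1:T + 1], y[:T]])     # (y_t, y_{t-1})
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    xi = Y - X @ coef                            # LP residuals xi_t(h)
    rho = (y[:T] @ y[1:T + 1]) / (y[:T] @ y[:T])
    u = y[1:T + 1] - rho * y[:T]                 # AR residuals u_t(h)
    se = np.sqrt((xi ** 2 * u ** 2).sum()) / (u @ u)
    return coef[0], se

def lp_residual_bootstrap_cv(y, h, alpha=0.10, B=1000, seed=0):
    """Bootstrap critical value c_n^*(h, 1 - alpha), Steps 1-4 with k = p = 1."""
    rng = np.random.default_rng(seed)
    n = len(y) - 1
    # Step 1: full-sample AR(1) fit and centered residuals.
    rho_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
    resid = y[1:] - rho_hat * y[:-1]
    resid -= resid.mean()
    beta_implied = rho_hat ** h                  # impulse response implied by the fit
    roots = np.empty(B)
    for b in range(B):
        # Step 2: regenerate a sample of size n from the fitted model;
        # the initial observation is drawn from the original data.
        y_star = np.empty(n + 1)
        y_star[0] = rng.choice(y)
        u_star = rng.choice(resid, size=n)       # i.i.d. draws from (30)
        for t in range(1, n + 1):
            y_star[t] = rho_hat * y_star[t - 1] + u_star[t - 1]
        # Step 3: studentized bootstrap root on the bootstrap sample.
        beta_b, se_b = lp_slope_and_se(y_star, h)
        roots[b] = abs(beta_b - beta_implied) / se_b
    # Step 4: empirical (1 - alpha) quantile of the |R*| draws.
    return np.quantile(roots, 1 - alpha)
```

A symmetric confidence interval then takes $\hat {\beta }_n(h) \pm c_n^*(h,1-\alpha )\,\hat {s}_n(h)$, as in (11) for the AR(1) case.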

The theoretical properties of the bootstrap confidence interval defined in (26) are unknown for general VAR models. However, Monte Carlo simulations presented in Appendix E.1 of the Supplementary Material suggest that confidence intervals based on the LP-residual bootstrap perform better in terms of coverage probability than those based on first-order asymptotic theory. Remarks 4.4 and 5.5 provide further discussion on how to extend some of the results presented in this article to general VAR models.

Remark 7.1. Montiel Olea and Plagborg-Møller (Reference Montiel Olea and Plagborg-Møller2021) proposed a different bootstrap confidence interval for the impulse response coefficients of VAR(p) models. As we discussed in Remark 3.2, they use a wild bootstrap procedure—which we refer to as the LP wild bootstrap—to define the bootstrap shocks used to generate the bootstrap sample with an estimated VAR model (similar to Step 2 above). They use the LP wild bootstrap to construct equal-tailed percentile-t confidence intervals, which differ from the symmetric percentile-t confidence interval defined in (26); we recommend the latter for the same reasons presented in Remark 3.3 and based on our theoretical results for the AR(1) model. To our knowledge, the theoretical properties of the LP wild bootstrap procedure and the confidence intervals proposed by Montiel Olea and Plagborg-Møller (Reference Montiel Olea and Plagborg-Møller2021) remain unknown. We include their recommended confidence intervals in the simulations presented in Appendix E.1 of the Supplementary Material.

8 CONCLUDING REMARKS

This article contributes to a growing literature on confidence interval construction for impulse response coefficients based on the LP approach. Specifically, we propose the LP-residual bootstrap method to construct confidence intervals for the impulse response coefficients of AR(1) models at intermediate horizons. We prove two theoretical properties of this method: uniform consistency and asymptotic refinements. For a large class of AR(1) models that allow for a unit root, conditional heteroskedasticity of unknown form, and martingale difference shocks, we show that the proposed confidence interval $C_n^*(h,1-\alpha )$ defined in (11) has an asymptotic coverage probability equal to its nominal level $1-\alpha $ uniformly over the parameter space (e.g., $\rho \in [-1,1]$ ) and a wide range of intermediate horizons. For a restricted class of AR(1) models (e.g., $|\rho |\le 1-a,$ where $a \in (0,1)$ and i.i.d. shocks with positive continuous density), we demonstrate that the ECP of $C_n^*(h,1-\alpha )$ has size $o(n^{-1})$ , that is, the LP-residual bootstrap provides asymptotic refinements to the confidence intervals.

This article considered the AR(1) model as the first step in understanding the theoretical properties of the LP-residual bootstrap. Three possible directions exist for future research. First, the uniform consistency of the LP-residual bootstrap method is an open question for the general VAR model. This bootstrap method is described in Section 7. Second, the asymptotic refinement property of this method is unknown for the unit-root model ( $\rho =1$ ) or general VAR models. Third, future work is needed to prove the uniform consistency of the LP-wild bootstrap discussed in Remark 3.2.

A PROOFS OF RESULTS IN MAIN TEXT

A.1 Proof of Theorem 4.1

We prove a stronger result:

$$ \begin{align*}\sup_{|\rho|\le 1} P_{\rho}\left( \sup_{h \le h_n} ~ \sup_{|\tilde{\rho}|\le 1} ~ \sup_{x\in \mathbf{R}} | J_n(x,h,P,\tilde{\rho}) - J_n(x,h,\hat{P}_n,\hat{\rho}_n) |> \epsilon \right) \to 0 \quad \text{as} \quad n \to \infty,\end{align*} $$

which is sufficient to conclude (16). The proof has three steps.

Step 1: Let $E_{n,1} = \{ g(\rho ,n) ~n^{1/2}~ | \hat {\rho }_n - \rho |> M\}$ , $E_{n,2} = \{ | n^{-1} \sum _{t=1}^n \tilde {u}_t^2 - \sigma ^2 |> \sigma ^2/2 \}$ , and $E_{n,3} = \{ n^{-1} \sum _{t=1}^n \tilde {u}_t^4> \tilde {K}_4 \} $ be events, where M and $\tilde {K}_4$ are constants defined next. Fix $\eta>0$ . We use Lemma B.1 to guarantee the existence of M, $\tilde {K}_4$ , and $N_0 = N_0(\eta )$ such that $ P_{\rho }(E_{n,j}) < \eta /3$ for $j=1,2,3$ , $n \ge N_0$ and $\rho \in [-1,1]$ . Define $E_n = E_{n,1}^c \cap E_{n,2}^c \cap E_{n,3}^c$ . By construction $P_{\rho }(E_n )> 1 - \eta $ for $n \ge N_0$ and for any $\rho \in [-1,1]$ .

Step 2: Conditional on the event $E_n$, we have $ |\hat {\rho }_n - \rho | \le M n^{-1/2}/g(\rho ,n)$ for $n \ge N_0$ and for any $\rho \in [-1,1]$. Therefore, conditional on the event $E_n$, we can use Lemma B.2 to conclude the existence of $\tilde {M}$ and $N_1 \ge N_0$ such that $ |\hat {\rho }_n| \le 1 + \tilde {M}/n$ for all $n \ge N_1$. Note also that conditional on the event $E_n$, the distribution $\hat {P}_n$ of the centered residuals defined in (13) verifies Assumption B.1 taking $K_4 = \tilde {K}_4$, $\underline {\sigma } = \sigma ^2/2$, and $\overline {\sigma } = 3\sigma ^2/2$, i.e., $\hat {P}_n \in \mathbf {P}_{n,0}$, where $\mathbf {P}_{n,0}$ is defined in Appendix B.2.

Step 3: We use Theorem B.1 taking $M = \tilde {M}$ . This implies that for any $\epsilon>0$ , there exists $N_2 =N_2(\epsilon ,\eta ) \ge N_1$ such that $ \sup _{x \in \mathbf {R}} ~ \left | J_n(x,h,P_n,\rho ) - \Phi (x) \right | < \epsilon /2, $ for any $n \ge N_2$ , $|\rho | \le 1 + \tilde {M}/n$ , $h \le h_n \le n$ and $h_n = o \left (n\right )$ , and $P_n \in \mathbf {P}_{n,0}$ . Conditional on $E_n$ , we have $\hat {P}_n \in \mathbf {P}_{n,0}$ due to Step 2, then

(A.1) $$ \begin{align} \sup_{h \le h_n} \: \sup_{x \in \mathbf{R}} ~ \left| J_n(x,h,\hat{P}_n,\hat{\rho}_n) - \Phi(x) \right| < \epsilon/2, \end{align} $$

for any $n \ge N_2$ , $ h_n \le n,$ and $h_n = o\left (n\right ) $ . By (9), there exists $N_3 \ge N_2$ such that

$$ \begin{align*}\sup_{h \le h_n} \: \sup_{ \tilde{\rho} \in [-1,1]} ~ \sup_{x \in \mathbf{R}} ~ \left| J_n(x,h,P,\tilde{\rho}) - \Phi(x) \right| < \epsilon/2,\end{align*} $$

for any $n \ge N_3$ , $ h_n \le n$ , and $h_n = o\left (n\right )$ . Therefore, conditional on the event $E_n$ and using the triangle inequality, we conclude that

$$ \begin{align*}\sup_{h \le h_n} \: \sup_{\tilde{\rho} \in [-1,1]} ~ \sup_{x \in \mathbf{R}} ~ \left| J_n(x,h,P,\tilde{\rho}) - J_n(x,h,\hat{P}_n,\hat{\rho}_n) \right| < \epsilon,\end{align*} $$

for any $n \ge N_3$, $h_n \le n$, and $h_n = o\left (n\right )$. Since $P_{\rho }(E_n) \ge 1-\eta $ for any $\rho \in [-1,1]$, the previous conclusion implies

$$ \begin{align*}\inf_{\rho \in [-1,1]} ~ P_{\rho} \left( \sup_{h \le h_n} \: \sup_{\tilde{\rho} \in [-1,1]} ~ \sup_{x\in \mathbf{R}} \: \left| J_n(x,h,P,\tilde{\rho}) - J_n(x,h,\hat{P}_n,\hat{\rho}_n) \right| < \epsilon ~ \right ) \ge 1- \eta,\end{align*} $$

for any $n \ge N_3$ , $h_n \le n,$ and $h_n = o\left (n\right )$ , which concludes the proof of the theorem.

A.2 Proof of Theorem 4.2

By Lemma B.3, for any $\epsilon>0$ , there exists $N_0 = N_0(\epsilon )$ such that

(A.2) $$ \begin{align} P_{\rho} \left( z_{1-\alpha/2-\epsilon/2} \le c_n^{*}(h,1-\alpha) \le z_{1-\alpha/2+\epsilon/2} \right) \ge 1 - \epsilon, \end{align} $$

for any $n \ge N_0$, $\rho \in [-1,1]$, and any $h \le h_n \le n$ with $h_n = o\left (n\right )$. Assumptions 4.1 and 4.2 guarantee (9); therefore, there exists $N_1 \ge N_0$ such that

(A.3) $$ \begin{align} P_{\rho}\left( |R_n(h)| \le z_{1-\alpha/2+\epsilon/2} \right) \le 1-\alpha+2\epsilon \quad \text{and} \quad P_{\rho}\left( |R_n(h)| \le z_{1-\alpha/2-\epsilon/2} \right) \ge 1-\alpha-2\epsilon, \end{align} $$

for any $n \ge N_1$ , $\rho \in [-1,1]$ and any $h \le h_n \le n $ and $h_n = o\left (n\right )$ . Consider the derivation

$$ \begin{align*} P_{\rho}\left( \beta(\rho,h) \in C_n^*(h,1-\alpha) \right) &= P_{\rho}\left( |R_n(h)| \le c_n^*(h,1-\alpha) \right) \\&= P_{\rho}\left( |R_n(h)| \le c_n^*(h,1-\alpha) , c_n^*(h,1-\alpha)> z_{1-\alpha/2 +\epsilon/2} \right) \\&~+ P_{\rho}\left( |R_n(h)| \le c_n^*(h,1-\alpha) , c_n^*(h,1-\alpha) \le z_{1-\alpha/2+\epsilon/2} \right) \\&\le P_{\rho}\left( c_n^*(h,1-\alpha) > z_{1-\alpha/2 + \epsilon/2} \right) + P_{\rho}\left( |R_n(h)| \le z_{1-\alpha/2+\epsilon/2} \right) \\& \le \epsilon + 1-\alpha + 2\epsilon, \end{align*} $$

where the last inequality follows by (A.2) and (A.3). Similarly, we obtain the inequality

$$ \begin{align*}P_{\rho}\left( |R_n(h)| \le z_{1-\alpha/2-\epsilon/2} \right) \le P_{\rho}\left( \beta(\rho,h) \in C_n^*(h,1-\alpha) \right) + P_{\rho}\left( c_n^*(h,1-\alpha) < z_{1-\alpha/2-\epsilon/2} \right)\!,\end{align*} $$

which implies that $P_{\rho }\left ( \beta (\rho ,h) \in C_n^*(h,1-\alpha ) \right ) \ge 1-\alpha -2\epsilon - \epsilon $ . We conclude that for any $n \ge N_1$ , $\rho \in [-1,1]$ and any $h \le h_n \le n $ and $h_n = o\left (n\right )$ , we have

$$ \begin{align*}| P_{\rho}\left( \beta(\rho,h) \in C_n^*(h,1-\alpha) \right) - (1-\alpha)| \le 3\epsilon,\end{align*} $$

which completes the proof of Theorem 4.2.

A.3 Proof of Theorem 5.1

We first show that $J_n(x,h,P,\rho )$ admits a valid Edgeworth expansion, that is,

(A.4) $$ \begin{align} \sup_{x\in \textbf{R}} \left|J_n(x,h,P,\rho) - \left( \Phi(x) + \sum_{j=1}^2 n^{-j/2}q_j(x,h,P,\rho) \phi(x) \right) \right| = O\left(n^{-1-\epsilon} \right)~ \end{align} $$

for some $\epsilon \in (0,1/2)$, where $q_j(x,h,P,\rho )$ are polynomials in x with coefficients that are continuous functions of the moments of P (up to order 12) and $\rho $. Furthermore, we have $q_1(x,h,P,\rho ) = q_1(-x,h,P,\rho )$ and $q_2(x,h,P,\rho ) = -q_2(-x,h,P,\rho )$.

To show (A.4), we first use Lemma B.4 to approximate $J_n(x,h,P,\rho )$ by $\tilde {J}_n(x,h,P,\rho )$ ,

$$ \begin{align*}\sup_{x\in \textbf{R}} |J_n(x,h,P,\rho) - \tilde{J}_n(x,h,P,\rho)| = D_n + O\left(n^{-1-\epsilon} \right)\!,\end{align*} $$

for some $\epsilon \in (0,1/2)$ , where

$$ \begin{align*}D_n = \sup_{x \in \mathbf{R}} \left| \tilde{J}_n(x+n^{-1-\epsilon},h,P,\rho) - \tilde{J}_n(x-n^{-1-\epsilon},h,P,\rho) \right|\!.\end{align*} $$

Due to Theorem B.2, we can conclude $D_n = O\left (n^{-1-\epsilon } \right )$ . We then use Theorem B.2 to approximate $\tilde {J}_n(x,h,P,\rho )$ by a valid Edgeworth expansion,

$$ \begin{align*} \sup_{x \in \textbf{R}} \left| \tilde{J}_n(x,h,P,\rho) - \left( \Phi(x) + \sum_{j=1}^2 n^{-j/2}q_j(x,h,P,\rho) \phi(x) \right) \right| = O\left(n^{-3/2} \right)\!. \end{align*} $$

Note that we can use Theorem B.2 since Assumption 5.1 implies Assumption B.2 and the distribution $\tilde {J}_n(x,h,P,\rho )$ that we obtain from Lemma B.4 satisfies the required conditions. We conclude (A.4) by the triangle inequality. The polynomials $q_j$ that appear in (A.4) are the polynomials in the Edgeworth expansion of $\tilde {J}_n(x,h,P,\rho )$.

Now, we show that $P_{\rho } \left ( |R_n(h)| \le x \right )$ also admits an asymptotic approximation, that is,

(A.5) $$ \begin{align} \sup_{x\in \textbf{R}} \left| P_{\rho} \left( |R_n(h)| \le x \right) - \left( 2\Phi(x) - 1 + 2n^{-1} q_2(x,h,P,\rho)\phi(x) \right)\right| = O\left( n^{-1-\epsilon}\right)\!, \end{align} $$

where $q_2(x,h,P,\rho )$ and $\epsilon \in (0,1/2)$ are defined in (A.4). Note that (24) follows from (A.5) since we can write (24) as follows:

$$ \begin{align*}\left| P_{\rho} \left( |R_n(h)| \le z_{1-\alpha/2} \right) - (1-\alpha) \right| = O \left( n^{-1} \right)\!,\end{align*} $$

and the previous expression is what we obtain taking $x=z_{1-\alpha /2}$ in (A.5), where we used that $1-\alpha = 2\Phi (z_{1-\alpha /2}) - 1$ holds by definition of $z_{1-\alpha /2}$ .

To show (A.5), we first write

$$ \begin{align*}P_{\rho} \left( |R_n(h)| \le x \right) = J_n(x,h,P,\rho) -J_n(-x,h,P,\rho) + r_n(x),\end{align*} $$

where $r_n(x) = P_{\rho } \left ( R_n(h) = -x \right )$ . We then use (A.4) to approximate $J_n(\cdot ,h,P,\rho ) $ and the properties of the polynomials $q_j(\cdot ,h,P,\rho )$ to obtain the following approximation:

$$ \begin{align*} \sup_{x\in \textbf{R}} \left| P_{\rho} \left( |R_n(h)| \le x \right) - \left( 2\Phi(x) - 1 + 2n^{-1} q_2(x,h,P,\rho)\phi(x) + r_n(x) \right) \right| = O\left( n^{-1-\epsilon}\right)\!. \end{align*} $$

Finally, $\sup _{x\in \textbf {R}} r_n(x) = O\left (n^{-1-\epsilon }\right )$ since $ r_n(x) \le P_{\rho } \left ( R_n(h) \in (-x - n^{-1-\epsilon },-x] \right ) $ and (A.4) holds. We use this in the previous expression to complete the proof of (A.5).

A.4 Proof of Theorem 5.2

The proof has two parts. In the first part, we assume that $P(|\Delta _n|> C_1 n^{-1-\epsilon }) \le C_2 n^{-1-\epsilon }$ for some constants $C_1$ and $C_2$ , where $\Delta _n = c_n^*(h,1-\alpha ) - c_n(h,1-\alpha )$ . We use this assumption to prove the theorem with an error of size $O(n^{-(1+\epsilon )})$ for any $\epsilon \in (0,1/2)$ , which is sufficient to conclude. In the second part, we prove the assumption of the first part.

Part 1: By (11), we have $P_{\rho } ( \beta (\rho ,h) \in C_n^*(h,1-\alpha ) ) = P_{\rho } ( |R_n(h)| \le c_n^*(h,1-\alpha ) ) $ . We can write this term as the sum of $ P_{\rho } ( |R_n(h)| \le c_n(h,1-\alpha ) + \Delta _n, \left | \Delta _n \right | \le C_1 n^{-1-\epsilon } )$ and $P_{\rho } ( |R_n(h)| \le c_n(h,1-\alpha ) + \Delta _n, \left | \Delta _n \right |> C_1 n^{-1-\epsilon } ) $ . We conclude $ P_{\rho } ( \beta (\rho ,h) \in C_n^*(h,1-\alpha ) )$ is equal to

$$ \begin{align*}P_{\rho} \left( |R_n(h)| \le c_n(h,1-\alpha) + \Delta_n, \left| \Delta_n \right| \le C_1 n^{-1-\epsilon} \right) +O\left(n^{-1-\epsilon} \right)\!.\end{align*} $$

By (A.5) in the proof of Theorem 5.1, we have

$$ \begin{align*}P_{\rho} \left( |R_n(h)| \le x + z n^{-1-\epsilon} \right) = P_{\rho} \left( |R_n(h)| \le x \right) + O\left(n^{-1-\epsilon} \right)\end{align*} $$

for $z = -C_1,C_1$ and any $x \in \mathbf {R}$ . Since

$$ \begin{align*}P_{\rho} \left( |R_n(h)| \le x + \Delta_n, \left| \Delta_n \right| \le C_1 n^{-1-\epsilon} \right) \le P_{\rho} \left( |R_n(h)| \le x + C_1 n^{-1-\epsilon} \right)\end{align*} $$

and

$$ \begin{align*}P_{\rho} \left( |R_n(h)| \le x + \Delta_n, \left| \Delta_n \right| \le C_1 n^{-1-\epsilon} \right) \ge P_{\rho} \left( |R_n(h)| \le x - C_1 n^{-1-\epsilon} \right) + O\left(n^{-1-\epsilon} \right)\!,\end{align*} $$

we conclude $P_{\rho } ( |R_n(h)| \le x + \Delta _n, \left | \Delta _n \right | \le C_1 n^{-1-\epsilon } ) = P_{\rho } \left ( |R_n(h)| \le x \right ) + O (n^{-1-\epsilon } )$. Taking $x = c_n(h,1-\alpha ) $ and using that $P_{\rho } \left ( |R_n(h)| \le c_n(h,1-\alpha ) \right ) = 1-\alpha $ (due to part (ii) of Assumption 5.1), we conclude $ P_{\rho } \left ( \beta (\rho ,h) \in C_n^*(h,1-\alpha ) \right ) = 1-\alpha + O (n^{-1-\epsilon } )$.

Part 2: Fix $\epsilon \in (0,1/2)$ . Define $E_{n,1} = \{ |\hat {\rho }_n| \le 1- a/2\}$ , $E_{n,2} = \{ n^{-1} \sum _{t=1}^n \tilde {u}_t^2 \ge \tilde {C}_\sigma \}$ , $E_{n,3} = \{ n^{-1} \sum _{t=1}^n \tilde {u}_t^{4k} \le M \}$ , and $E_{n,4} = \{ \max _{ 1 \le r \le 12 } |n^{-1} \sum _{t=1}^n \tilde {u}_t^r - E[u_t^r]| \le n^{-\epsilon }\}$ , where $\tilde {C}_\sigma $ and M are as in Lemma B.5. Define $E_{n} = E_{n,1} \cap E_{n,2} \cap E_{n,3} \cap E_{n,4}$ . By Lemma B.5 and Assumption 5.1, it follows that $P(E_n^c) \le C_2 n^{-1-\epsilon }$ for some constant $C_2=C_2(a,h,k,C_\sigma , \epsilon ,c_u)$ . Note that conditional on the event $E_n$ , we can use Lemma B.4 for the distribution of the bootstrap root $R_n^*(h)$ . That is,

$$ \begin{align*}\sup_{x \in \mathbf{R}} |J_n(x,h,\hat{P}_n,\hat{\rho}_n) - \tilde{J}_n(x,h,\hat{P}_n,\hat{\rho}_n)| \le D_n + n^{-1-\epsilon} C\left( n^{-1} \sum_{t=1}^{n} \left( |\tilde{u}_t|^{k} + \tilde{u}_t^{2k} + \tilde{u}_t^{4k} \right) \right)\!,\end{align*} $$

for some constant $C $ , where

$$ \begin{align*}D_n = \sup_{x \in \mathbf{R}} \left| \tilde{J}_n(x+n^{-1-\epsilon},h,\hat{P}_n,\hat{\rho}_n) - \tilde{J}_n(x-n^{-1-\epsilon},h,\hat{P}_n,\hat{\rho}_n) \right|\!.\end{align*} $$

By Theorem B.3, there is an Edgeworth expansion for $\tilde {J}_n(x,h,\hat {P}_n,\hat {\rho }_n)$ conditional on $E_n$. This implies $D_n \le C n^{-1-\epsilon }$ conditional on $E_n$, for some constant $C $. Similarly, conditional on $E_n$, $ n^{-1} \sum _{t=1}^{n} \left ( |\tilde {u}_t|^{k} + \tilde {u}_t^{2k} + \tilde {u}_t^{4k} \right ) \le C$ for some constant C that depends on M. We conclude that, conditional on $E_n$, $J_n(x,h,\hat {P}_n,\hat {\rho }_n)$ has the following Edgeworth expansion:

$$ \begin{align*}\sup_{x \in \mathbf{R}} \left|J_n(x,h,\hat{P}_n,\hat{\rho}_n) - \left( \Phi(x) + \sum_{j=1}^2 n^{-j/2}q_j(x,h,\hat{P}_n,\hat{\rho}_n) \phi(x) \right) \right| \le C n^{-1-\epsilon}\!.\end{align*} $$

The properties of $q_j(x,h,\hat {P}_n,\hat {\rho }_n)$ from Theorem B.3 and arguments from the proof of Theorem 5.1 imply

$$ \begin{align*}\sup_{x \in \mathbf{R}} \left| P_{\rho} \left( |R_n^*(h)| \le x \mid Y^{(n)} \right) - \left( 2\Phi(x) - 1 + 2n^{-1} q_2(x,h,\hat{P}_n,\hat{\rho}_n)\phi(x) \right)\right| \le C n^{-1-\epsilon}. \end{align*} $$

Recall that the coefficients of $q_2(x,h,\hat {P}_n,\hat {\rho }_n)$ are polynomials in the moments of $\hat {P}_n$ (up to order 12) and $\hat {\rho }_n$. Conditional on $E_n$, the moments of $\hat {P}_n$ are close to the moments of P: $ |n^{-1} \sum _{t=1}^n \tilde {u}_t^r - E[u_t^r]| \le n^{-\epsilon }$ for $r=1,\dots ,12$. Therefore, conditional on $E_n$, we have

$$ \begin{align*}\sup_{x \in \mathbf{R}} \left| P_{\rho} \left( |R_n^*(h)| \le x \mid Y^{(n)} \right) - \left( 2\Phi(x) - 1 + 2n^{-1} q_2(x,h,P,\rho)\phi(x) \right)\right| \le C n^{-1-\epsilon},\end{align*} $$

for some constant C. By (A.5) in the proof of Theorem 5.1, the previous inequality, and the definition of $c_n^*(h,1-\alpha )$ and $c_n(h,1-\alpha )$ as quantiles, we conclude that

$$ \begin{align*}|c_n^*(h,1-\alpha) - c_n(h,1-\alpha)| \le C_1 n^{-1-\epsilon}\end{align*} $$

for some constant $C_1$ . This completes the proof of our assumption in part 1.

B AUXILIARY RESULTS

B.1 Lemmas

Lemma B.1. Suppose Assumptions 4.1 and 4.2 hold. Then, for any fixed $\eta>0$ , there exist constants $M>0$ , $\tilde {K}_4>0$ , and $N_0 = N_0(\eta )$ such that:

1. $ P_{\rho } \left ( g(\rho ,n) ~n^{1/2}~ | \hat {\rho }_n - \rho |> M \right ) < \eta ,$

2. $ P_{\rho } \left ( \left | n^{-1} \sum _{t=1}^n \tilde {u}_t^2 - \sigma ^2 \right |> \sigma ^2/2 \right ) < \eta ,$

3. $ P_{\rho } \left ( n^{-1} \sum _{t=1}^n \tilde {u}_t^4> \tilde {K}_4 \right ) < \eta ,$

for $n \ge N_0$ and $\rho \in [-1,1]$ , where $g(\rho ,k) = \left (\sum _{\ell =0}^{k-1} \rho ^{2 \ell }\right )^{1/2}$ , $\hat {\rho }_n$ is as in (12), and $\{ \tilde {u}_t: 1 \le t \le n\}$ are centered residuals as in (13).

Proof. See Section C.1 of the Supplementary Material.

Lemma B.2. Let $M>0$ be fixed. Suppose that for any $\rho \in [-1,1],$ we have

$$ \begin{align*} |\hat{\rho}_n - \rho| \le \frac{M}{n^{1/2}g(\rho,n)}, \end{align*} $$

where $g(\rho ,k) = \left (\sum _{\ell =0}^{k-1} ~ \rho ^{2 \ell }\right )^{1/2}$ . Then, there exist constants $\tilde {M} = \tilde {M}(M)>0$ and ${N_0=N_0(M)>0}$ such that $ |\hat {\rho }_n| \le 1 + \tilde {M}/n~ $ for all $n \ge N_0$ .

Proof. See Section C.2 of the Supplementary Material.

Lemma B.3. Suppose Assumptions 4.1 and 4.2 hold. Fix $\epsilon>0$ . Then, for any $\alpha \in (0,1)$ and for any sequence $h_n \le n$ such that $h_n = o\left (n\right )$ , we have

1. $\lim _{n \to \infty } ~ \inf _{h \le h_n} ~ \inf _{\rho \in [-1,1]} P_{\rho } \left ( z_{1-\alpha /2-3\epsilon /2} \le c_n^*(h,1-\alpha ) \le z_{1-\alpha /2+3\epsilon /2} \right ) = 1,$

2. $\lim _{n \to \infty } ~ \inf _{h \le h_n} ~ \inf _{\rho \in [-1,1]} P_{\rho } \left ( z_{\alpha _0-\epsilon /2} \le q_n^*(h,\alpha _0) \le z_{\alpha _0+\epsilon /2} \right ) = 1,$

where $z_{\alpha _0}$ is the $\alpha _0$ -quantiles of the standard normal distribution, $c_n^*(h,1-\alpha ) $ is as in (15), and $q_n^*(h,\alpha _0)$ is the $\alpha _0$ -quantile of $R_{b,n}^*(h)$ defined in (14).

Proof. See Section C.3 of the Supplementary Material.

Lemma B.4. Suppose Assumption 5.1 holds. Fix $h \in \mathbf {N}$ and $a \in (0,1)$. Then, for any $\rho \in [-1+a,1-a]$ and $\epsilon \in (0,1/2)$, there exist a constant $C = C(a,h,k,C_\sigma )>0$ and a real-valued function

$$ \begin{align*}\mathcal{T}(\cdot;\sigma^2,\psi_4^4,\rho) : \mathbf{R}^8 \to \mathbf{R},\end{align*} $$

such that:

1. $\mathcal {T}(\mathbf {0};\sigma ^2,\psi _4^4,\rho ) = 0$,

2. $\mathcal {T}(x;\sigma ^2,\psi _4^4,\rho ) $ is a polynomial of degree 3 in $x \in \mathbf {R}^8$ with coefficients that are continuously differentiable functions of $\sigma ^2$, $\psi _4^4$, and $\rho $,

3. $ \sup _{x \in \mathbf {R}} |J_n(x,h,P,\rho ) - \tilde {J}_n(x,h,P,\rho )| \le D_n + n^{-1-\epsilon } C\left (E[|u_t|^{k}] + E[u_t^{2k}] + E[u_t^{4k}] \right ),$

where $\sigma ^2 = E_P[u_1^2]$ , $\psi _4^4 = E_P[u_1^4]$ , $k \ge 8(1+\epsilon )/(1-2\epsilon )$ ,

$$ \begin{align*}\tilde{J}_n(x,h,P,\rho) \equiv P_{\rho} \left( (n-h)^{1/2} \mathcal{T} \left( \frac{1}{n-h} \sum_{t=1}^{n-h} X_t ;\sigma^2,\psi_4^4,\rho \right) \le x \right)\!,\end{align*} $$

and

$$ \begin{align*}D_n = \sup_{x \in \mathbf{R}} \left| \tilde{J}_n(x+n^{-1-\epsilon},h,P,\rho) - \tilde{J}_n(x-n^{-1-\epsilon},h,P,\rho) \right|\!.\end{align*} $$

The sequence $\{ X_t : 1 \le t \le n-h \}$ is defined in (B.4). Furthermore, the asymptotic variance of $(n-h)^{1/2} \mathcal {T}( (n-h)^{-1} \sum _{t=1}^{n-h} X_t ;\sigma ^2,\psi _4^4,\rho )$ is equal to one.

Proof. See Section D.1 of the Supplementary Material.

Lemma B.5. Suppose Assumption 5.1 holds. Fix $h \in \mathbf {N}$ and $a \in (0,1)$. Then, for any $|\rho |\le 1-a$ and $\epsilon \in (0,1/2)$, there exist $ {C} = C(a,k,h,C_\sigma ,\epsilon ,c_u)$, $ \tilde {C}_{\sigma }$, and M such that:

1. $P \left ( \left |\hat {\rho }_n \right |> 1-a/2 \right ) \le {C} n^{-1-\epsilon }, $

2. $P \left ( \left | n^{-1} \sum _{t=1}^{n} \tilde {u}_t^{r} - E[u_t^r] \right |> n^{-\epsilon } \right ) \le {C} n^{-1-\epsilon } $,

3. $P \left ( n^{-1} \sum _{t=1}^{n} \tilde {u}_t^2 < \tilde {C}_{\sigma } \right ) \le {C} n^{-1-\epsilon } $,

4. $P \left ( n^{-1} \sum _{t=1}^{n} \tilde {u}_t^{4k}> M \right ) \le {C} n^{-1-\epsilon }$,

for fixed $r \ge 1$ , $k \ge 8(1+\epsilon )/(1-2\epsilon )$ , where $\hat {\rho }_n$ and the centered residuals $\{ \tilde {u}_t: 1 \le t \le n\}$ are defined in (12) and (13), respectively.

Proof. See Section D.2 of the Supplementary Material.

B.2 Uniform Consistency

For any fixed $M>0$ , consider the sequence of models:

$$ \begin{align*}y_{n,t} = \rho_n y_{n,t-1} + u_{n,t},~ y_{n,0} = 0,\quad \text{and} \quad \rho_n \in [-1-M/n,1+M/n],\end{align*} $$

where $\{u_{n,t} : 1 \le t \le n\}$ is a sequence of shocks with probability distribution denoted by $P_n$ . We use $P_n$ and $E_n$ to compute, respectively, probabilities and expected values of the sequence $\{(y_{n,t},u_{n,t}) : 1 \le t \le n \}$ . This appendix presents results for a sequence of AR(1) models.

We extend the notation introduced in Section 2 to the sequence of models. For any fixed $h<n$, the coefficients in the linear regression of $y_{n,t+h}$ on $(y_{n,t},y_{n,t-1})$ are defined by

(B.1) $$ \begin{align} \begin{pmatrix} \hat{\beta}_n(h) \\ \hat{\gamma}_n(h) \end{pmatrix} = \left( \sum_{t=1}^{n-h} x_{n,t} x_{n,t}' \right)^{-1} \left( \sum_{t=1}^{n-h} x_{n,t} y_{n,t+h} \right)\!, \end{align} $$

where $x_{n,t} \equiv (y_{n,t}, y_{n,t-1})'$. The HC standard error $\hat {s}_n(h)$ is defined by

$$ \begin{align*} \hat{s}_n(h) \equiv \left( \sum_{t=1}^{n-h} \hat{u}_{n,t}(h)^2 \right)^{-1/2} \left(\sum_{t=1}^{n-h} \hat{\xi}_{n,t}(h)^2 \hat{u}_{n,t}(h)^2 \right)^{1/2} \left( \sum_{t=1}^{n-h} \hat{u}_{n,t}(h)^2 \right)^{-1/2}, \end{align*} $$

where $\hat {\xi }_{n,t}(h) = y_{n,t+h} - \hat {\beta }_n(h) y_{n,t} - \hat {\gamma }_n(h) y_{n,t-1}$ , $\hat {u}_{n,t}(h) = y_{n,t} - \hat {\rho }_n(h) y_{n,t-1}$ , and $\hat {\rho }_n(h)$ is defined as

(B.2) $$ \begin{align} \hat{\rho}_n(h) \equiv \left( \sum_{t=1}^{n-h} y_{n,t-1}^2 \right)^{-1} \left( \sum_{t=1}^{n-h} y_{n,t} y_{n,t-1} \right)\!. \end{align} $$

For any fixed positive constants $K_4>0$ and $ \overline {\sigma } \ge \underline {\sigma }>0$ , we consider the next assumption that imposes restrictions on the distribution of the shocks $P_n$ .

Assumption B.1.

i) $\{u_{n,t}: 1 \le t \le n \}$ are i.i.d. random variables with mean zero and variance $\sigma _n^2$.

ii) ${E}_n[u_{n,t}^4]< K_4$ and $\sigma _n^2 \in [\underline {\sigma },\:\overline {\sigma }]$.

We denote by $\mathbf {P}_{n,0} $ the set of all distributions $P_n$ that verify Assumption B.1. Theorem B.1 below shows that the results presented in Xu (Reference Xu2023) and Montiel Olea and Plagborg-Møller (Reference Montiel Olea and Plagborg-Møller2021) also hold for sequences of AR(1) models with i.i.d. shocks. We adapt their proof and simplify some steps based on our stronger assumptions on the serial dependence of the shocks. For instance, we assume only bounded 4th moments, while they assume at least bounded 8th moments. One notable difference is that we do not need a high-level condition such as Assumption 4.2, since it can be verified using Assumption B.1; we state this result in the next proposition.

Proposition B.1. Suppose Assumption B.1 holds. Then, we have

$$ \begin{align*} \lim_{K \to \infty} ~ \lim_{n \to \infty} ~ \inf_{P_n \in \mathbf{P}_{n,0}} ~ \inf_{|\rho_n| \le 1+M/n} ~ P_n \left(~ g(\rho_n,n)^{-2} ~ n^{-1} \sum_{t=1}^{n}y_{n,t-1}^2 \ge 1/K ~\right) = 1, \end{align*} $$

where $g(\rho ,k) = \left (\sum _{\ell =0}^{k-1} ~ \rho ^{2 \ell }\right )^{1/2}$ .

Proof. See Section C.4 of the Supplementary Material.

Theorem B.1. Suppose Assumption B.1 holds. Then, for any sequence $h_n \le n$ such that $h_n = o\left (n\right )$ , we have

$$ \begin{align*} \sup_{h \le h_n} ~ \sup_{P_n \in \mathbf{P}_{n,0}} ~ \sup_{|\rho| \le 1+M/n} ~ \sup_{x \in \mathbf{R}} ~ \left| J_n(x,h,P_n,\rho) - \Phi(x) \right| \to 0, \quad \text{as} \quad n \to \infty, \end{align*} $$

where $J_n(\cdot ,h,P_n,\rho )$ is as in (7) and $\Phi (x)$ is the cdf of the standard normal distribution.

Proof. See Section C.5 of the Supplementary Material.

Proposition B.2. Suppose Assumption B.1 holds. In addition, assume $\rho _n = 1 - c_1/n$ and $h_n$ is such that $h_n \le n$ and $h_n/\sqrt {n} \to c_2 $ as $n \to \infty ,$ where $c_1,c_2>0$ . Then,

$$ \begin{align*}\liminf_{n \to \infty} P_n \left( [1/L,L] \subseteq C_{la-ar}^*(h_n,1-\alpha) \right) \ge 1-\alpha\end{align*} $$

for any $L> 1 $ , where $C_{la-ar}^*(h,1-\alpha )$ is defined in Remark 4.5, and presented below:

$$ \begin{align*}C_{la-ar}^*(h,1-\alpha) = \left[ (\hat{\beta}_n(1) - \hat{s}_n(1) c_n^*(1, 1-\alpha))^{h},~(\hat{\beta}_n(1) + \hat{s}_n(1) c_n^*(1, 1-\alpha))^{h} \right]\!.\end{align*} $$

Proof. See Section C.6 of the Supplementary Material.

B.3 Asymptotic Refinements

Consider the sequence $\{z_t: 1 \le t \le n\}$ defined as

$$ \begin{align*}z_t = \rho z_{t-1} + u_t, \quad \text{and} \quad z_0 = \sum_{\ell = 0}^{\infty} \rho^{\ell} u_{-\ell},\end{align*} $$

where $\{u_{-\ell } : \ell \ge 0 \}$ is an i.i.d. sequence with the same distribution as $u_1$ . This appendix presents asymptotic expansion results for distributions of real-valued functions based on sample averages of the sequence $\{X_t = F(z_{t-1},z_t,z_{t+h}) : 1 \le t \le n-h \}$ , where F is a function that we define below. Our approach in this section relies on the framework and results presented in Götze and Hipp (Reference Götze and Hipp1994) and Bhattacharya and Ghosh (Reference Bhattacharya and Ghosh1978).

Let $F(\cdot ~; \sigma ^2, V, \rho ): \mathbf {R}^3 \to \mathbf {R}^8$ be the function whose value at $(x,y,z)$ equals

(B.3) $$ \begin{align} \Big( &(z-\rho^h y) (y-\rho x),~ (y-\rho x)^2-\sigma^2, ~((z-\rho^h y) (y-\rho x))^2-V,~(z-\rho^h y) (y-\rho x)^3,~ \notag \\ &(y-\rho x) x,~ (z-\rho^h y) x,~ (z-\rho^h y) ^2 (y-\rho x) x,~ (z-\rho^h y) (y-\rho x)^2 x \Big), \end{align} $$

where $\sigma ^2 = \sigma ^2(P) = E_P[u_1^2]$, $V = V(\rho , h, P) = E_P[\xi _1^2 u_1^2]$, $\xi _1 = \xi _1(\rho ,h) \equiv \sum _{\ell =1}^h \rho ^{h-\ell } u_{1+\ell }$, and P is the distribution of the shocks, which verifies Assumption B.2 defined below. Using that $u_t = z_t - \rho z_{t-1}$, $\xi _t = z_{t+h} - \rho ^h z_t$, and the definition of F in (B.3), we can write the sequence of random vectors $\{X_t = F(z_{t-1},z_t,z_{t+h}; \sigma ^2, V, \rho ): 1 \le t \le n-h \}$ as follows:

(B.4) $$ \begin{align} X_t = (\xi_t u_t, u_t^2-\sigma^2,(\xi_tu_t)^2-V,\xi_t u_t^3, u_t z_{t-1}, \xi_t z_{t-1}, \xi_t^2 u_t z_{t-1}, \xi_t u_t^2 z_{t-1}). \end{align} $$

We assume in this section that $h \in \mathbf {N}$ is fixed and $|\rho |<1$ . Moreover, for any fixed positive constants $C_{18}>0$ and $C_{\sigma }>0$ , we consider the next assumption that imposes restrictions on the distribution of the shocks P.

Assumption B.2.

i) $\{u_t: 1 \le t \le n\}$ is independent and identically distributed with $E[u_t] =0$.

ii) $u_t$ has a positive continuous density.

iii) $E[u_t^{18}] \le C_{18} < \infty $ and $E[u_t^2] \ge C_{\sigma } $.

Assumption B.2 implies that the sequence $\{z_t: 1 \le t \le n\}$ is strictly stationary. By construction, $E[X_t] = \mathbf {0} \in \mathbf {R}^8$ . Define

(B.5) $$ \begin{align} \Sigma = \lim_{n \to \infty} Cov\left( (n-h)^{-1/2} \sum_{t=1}^{n-h} X_t \right)\!. \end{align} $$

The asymptotic covariance matrix $\Sigma $ is non-singular due to Lemma 2.1 in Götze and Hipp (Reference Götze and Hipp1994), Assumption B.2, and the definition of the sequence $\{X_t: 1 \le t \le n-h\}$. Let $\mathcal {T}: \mathbf {R}^8 \to \mathbf {R}$ be a polynomial with coefficients depending on $\rho $, $E_P[u_1^2]$, and $E_P[u_1^4]$ such that $\mathcal {T}(\mathbf {0}) = 0$. Define

(B.6) $$ \begin{align} \tilde{J}_n(x,h,P,\rho) \equiv P_{\rho} \left( \frac{(n-h)^{1/2}}{ \tilde{\sigma} } \mathcal{T} \left( \frac{1}{n-h} \sum_{t=1}^{n-h} X_t \right) \le x \right)\!, \end{align} $$

where $\tilde {\sigma }^2$ is the asymptotic variance of $(n-h)^{1/2} \mathcal {T}( (n-h)^{-1} \sum _{t=1}^{n-h} X_t )$ . The next theorem shows that the distribution $ \tilde {J}_n(\cdot ,h,P,\rho ) $ admits a valid Edgeworth expansion.

Theorem B.2. Suppose Assumption B.2 holds. Fix a given $h \in \mathbf {N}$ and $a \in (0,1)$ . Then, for any $\rho \in [-1+a, 1-a]$ , we have

$$ \begin{align*}\sup_{x \in \mathbf{R}} \left | \tilde{J}_n(x,h,P,\rho) - \left( \Phi(x) + \sum_{j=1}^2 n^{-j/2}q_j(x,h,P,\rho) \phi(x) \right) \right| = O \left( n^{-3/2} \right)\!,\end{align*} $$

where $\tilde {J}_n(x,h,P,\rho )$ is as in (B.6), $\Phi (x)$ and $\phi (x)$ are the cdf and pdf of the standard normal distribution, and $q_1(x,h,P,\rho )$ and $q_2(x,h,P,\rho )$ are polynomials in x with coefficients that are continuous functions of the moments of P (up to order 12) and $\rho $. Furthermore, we have $q_1(x,h,P,\rho ) = q_1(-x,h,P,\rho )$ and $q_2(x,h,P,\rho ) = -q_2(-x,h,P,\rho )$.

The proof of Theorem B.2 is presented in Section D.3 of the Supplementary Material. It relies on Götze and Hipp (Reference Götze and Hipp1983, Reference Götze and Hipp1994) to guarantee the existence of Edgeworth expansions for sample averages and on the results of Bhattacharya and Ghosh (Reference Bhattacharya and Ghosh1978) to complete the proof.

For the empirical distribution $\hat {P}_n$ defined in (13) and the estimator $\hat {\rho }_n$ defined in (12), we consider the bootstrap sequence $\{z_{b,t}^* : 1 \le t \le n \}$ defined as

$$ \begin{align*}z_{b,t}^* = \hat{\rho}_n z_{b,t-1}^* + u_{b,t}^*, \quad \text{and} \quad z_{b,0}^* = \sum_{\ell = 0}^{\infty} \hat{\rho}_n^{\ell} u_{b,-\ell}^*,\end{align*} $$

where $\{u_{b,j}^* : j \le n \}$ is an i.i.d. sequence drawn from the distribution $\hat {P}_n$. Define the sequence of random vectors $\{X_{b,t}^* = F(z_{b,t-1}^*,z_{b,t}^*,z_{b,t+h}^*; \hat {\sigma }_n^2, \hat {V}_n, \hat {\rho }_n) : 1 \le t \le n-h \} $, where $F(\cdot )$ is as in (B.3) and $\hat {\sigma }_n^2, \hat {V}_n, \hat {\rho }_n$ are defined using $\hat {P}_n$ and $ \hat {\rho }_n$.

Theorem B.3. Suppose Assumption 5.1 holds. Fix a given $h \in \mathbf {N}$ and $a \in (0,1)$ . Then, for any $\rho \in [-1+a, 1-a]$ and $\epsilon \in (0,1/2)$ , there exist constants $C_1$ and $C_2$ such that

$$ \begin{align*}P \left( \sup_{x \in \mathbf{R}} \left | \tilde{J}_n(x,h,\hat{P}_n,\hat{\rho}_n) - \left( \Phi(x) + \sum_{j=1}^2 n^{-j/2}q_j(x,h,\hat{P}_n,\hat{\rho}_n) \phi(x) \right) \right|> C_1 n^{-3/2} \right)\end{align*} $$

is at most $C_2 n^{-1-\epsilon}$, where $\tilde{J}_n(x,h,\cdot,\cdot)$ is as in (B.6) with $X_{b,t}^*$ replacing $X_t$, $\Phi(x)$ and $\phi(x)$ are the cdf and pdf of the standard normal distribution, and $q_1(x,h,\hat{P}_n,\hat{\rho}_n)$ and $q_2(x,h,\hat{P}_n,\hat{\rho}_n)$ are polynomials in x whose coefficients are continuous functions of $\hat{\rho}_n$ and of the moments of $\hat{P}_n$ (up to order 12). Furthermore, $q_1(x,h,\hat{P}_n,\hat{\rho}_n) = q_1(-x,h,\hat{P}_n,\hat{\rho}_n)$ and $q_2(x,h,\hat{P}_n,\hat{\rho}_n) = -q_2(-x,h,\hat{P}_n,\hat{\rho}_n)$.

The proof of Theorem B.3 is presented in Section D.4 of the Supplementary Material. It relies on Götze and Hipp (1983, 1994), Bhattacharya and Ghosh (1978), and Lemma B.5.
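As a final illustration (again ours, not the article's formal algorithm), the bootstrap law analyzed in Theorem B.3 can be approximated by Monte Carlo: generate many bootstrap paths with the sketch above and apply the studentized statistic to each. In the following sketch, `statistic` is a hypothetical placeholder for a function computing the scalar quantity in (B.6) from a simulated path.

```python
import numpy as np

def bootstrap_distribution(residuals, rho_hat, n, statistic, B=999, rng=None):
    """Monte Carlo approximation of the bootstrap distribution: evaluate
    `statistic` (a placeholder mapping a simulated path to the scalar
    quantity in (B.6)) on B paths from simulate_bootstrap_ar1 above."""
    rng = np.random.default_rng(rng)
    draws = np.empty(B)
    for b in range(B):
        path = simulate_bootstrap_ar1(residuals, rho_hat, n, rng=rng)
        draws[b] = statistic(path)
    return np.sort(draws)
```

Empirical quantiles of the returned draws, e.g., `np.quantile(draws, 0.95)`, then play the role of bootstrap critical values of the kind used for the confidence intervals described in the main text.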

SUPPLEMENTARY MATERIAL

Velez, A. (2025): Supplement to “The Local Projection Residual Bootstrap for AR(1) Models,” Econometric Theory Supplementary Material. To view, please visit: https://doi.org/10.1017/S0266466625100248.

Footnotes

I am deeply grateful to Ivan Canay, Federico Bugni, and Joel Horowitz for their guidance and support and for the extensive discussions that have helped shape the article. I thank the editor, co-editor, and two anonymous referees for their comments and suggestions that have significantly helped to improve this article. I am also thankful to Federico Crippa, Bruno Fava, Danil Fedchenko, Diego Huerta, Eleftheria Kelekidou, Pepe Montiel-Olea, Filip Obradovic, Mikkel Plagborg-Møller, Sebastian Sardon, and Ke-Li Xu for valuable comments and suggestions. Financial support from the Robert Eisner Memorial Fellowship and the Dissertation Year Fellowship is gratefully acknowledged.

1 Section 7 presents the LP-residual bootstrap for VAR(p) models, but its theoretical properties are unknown and left for future research (see Remarks 4.4 and 5.5 for further discussion). Appendix E.1 of the Supplementary Material reports a Monte Carlo simulation for the LP-residual bootstrap for VAR models.

References

Andrews, D. W. (1993). Exactly median-unbiased estimation of first order autoregressive/unit root models. Econometrica, 61, 139–165.
Andrews, D. W. (2002). Higher-order improvements of a computationally attractive k-step bootstrap for extremum estimators. Econometrica, 70, 119–162.
Andrews, D. W. (2004). The block–block bootstrap: Improved asymptotic refinements. Econometrica, 72, 673–700.
Bhattacharya, R. (1987). Some aspects of Edgeworth expansions in statistics and probability. New Perspectives in Theoretical and Applied Statistics, 157–171.
Bhattacharya, R. N., & Ghosh, J. K. (1978). On the validity of the formal Edgeworth expansion. Annals of Statistics, 6, 434–451.
Bose, A. (1988). Edgeworth correction by bootstrap in autoregressions. Annals of Statistics, 16, 1709–1722.
Carlstein, E. (1986). The use of subseries values for estimating the variance of a general statistic from a stationary sequence. Annals of Statistics, 14, 1171–1179.
Gonçalves, S., & Kilian, L. (2004). Bootstrapping autoregressions with conditional heteroskedasticity of unknown form. Journal of Econometrics, 123, 89–120.
Gospodinov, N. (2004). Asymptotic confidence intervals for impulse responses of near-integrated processes. The Econometrics Journal, 7, 505–527.
Götze, F., & Hipp, C. (1983). Asymptotic expansions for sums of weakly dependent random vectors. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 64, 211–239.
Götze, F., & Hipp, C. (1994). Asymptotic distribution of statistics in time series. Annals of Statistics, 22, 2062–2088.
Hall, P. (1992). The bootstrap and Edgeworth expansion (1st ed.). Springer.
Hall, P., & Horowitz, J. L. (1996). Bootstrap critical values for tests based on generalized-method-of-moments estimators. Econometrica, 64, 891–916.
Hansen, B. E. (1999). The grid bootstrap and the autoregressive model. Review of Economics and Statistics, 81, 594–607.
Herbst, E. P., & Johannsen, B. K. (2024). Bias in local projections. Journal of Econometrics, 240, 105655.
Horowitz, J. L. (2001). The bootstrap. Handbook of Econometrics, 5, 3159–3228.
Horowitz, J. L. (2019). Bootstrap methods in econometrics. Annual Review of Economics, 11, 193–224.
Inoue, A., & Kilian, L. (2002). Bootstrapping autoregressive processes with possible unit roots. Econometrica, 70, 377–391.
Inoue, A., & Kilian, L. (2020). The uniform validity of impulse response inference in autoregressions. Journal of Econometrics, 215, 450–472.
Inoue, A., & Shintani, M. (2006). Bootstrapping GMM estimators for time series. Journal of Econometrics, 133, 531–555.
Jordà, Ò. (2005). Estimation and inference of impulse responses by local projections. American Economic Review, 95, 161–182.
Kilian, L., & Kim, Y. J. (2011). How reliable are local projection estimators of impulse responses? The Review of Economics and Statistics, 93, 1460–1466.
Künsch, H. R. (1989). The jackknife and the bootstrap for general stationary observations. Annals of Statistics, 17, 1217–1241.
Lahiri, S. N. (2003). Resampling methods for dependent data. Springer Science & Business Media.
Lahiri, S. N. (1996). On Edgeworth expansion and moving block bootstrap for Studentized M-estimators in multiple linear regression models. Journal of Multivariate Analysis, 56, 42–59.
Lazarus, E., Lewis, D. J., Stock, J. H., & Watson, M. W. (2018). HAR inference: Recommendations for practice. Journal of Business & Economic Statistics, 36, 541–559.
Lusompa, A. (2023). Local projections, autocorrelation, and efficiency. Quantitative Economics, 14, 1199–1220.
Mikusheva, A. (2007). Uniform inference in autoregressive models. Econometrica, 75, 1411–1452.
Mikusheva, A. (2012). One-dimensional inference in autoregressive models with the potential presence of a unit root. Econometrica, 80, 173–212.
Mikusheva, A. (2015). Second order expansion of the t-statistic in AR(1) models. Econometric Theory, 31, 426–448.
Montiel Olea, J. L., Plagborg-Møller, M., Qian, E., & Wolf, C. K. (2024). Double robustness of local projections and some unpleasant VARithmetic. Tech. rep., National Bureau of Economic Research.
Montiel Olea, J. L., & Plagborg-Møller, M. (2021). Local projection inference is simpler and more robust than you think. Econometrica, 89, 1789–1823.
Montiel Olea, J. L., & Plagborg-Møller, M. (2022). Corrigendum: Local projection inference is simpler and more robust than you think. https://scholar.princeton.edu/sites/default/files/lp_inference_corrigendum.pdf
Nakamura, E., & Steinsson, J. (2018). Identification in macroeconomics. Journal of Economic Perspectives, 32, 59–86.
Park, J. Y. (2003). Bootstrap unit root tests. Econometrica, 71, 1845–1895.
Park, J. Y. (2006). A bootstrap theory for weakly integrated processes. Journal of Econometrics, 133, 639–672.
Pesavento, E., & Rossi, B. (2006). Small-sample confidence intervals for multivariate impulse response functions at long horizons. Journal of Applied Econometrics, 21, 1135–1155.
Phillips, P. C. (1977a). Approximations to some finite sample distributions associated with a first-order stochastic difference equation. Econometrica, 45, 463–485.
Phillips, P. C. (1977b). A general theorem in the theory of asymptotic expansions as approximations to the finite sample distributions of econometric estimators. Econometrica, 45, 1517–1534.
Phillips, P. C. (1998). Impulse response and forecast error variance asymptotics in nonstationary VARs. Journal of Econometrics, 83, 21–56.
Phillips, P. C. (2023). Estimation and inference with near unit roots. Econometric Theory, 39, 221–263.
Ramey, V. A. (2016). Macroeconomic shocks and their propagation. Handbook of Macroeconomics, 2, 71–162.
Xu, K.-L. (2023). Local projection based inference under general conditions. Tech. rep., No. 2023-001, Center for Applied Economics and Policy Research, Department of Economics, Indiana University Bloomington.
Table 1. Coverage probability (in %) of confidence intervals for $\beta(\rho,h)$ with a nominal level of 90% and $n = 95$.

Table 2. Coverage probability (in %) of confidence intervals for $\beta(\rho,h)$ with a nominal level of 90% and $n = 95$.
