Introduction
The bootstrap and related techniques have received widespread application in the applied statistics and econometric literature. The bootstrap has been used to estimate the distribution function, bias, and standard errors of statistics of interest. The reasons for the widespread use of the bootstrap are twofold. First, the bootstrap can sometimes provide distribution or variance estimates upon which to base asymptotic inferences in cases where the asymptotic variances are difficult or impossible to obtain in the usual way. Second, and more importantly, the inferences based on the bootstrap are perceived to be more accurate in finite samples than those based on standard asymptotic approaches.
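The basic resampling idea described above can be illustrated with a minimal sketch: estimating the standard error of a statistic by recomputing it on samples drawn with replacement from the data. The function name, sample sizes, and use of the median here are illustrative choices, not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_se(data, statistic, n_boot=2000, rng=rng):
    """Nonparametric bootstrap estimate of the standard error of
    `statistic`: resample the data with replacement, recompute the
    statistic on each resample, and take the standard deviation of
    the bootstrap replicates."""
    n = len(data)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        sample = rng.choice(data, size=n, replace=True)
        reps[b] = statistic(sample)
    return reps.std(ddof=1)

# Example: the sampling variance of the median has no simple closed
# form, so the bootstrap is a natural tool here.
data = rng.normal(loc=0.0, scale=1.0, size=100)
se_hat = bootstrap_se(data, np.median)
```

This kind of estimate is what the first motivation refers to: the bootstrap supplies a variance estimate even when the asymptotic variance is hard to derive analytically.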
In the estimation of the distribution function, there is a sound basis for the second reason. The bootstrap distribution function of a statistic can be narrowly defined as the exact finite-sample distribution function evaluated at estimates of the parameters. In models with sufficient regularity, for asymptotically pivotal statistics (those whose limiting distribution does not depend on unknown parameters), the standard bootstrap yields an approximation that is closer to the true distribution, in terms of orders of probability, than the usual limiting distribution. By nesting the original bootstrap within another bootstrap, the approximation error for non-pivotal statistics can be similarly reduced. Likewise, nested bootstraps can conceivably be used to further reduce the approximation error for pivotal statistics.
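A standard example of bootstrapping an asymptotically pivotal statistic is the bootstrap-t (percentile-t) confidence interval for a mean, which resamples the studentized statistic rather than the estimate itself. The sketch below is illustrative: the function name and sample sizes are assumptions, and the refinement claims in the text concern orders of probability, which a single run cannot demonstrate.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_t_ci(data, n_boot=2000, alpha=0.05, rng=rng):
    """Bootstrap-t confidence interval for the mean. The studentized
    statistic t = (mean* - mean) / se* is asymptotically pivotal, so
    bootstrapping its distribution gives the higher-order refinement
    discussed in the text."""
    n = len(data)
    mean = data.mean()
    se = data.std(ddof=1) / np.sqrt(n)
    t_stars = np.empty(n_boot)
    for b in range(n_boot):
        s = rng.choice(data, size=n, replace=True)
        se_star = s.std(ddof=1) / np.sqrt(n)
        t_stars[b] = (s.mean() - mean) / se_star
    lo, hi = np.quantile(t_stars, [alpha / 2, 1 - alpha / 2])
    # Invert the bootstrap quantiles of t to get the interval.
    return mean - hi * se, mean - lo * se

data = rng.normal(size=50)
ci = bootstrap_t_ci(data)
```

Resampling the pivotal quantity, rather than the raw mean, is what distinguishes this interval from the simpler percentile interval.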
The limitation to this otherwise rosy picture is that, as narrowly defined, the bootstrap requires knowledge, up to unknown but estimable parameters, of the exact finite-sample distribution of the statistic of interest.