Abstract. Model diagnostics are shown to have little power unless alternative hypotheses can be narrowly defined. For example, independence of observations cannot be tested against general forms of dependence. Thus, the basic assumptions in regression models cannot be inferred from the data. Equally, the proportionality assumption in proportional-hazards models is not testable. Specification error is a primary source of uncertainty in forecasting, and this uncertainty will be difficult to resolve without external calibration. Model-based causal inference is even more problematic.
The object here is to sketch a demonstration that, unless additional regularity conditions are imposed, model diagnostics have power only against a circumscribed class of alternative hypotheses. The chapter is organized around the familiar requirements of statistical models. Theorems 1 and 2, for example, consider the hypothesis that distributions are continuous and have densities. According to the theorems, such hypotheses cannot be tested without additional structure.
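The difficulty behind such hypotheses can be illustrated numerically (the example below is an illustration of my own, not taken from the chapter): a purely discrete law supported on the grid {1/N, 2/N, ..., 1} is within 1/N of the uniform law in Kolmogorov distance, so no test based on a sample of fixed size can separate "continuous with a density" from "discrete" once N is large.

```python
import numpy as np

# Kolmogorov (sup-norm) distance between the uniform CDF F(x) = x on [0, 1]
# and the CDF of the discrete uniform law on the grid {1/N, 2/N, ..., N/N}.
# The discrete CDF is G(x) = floor(N * x) / N for x in [0, 1).
def kolmogorov_distance(N, grid_points=100_000):
    x = np.linspace(0.0, 1.0, grid_points, endpoint=False)
    G = np.floor(N * x) / N
    return np.max(np.abs(G - x))

# The distance shrinks like 1/N: for large N the discrete law is
# statistically indistinguishable from the continuous one.
for N in (10, 100, 1000):
    print(N, kolmogorov_distance(N))
```

The computation only confirms the elementary bound sup |G - F| <= 1/N; the point is that continuity of the underlying distribution is not an observable property of any finite sample.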
Let us agree, then, that distributions are smooth. Can we test independence? Theorems 3 and 4 indicate the difficulty. Next, we grant independence and consider tests that distinguish between (i) independent and identically distributed random variables on the one hand, and (ii) independent but differently distributed variables on the other. Theorem 5 shows that, in general, power is lacking.
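A toy simulation (my own construction, using ±1 variables rather than the unit interval) conveys the flavor of the difficulty with testing independence. Build the sequence in blocks (A, B, AB), where A and B are independent fair signs: any two entries of the sequence are independent, so every lag correlation is zero, yet every third entry is a deterministic function of the two before it.

```python
import numpy as np

rng = np.random.default_rng(0)
n_blocks = 10_000

# Each block is (A, B, A*B) with A, B independent fair signs.
# Any two entries of the resulting sequence are independent,
# but the sequence as a whole is far from independent.
A = rng.choice([-1, 1], size=n_blocks)
B = rng.choice([-1, 1], size=n_blocks)
s = np.empty(3 * n_blocks, dtype=int)
s[0::3], s[1::3], s[2::3] = A, B, A * B

def lag_corr(x, k):
    return np.corrcoef(x[:-k], x[k:])[0, 1]

# A correlation-based diagnostic sees essentially nothing at any small lag ...
print([round(lag_corr(s, k), 3) for k in (1, 2, 3)])

# ... yet the dependence is total: this identity holds without error.
print(np.all(s[2::3] == s[0::3] * s[1::3]))  # True
```

Diagnostics keyed to pairwise behavior thus have no power against this alternative; detecting the dependence requires knowing, in advance, the specific three-term structure to look for.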
For ease of exposition, we present results for the unit interval; transformation to the positive half-line or the whole real line is easy.
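The transformations in question are ordinary monotone bijections; for instance (my choice of maps, not necessarily the chapter's), x/(1 - x) carries (0, 1) onto the positive half-line and the logit log(x/(1 - x)) carries it onto the whole real line. Being continuous and strictly increasing, such maps preserve continuity of distributions and independence, so results proved on the unit interval transfer. A minimal numerical check:

```python
import numpy as np

# Monotone bijections off the unit interval (illustrative choices):
to_half_line = lambda x: x / (1.0 - x)          # (0, 1) -> (0, inf)
to_real_line = lambda x: np.log(x / (1.0 - x))  # (0, 1) -> (-inf, inf): the logit

from_half_line = lambda y: y / (1.0 + y)             # inverse of the first map
from_real_line = lambda y: 1.0 / (1.0 + np.exp(-y))  # inverse logit

x = np.linspace(0.01, 0.99, 99)

# Strictly increasing, hence order-preserving:
assert np.all(np.diff(to_half_line(x)) > 0)
assert np.all(np.diff(to_real_line(x)) > 0)

# Bijections: the inverses recover x up to rounding error.
assert np.allclose(from_half_line(to_half_line(x)), x)
assert np.allclose(from_real_line(to_real_line(x)), x)
```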