In this chapter, we consider matrix operators that are used throughout the book and special square matrices, namely triangular matrices and band matrices, that will crop up continually in our future work. From the elements of an m × n matrix A = (a_ij) and a p × q matrix B = (b_ij), the Kronecker product forms an mp × nq matrix. The vec operator forms a column vector from a given matrix by stacking its columns one underneath the other. The devec operator forms a row vector from a given matrix by stacking its rows one alongside the other. In like manner, a generalized vec operator forms a new matrix from a given matrix by stacking a certain number of its columns under each other, and a generalized devec operator forms a new matrix by stacking a certain number of its rows alongside each other. It is well known that the Kronecker product is intimately connected with the vec operator, but we shall see that this connection extends to the devec and generalized operators as well. Finally, we look at special square matrices with zeros above or below the main diagonal, or whose nonzero elements form a band surrounded by zeros.
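As a quick illustration of these definitions, here is a minimal numpy sketch; the helper names vec and devec are ours, chosen for exposition, and are not taken from the book or from any library.

```python
import numpy as np

# A 2 x 3 matrix A and a 2 x 2 matrix B.
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[0, 1],
              [1, 0]])

# Kronecker product: every element a_ij of A is replaced by the block
# a_ij * B, so the result is (2*2) x (3*2) = 4 x 6.
K = np.kron(A, B)
assert K.shape == (4, 6)

def vec(M):
    """Stack the columns of M underneath each other into a column vector."""
    return M.reshape(-1, 1, order="F")  # Fortran order runs down the columns

def devec(M):
    """Stack the rows of M alongside each other into a row vector."""
    return M.reshape(1, -1, order="C")  # C order runs along the rows

print(vec(A).ravel())    # [1 4 2 5 3 6]
print(devec(A).ravel())  # [1 2 3 4 5 6]
```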
An alternative title to this book could have been The Application of Classical Statistical Procedures to Econometrics or something along these lines. What it purports to do is provide the reader with mathematical tools that facilitate the application of classical statistical procedures to the complicated statistical models that we are confronted with in econometrics. It then demonstrates how these procedures can be applied to a sequence of linear econometric models, each model being more complicated statistically than the previous one. The statistical procedures I have in mind are those centered around the likelihood function: procedures that involve the score vector, the information matrix, and the Cramér–Rao lower bound, together with maximum-likelihood estimation and classical test statistics.
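For reference, these building blocks have their standard textbook definitions; for a log-likelihood l(θ; y) with parameter vector θ, they are (notation ours):

```latex
% Standard likelihood-based building blocks for a log-likelihood l(\theta; y):
s(\theta) = \frac{\partial l(\theta;\, y)}{\partial \theta}
  \quad \text{(score vector)},
\qquad
\mathcal{I}(\theta) = -\,\mathrm{E}\!\left[
    \frac{\partial^{2} l(\theta;\, y)}{\partial \theta\,\partial \theta'}
  \right]
  \quad \text{(information matrix)},
% and, for any unbiased estimator \hat{\theta},
\operatorname{Var}(\hat{\theta}) - \mathcal{I}(\theta)^{-1}
  \ \text{is positive semidefinite}
  \quad \text{(Cram\'er--Rao lower bound)}.
```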
Until recently, such procedures were little used by econometricians. The likelihood function in most econometric models is complicated, and the first-order conditions for maximizing this function usually give rise to a system of nonlinear equations that is not easily solved. As a result, econometricians developed their own class of estimators, instrumental variable estimators, that had the same asymptotic properties as those of maximum-likelihood estimators (MLEs) but were far more tractable mathematically [see Bowden and Turkington (1990)]. Nor did econometricians make much use of the prescribed classical statistical procedures for obtaining test statistics for the hypotheses of interest in econometric models; rather, test statistics were developed on an ad hoc basis.
The linear-regression model is without doubt the best-known statistical model in both the material sciences and the social sciences. Because it is so well known, it provides us with a good starting place for the introduction of classical statistical procedures. Moreover, it furnishes an easy first application of matrix calculus, an application that assuredly becomes more complicated in future models, and its inclusion ensures completeness in our sequence of statistical models.
The linear-regression model is modified in one way only to provide our basic model. Lagged values of the dependent variable will be allowed to appear on the right-hand side of the regression equation.
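In generic notation (ours, not necessarily the book's), the basic model is therefore a regression equation of the form

```latex
% Linear regression with lagged dependent variables among the regressors:
y_t = \gamma_1\, y_{t-1} + \cdots + \gamma_p\, y_{t-p} + x_t' \beta + u_t,
\qquad t = 1, \ldots, n .
```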
Far more worthy candidates for the mathematical tools presented in the preceding chapters are variations of the basic model that we achieve by allowing the disturbances to be correlated, forming either an autoregressive system or a moving-average system. These modifications greatly increase the complexity of the model. Lagged values of the dependent variable appearing among the independent variables, when coupled with correlated disturbances, make the asymptotic theory associated with the application of classical statistical procedures far more difficult. The same combination also makes the differentiation required in this application more difficult.
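To fix ideas, the simplest first-order versions of these two disturbance schemes (illustrative notation ours) are

```latex
% First-order autoregressive and moving-average disturbances,
% with \varepsilon_t white noise:
u_t = \rho\, u_{t-1} + \varepsilon_t
  \quad \text{(AR(1))},
\qquad
u_t = \varepsilon_t + \theta\, \varepsilon_{t-1}
  \quad \text{(MA(1))}.
```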
Our work with these two variations of the basic linear-regression model then requires applications of the results and concepts discussed in the first four chapters. Particularly useful in this context will be the properties of shifting matrices and generalized vec operators discussed in Sections 3.7 and 2.4, respectively.
This book concerns itself with the mathematics behind the application of classical statistical procedures to econometric models. I first tried to apply such procedures in 1983 when I wrote a book with Roger Bowden on instrumental variable estimation. I was impressed with the amount of differentiation involved and the difficulty I had in recognizing the end product of this process. I thought there must be an easier way of doing things. Of course at the time, like most econometricians, I was blissfully unaware of matrix calculus and the existence of zero-one matrices. Since then several books have been published in these areas showing us the power of these concepts. See, for example, Graham (1981), Magnus (1988), Magnus and Neudecker (1999), and Lutkepohl (1996).
This present book arose when I set myself two tasks: first, to make myself a list of the rules of matrix calculus that were most useful in applying classical statistical procedures to econometrics; second, to work out the basic building blocks of such procedures – the score vector, the information matrix, and the Cramér–Rao lower bound – for a sequence of econometric models of increasing statistical complexity. I found that the mathematics involved working with operators that are generalizations of the well-known vec operator, and that a very simple zero-one matrix kept cropping up. I called this matrix a shifting matrix, for reasons that become obvious in the book.
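A minimal sketch of such a matrix, assuming the common convention of ones on the first subdiagonal (the book's exact definition may differ):

```python
import numpy as np

# n x n shifting matrix: ones on the first subdiagonal, zeros elsewhere.
# Premultiplying a vector pushes every element down one place and puts
# a zero in the first position. (Illustrative convention only; the book
# may instead place the ones above the diagonal.)
n = 4
S = np.eye(n, k=-1)

x = np.array([1.0, 2.0, 3.0, 4.0])
print(S @ x)       # [0. 1. 2. 3.]
print(S @ S @ x)   # [0. 0. 1. 2.]  -- applying it twice shifts twice
```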
On a scale of statistical complexity, the seemingly unrelated regression equations (SURE) model is one step up from the linear-regression model. The essential feature that distinguishes the two models is that in the former model the disturbances are contemporaneously correlated whereas in the latter model the disturbances are assumed independent.
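In generic notation (ours), a SURE system stacks G regression equations whose disturbances are correlated across equations at each point in time:

```latex
% A SURE system of G equations, observed for t = 1, ..., n:
y_i = X_i \beta_i + u_i, \qquad i = 1, \ldots, G,
% with contemporaneous correlation across equations:
\mathrm{E}(u_{it}\, u_{jt}) = \sigma_{ij}, \qquad
\mathrm{E}(u_{it}\, u_{js}) = 0 \quad \text{for } t \neq s .
```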
In this chapter, we apply classical statistical procedures to three variations of the SURE model: First we look at the standard model; then, as we did with the linear-regression model, we look at two versions of the model in which the disturbances are subject to vector autoregressive processes and vector moving-average processes. In our analysis, we shall find our work on generalized vecs and devecs, covered in Sections 2.4 and 4.7, particularly relevant. The duplication matrix discussed in Section 3.5 will make an appearance, as will elimination matrices (Section 3.4).
Following the practice established in the previous chapter, the asymptotic analysis needed in the evaluation of the information matrices is given in Appendix 6.A at the end of the chapter, in which appropriate assumptions are made about the existence of certain probability limits. The matrix calculus rules used in the differentiation in this chapter can be found in the tables at the end of Chapter 4.