This book presents the reader with new operators and matrices that arise in the area of matrix calculus. The properties of these mathematical concepts are investigated and linked with zero-one matrices such as the commutation matrix. Elimination and duplication matrices are revisited and partitioned into submatrices. Studying the properties of these submatrices facilitates achieving new results for the original matrices themselves. Different concepts of matrix derivatives are presented and transformation principles linking these concepts are obtained. One of these concepts is used to derive new matrix calculus results, some involving the new operators and others the derivatives of the operators themselves. The last chapter contains applications of matrix calculus, including optimization, differentiation of log-likelihood functions, iterative interpretations of maximum likelihood estimators and a Lagrangian multiplier test for endogeneity.
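As a concrete illustration of one of these zero-one matrices, the sketch below builds the commutation matrix K and checks its standard defining property, K vec(A) = vec(A'), where vec stacks the columns of a matrix. This is a minimal NumPy sketch of that textbook property, not code from the book; the helper names are ours.

```python
import numpy as np

def commutation_matrix(m, n):
    """Zero-one matrix K of order mn x mn with K @ vec(A) = vec(A.T)
    for any m x n matrix A, where vec stacks columns."""
    K = np.zeros((m * n, m * n), dtype=int)
    for i in range(m):
        for j in range(n):
            # vec(A) places A[i, j] at position j*m + i;
            # vec(A.T) places it at position i*n + j.
            K[i * n + j, j * m + i] = 1
    return K

A = np.arange(6).reshape(2, 3)            # a 2 x 3 example matrix
vec = lambda M: M.reshape(-1, order="F")  # column-stacking vec operator
K = commutation_matrix(2, 3)
assert np.array_equal(K @ vec(A), vec(A.T))
```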
This 2007 book provides a systematic and self-contained account of the fast-developing theory of complex social networks. Social networks are central to the understanding of most socio-economic phenomena in the modern world. The classical approach to studying them relies on a methodology that abstracts from their size and complexity. In contrast, the approach taken in this book keeps complexity at the core, whilst integrating it with the incentive considerations that are preeminent in traditional economic analysis. The treatment starts with a detailed discussion of the basic models that act as 'benchmarks' for the complex-network literature: random networks, small worlds, and scale-free networks, before studying three different forces that underlie almost all network phenomena in social contexts: diffusion, search, and play. Finally, these forces are combined into a unified framework that is brought to bear on the issue of network formation and the coevolution of agents' behaviour and their pattern of interaction.
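The three benchmark models named above are standard constructions and can be generated with the networkx package; the sketch below uses illustrative parameter values of our own choosing, not calibrations from the book.

```python
import networkx as nx

n = 1000  # illustrative network size

random_net = nx.erdos_renyi_graph(n, p=0.01)           # Poisson random network
small_world = nx.watts_strogatz_graph(n, k=10, p=0.1)  # ring lattice with rewiring
scale_free = nx.barabasi_albert_graph(n, m=5)          # preferential attachment

for name, g in [("random", random_net), ("small world", small_world),
                ("scale free", scale_free)]:
    # Clustering and maximum degree already separate the three benchmarks.
    print(name, nx.average_clustering(g), max(dict(g.degree()).values()))
```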
This book provides a general framework for specifying, estimating and testing time series econometric models. Special emphasis is given to estimation by maximum likelihood, but other methods are also discussed, including quasi-maximum likelihood estimation, generalised method of moments estimation, nonparametric estimation and estimation by simulation. An important advantage of adopting the principle of maximum likelihood as the unifying framework for the book is that many of the estimators and test statistics proposed in econometrics can be derived within a likelihood framework, thereby providing a coherent vehicle for understanding their properties and interrelationships. In contrast to many existing econometric textbooks, which deal mainly with the theoretical properties of estimators and test statistics through a theorem-proof presentation, this book squarely addresses implementation to provide direct conduits between the theory and applied work.
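To illustrate the unifying role of maximum likelihood, here is a minimal sketch, not taken from the book, that estimates the mean and standard deviation of a normal sample by numerically minimising the negative log-likelihood with scipy; the log-parameterisation of the standard deviation is our choice, made to keep it positive during optimisation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y = rng.normal(loc=1.0, scale=2.0, size=500)  # simulated data

def negloglik(theta):
    mu, log_sigma = theta            # log-parameterise sigma to keep it positive
    sigma = np.exp(log_sigma)
    return -np.sum(-0.5 * np.log(2 * np.pi) - np.log(sigma)
                   - 0.5 * ((y - mu) / sigma) ** 2)

res = minimize(negloglik, x0=np.array([0.0, 0.0]), method="BFGS")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)   # close to the sample mean and standard deviation
```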
We exhibit seven linear codes exceeding the current best known minimum distance $d$ for their dimension $k$ and block length $n$. Each code is defined over $\mathbb{F}_8$, and their invariants $[n, k, d]$ are given by $[49, 13, 27]$, $[49, 14, 26]$, $[49, 16, 24]$, $[49, 17, 23]$, $[49, 19, 21]$, $[49, 25, 16]$ and $[49, 26, 15]$. Our method includes an exhaustive search of all monomial evaluation codes generated by points in the $[0, 5] \times [0, 5]$ lattice square.
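A sketch of the basic building block of such a search, under our reading of the abstract: since $n = 49 = 7^2$, we assume evaluation at the 49 points of $(\mathbb{F}_8^\times)^2$; the arithmetic of $\mathbb{F}_8$ is implemented with the primitive polynomial $x^3 + x + 1$ (one common choice, not necessarily the authors'), and a hypothetical set of three monomials stands in for the subsets the actual search would enumerate.

```python
from itertools import product

# GF(8) arithmetic with primitive polynomial x^3 + x + 1 (one common choice);
# field elements are the integers 0..7 and addition is bitwise XOR.
def gf8_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:
            a ^= 0b1011  # reduce modulo x^3 + x + 1
    return r

def gf8_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf8_mul(r, a)
    return r

# Assumed evaluation points: all 49 pairs of nonzero field elements.
points = [(x, y) for x in range(1, 8) for y in range(1, 8)]

# A hypothetical choice of k = 3 monomials x^a y^b with (a, b) in [0,5]x[0,5];
# the paper's search ranges over subsets of all 36 such monomials.
monomials = [(0, 0), (1, 2), (3, 5)]

# Generator matrix: row i evaluates monomial i at every point.
G = [[gf8_mul(gf8_pow(x, a), gf8_pow(y, b)) for (x, y) in points]
     for (a, b) in monomials]

# Brute-force minimum distance: minimum Hamming weight over all
# nonzero messages (feasible only for small k).
k, n = len(G), len(points)
dmin = n
for msg in product(range(8), repeat=k):
    if not any(msg):
        continue
    cw = [0] * n
    for i, m in enumerate(msg):
        if m:
            for j in range(n):
                cw[j] ^= gf8_mul(m, G[i][j])
    dmin = min(dmin, sum(c != 0 for c in cw))
print(f"[{n}, {k}, {dmin}] code")
```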
Under certain conditions known as regularity conditions, the maximum likelihood estimator introduced in Chapter 1 possesses a number of important statistical properties and the aim of this chapter is to derive these properties. In large samples, the maximum likelihood estimator is consistent, efficient and normally distributed. In small samples, it satisfies an invariance property, is a function of sufficient statistics and in some, but not all, cases, is unbiased and unique. As the derivation of analytical expressions for the finite-sample distributions of the maximum likelihood estimator is generally complicated, computationally intensive methods based on Monte Carlo simulations or series expansions are used to examine some of these properties.
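A Monte Carlo sketch of the kind just described, for an exponential model with an illustrative rate parameter of our own choosing: the maximum likelihood estimator is the reciprocal of the sample mean, and its simulated sampling distribution can be compared with the asymptotic normal approximation whose standard deviation is the square root of the inverse information.

```python
import numpy as np

rng = np.random.default_rng(42)
theta0, T, R = 2.0, 200, 5000   # illustrative true rate, sample size, replications

# Exponential model f(y; theta) = theta * exp(-theta * y): the MLE is 1 / ybar.
theta_hat = np.array([1.0 / rng.exponential(scale=1.0 / theta0, size=T).mean()
                      for _ in range(R)])

print("mean of MLE:", theta_hat.mean())        # consistency: close to theta0
print("simulated std:", theta_hat.std())
print("asymptotic std:", theta0 / np.sqrt(T))  # sqrt(theta0**2 / T)
```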
The maximum likelihood estimator encompasses many other estimators often used in econometrics, including ordinary least squares and instrumental variables (Chapter 5), nonlinear least squares (Chapter 6), the Cochrane-Orcutt method for the autocorrelated regression model (Chapter 7), weighted least squares estimation of heteroskedastic regression models (Chapter 8) and the Johansen procedure for cointegrated nonstationary time series models (Chapter 18).
Preliminaries
Before deriving the formal properties of the maximum likelihood estimator, four important preliminary concepts are reviewed. The first presents some stochastic models of time series and briefly discusses their properties. The second is concerned with the convergence of a sample average to its population mean as T → ∞, known as the weak law of large numbers. The third identifies the scaling factor ensuring convergence of scaled random variables to nondegenerate distributions. The fourth reviews central limit theorems, which establish the conditions under which suitably scaled sample statistics converge to a normal distribution.
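A small simulation sketch, with illustrative values of our own, of the second and third concepts: the sample average of independent draws converges to the population mean, while the deviation scaled by the square root of T remains of constant order.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = 0.5  # population mean of a Uniform(0, 1) draw

for T in [10, 100, 1000, 10000]:
    y = rng.uniform(0.0, 1.0, size=T)
    ybar = y.mean()
    # WLLN: ybar - mu shrinks with T; scaling by sqrt(T) keeps the
    # deviation O(1), the scaling used in central limit theorems.
    print(T, ybar - mu, np.sqrt(T) * (ybar - mu))
```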
This chapter addresses time series models that are nonlinear in the variance. It transpires that the variance of the returns of financial assets, commonly referred to as the volatility, is a crucial aspect of much of modern finance theory, because it is a key input to areas such as portfolio construction, risk management and option pricing. In this chapter, the particular nonlinear variance specification investigated is the autoregressive conditional heteroskedasticity (ARCH) class of models introduced by Engle (1982). This model also represents a special case of the heteroskedastic regression models discussed in Chapter 8, where lags of the dependent variable are now included as explanatory variables of the variance.
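A minimal simulation sketch of an ARCH(1) process, in which the conditional variance is a constant plus a multiple of the previous squared observation; the parameter values below are illustrative, not estimates from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
T, alpha0, alpha1 = 2000, 0.1, 0.5   # illustrative ARCH(1) parameters

y = np.zeros(T)
sigma2 = np.zeros(T)
sigma2[0] = alpha0 / (1.0 - alpha1)  # start at the unconditional variance
y[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, T):
    sigma2[t] = alpha0 + alpha1 * y[t - 1] ** 2   # conditional variance
    y[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# Volatility clustering: y is serially uncorrelated but y**2 is not.
```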
As is the case with nonlinear models in the mean, however, a wide range of potential nonlinearities can be entertained when modelling the variance. There are two other important approaches to modelling the variance of financial asset returns which are only briefly touched on. The first is the stochastic volatility model, introduced by Taylor (1982) and discussed in Chapters 9 and 12. The second is realised volatility, proposed by Andersen, Bollerslev, Diebold and Labys (2001, 2003), which is only explored in the context of the MIDAS model of Ghysels, Santa-Clara and Valkanov (2005) in Exercise 10 of this chapter.
Statistical Properties of Asset Returns
Panel (a) of Figure 20.1 provides a plot of the daily percentage returns, yt, on the FTSE from 5 January 1989 to 31 December 2007, T = 4952. At first sight, the returns appear to be random, a point highlighted in panel (c), which shows that the autocorrelation function of returns is flat. Closer inspection of the returns reveals periods when returns hardly change (market tranquillity) and others where large movements in returns are followed by further large changes (market turbulence).
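These two features, a flat autocorrelation function for returns but persistence in squared returns, can be checked with a short sketch; the series below is a simulated ARCH-type stand-in of our own, not the FTSE data.

```python
import numpy as np

def acf(x, max_lag=10):
    """Sample autocorrelations of x at lags 1..max_lag."""
    x = x - x.mean()
    denom = np.sum(x ** 2)
    return np.array([np.sum(x[lag:] * x[:-lag]) / denom
                     for lag in range(1, max_lag + 1)])

# ARCH-type stand-in with volatility clustering; replace with the actual
# return series when working with real data.
rng = np.random.default_rng(0)
T = 5000
y = np.zeros(T)
for t in range(1, T):
    y[t] = np.sqrt(0.1 + 0.5 * y[t - 1] ** 2) * rng.standard_normal()

print("returns ACF:", acf(y).round(3))       # flat, near zero
print("squared ACF:", acf(y ** 2).round(3))  # positive and decaying
```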
The class of models discussed in Parts ONE and TWO of the book assume that the specification of the likelihood function, in terms of the joint probability distribution of the variables, is correct and that the regularity conditions set out in Chapter 2 are satisfied. Under these conditions, the maximum likelihood estimator has the desirable properties discussed in Chapter 2, namely that it is consistent, asymptotically normally distributed and asymptotically efficient because in the limit it achieves the Cramér-Rao lower bound given by the inverse of the information matrix.
This chapter addresses the problem investigated by White (1982), namely maximum likelihood estimation when the likelihood function is misspecified. In general, the maximum likelihood estimator in the presence of misspecification does not display the usual properties. However, there are a number of important special cases in which the maximum likelihood estimator of a misspecified model still provides a consistent estimator for some of the population parameters in the true model. As the maximum likelihood estimator is based on a misspecified model, this estimator is referred to as the quasi-maximum likelihood estimator. Perhaps the most important case is the estimation of the conditional mean in the linear regression model, discussed in detail in Part TWO, where potential misspecifications arise from assuming either normality,
or constant variance, or independence.
One important difference between the maximum likelihood estimator based on the true probability distribution and the quasi-maximum likelihood estimator is that the usual estimator of the variance derived in Chapter 2, which relies on the information matrix equality holding, is no longer appropriate for the quasi-maximum likelihood estimator.
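A sketch of this point for the linear regression case, with simulated data of our own: under heteroskedastic errors the quasi-maximum likelihood (here, ordinary least squares) estimator of the mean remains consistent, but the usual covariance based on the information matrix is replaced by a sandwich estimator, shown here in White's heteroskedasticity-consistent form.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 500
x = rng.standard_normal(T)
X = np.column_stack([np.ones(T), x])
u = rng.standard_normal(T) * np.exp(0.5 * x)   # heteroskedastic errors
y = X @ np.array([1.0, 2.0]) + u

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # QMLE of the mean = OLS
resid = y - X @ beta_hat

# 'Usual' covariance from the assumed information matrix: s2 * (X'X)^{-1}.
s2 = resid @ resid / T
cov_info = s2 * np.linalg.inv(X.T @ X)

# Sandwich (White) covariance: (X'X)^{-1} X' diag(u^2) X (X'X)^{-1}.
XtX_inv = np.linalg.inv(X.T @ X)
meat = (X * resid[:, None] ** 2).T @ X
cov_sandwich = XtX_inv @ meat @ XtX_inv

print(np.sqrt(np.diag(cov_info)))      # understates the slope's std error here
print(np.sqrt(np.diag(cov_sandwich)))
```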
The class of linear regression models with normal disturbances discussed in Chapter 5 is now extended to allow for nonlinearities. Three types of extensions are investigated. The first is where the exogenous variable xt is specified as a nonlinear function. The second is where the dependent variable yt is specified as a nonlinear function. The third is where the disturbance term ut is specified to have a non-normal distribution. Nonlinear specifications of time series models are discussed in Part SIX where nonlinearities in the conditional mean are investigated in Chapter 19, nonlinearities in the conditional variance are discussed in Chapter 20 and nonlinearities arising from models where the dependent variable is a discrete random variable are discussed in Chapter 21.
As with the treatment of linear regression models in the previous chapter, nonlinear regression models are examined within the maximum likelihood framework. Establishing this link ensures that methods typically used to estimate nonlinear regression models, including Gauss-Newton, nonlinear least squares and robust estimators, immediately inherit the same asymptotic properties as the maximum likelihood estimator. Moreover, it is also shown that many of the statistics used to test nonlinear regression models are special cases of the LR, Wald or LM tests discussed in Chapter 4. An important example of this property is a non-nested test used to discriminate between models that is based on a variation of an LR test.
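A minimal Gauss-Newton sketch for an illustrative exponential regression of our own devising, in which the dependent variable is a scaled exponential function of the regressor plus a disturbance; the model, data and starting values are not the chapter's.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 200
x = rng.uniform(0.0, 2.0, T)
y = 1.5 * np.exp(0.8 * x) + 0.2 * rng.standard_normal(T)   # simulated data

beta = np.array([1.0, 0.5])   # starting values
for _ in range(20):
    f = beta[0] * np.exp(beta[1] * x)          # fitted mean
    u = y - f                                  # residuals
    # Jacobian of the mean function with respect to (beta1, beta2).
    J = np.column_stack([np.exp(beta[1] * x),
                         beta[0] * x * np.exp(beta[1] * x)])
    step = np.linalg.solve(J.T @ J, J.T @ u)   # Gauss-Newton update
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:
        break

print(beta)   # close to (1.5, 0.8)
```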
The maximum likelihood framework set out in Part ONE is now applied to estimating and testing regression models. This chapter focusses on linear models, where the conditional mean of a dependent variable is specified to be a linear function of a set of exogenous variables. Extensions to this basic model are investigated in Chapter 6 (nonlinear regression), Chapter 7 (autocorrelation) and Chapter 8 (heteroskedasticity).
Single equation models include the linear regression model and the constant mean model. For single equation regression models, the maximum likelihood estimator has an analytical solution that is equivalent to the ordinary least squares estimator. The class of multiple equation models includes simultaneous equation models with multiple dependent and exogenous variables, seemingly unrelated systems and recursive models. In this instance, the maximum likelihood estimator is known as the full information maximum likelihood (FIML) estimator because the entire system is used to estimate all of the model parameters jointly. The FIML estimator is related to the instrumental variable estimator commonly used to estimate simultaneous models and, in some cases, the two estimators are equivalent. Unlike linear single equation models, analytical solutions of the maximum likelihood estimator for systems of linear equations are only available in certain special cases.
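A quick numerical check of the single-equation equivalence claimed above, using simulated data of our own: maximising the normal regression log-likelihood numerically reproduces the ordinary least squares solution.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
T = 300
x = rng.standard_normal(T)
X = np.column_stack([np.ones(T), x])
y = X @ np.array([0.5, -1.0]) + rng.standard_normal(T)

# Negative log-likelihood of the normal linear regression model
# (additive constants dropped); sigma is log-parameterised.
def negloglik(theta):
    beta, log_sigma = theta[:2], theta[2]
    u = y - X @ beta
    return T * log_sigma + 0.5 * np.sum(u ** 2) / np.exp(2 * log_sigma)

mle = minimize(negloglik, x0=np.zeros(3), method="BFGS").x[:2]
ols = np.linalg.solve(X.T @ X, X.T @ y)
print(mle, ols)   # the two estimates coincide up to optimiser tolerance
```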
Many of the examples considered in Part ONE specify the distribution of the observable random variable, yt. Regression models, by contrast, specify the distribution of the unobservable disturbance, ut, which means that maximum likelihood estimation cannot be used directly because this method requires evaluating the log-likelihood function at the observed values of the data.
The regression models considered in Chapters 5 to 7 allow for the mean of the distribution of the dependent variable to vary over time by specifying the mean as a function of a set of exogenous variables. An important feature of these models is that the mean is specified to be time-varying but the variance is assumed to be constant, or homoskedastic. A natural extension of homoskedastic regression models, therefore, is to specify the variance as a function of a set of exogenous variables, thereby allowing the variance to be time-varying as well. This class of model is referred to as the heteroskedastic regression model.
In this chapter, the maximum likelihood framework is applied to estimating and testing the heteroskedastic regression model. More general models, in which both heteroskedasticity and autocorrelation structures are present in systems of equations, obtained by combining the variance specifications of this chapter with the autocorrelation specifications of Chapter 7, are also considered. In specifying this class of model, the parametric form of the distribution of the disturbances is usually assumed to be normal, but this assumption can also be relaxed.
As with the autocorrelated regression model, estimators and testing procedures commonly applied to the heteroskedastic regression model are shown to be special cases of the maximum likelihood framework developed in Part ONE. The estimators that are discussed include weighted least squares and zig-zag algorithms, while the tests that are covered include the Breusch-Pagan and White tests of heteroskedasticity.
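A sketch of the studentised (T times R-squared) form of the Breusch-Pagan test on simulated data of our own: regress the squared OLS residuals on the candidate variance regressors and compare the resulting LM statistic with a chi-squared critical value.

```python
import numpy as np

rng = np.random.default_rng(6)
T = 400
x = rng.standard_normal(T)
X = np.column_stack([np.ones(T), x])
u = rng.standard_normal(T) * np.sqrt(np.exp(1.0 + 0.8 * x))  # heteroskedastic
y = X @ np.array([1.0, 2.0]) + u

# Step 1: OLS residuals from the mean equation.
e = y - X @ np.linalg.solve(X.T @ X, X.T @ y)

# Step 2: auxiliary regression of squared residuals on the regressors.
z = e ** 2
zhat = X @ np.linalg.solve(X.T @ X, X.T @ z)
r2 = 1.0 - np.sum((z - zhat) ** 2) / np.sum((z - z.mean()) ** 2)

# LM statistic: T * R^2, asymptotically chi-squared with one degree of
# freedom here (the number of variance regressors excluding the constant).
print("LM =", T * r2)
```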