The maximum likelihood method of estimation is based on specifying a likelihood function that, in turn, requires specifying a particular form for the joint distribution of the underlying random variables. This requirement is now relaxed so that the model rests only on the specification of moments of certain functions of the random variables, in an approach known as the generalised method of moments (GMM). In the case where the moments used in the GMM procedure correspond to the distribution specified in the maximum likelihood procedure, the two estimators are equivalent. In essence, the choice between maximum likelihood and GMM then boils down to a trade-off between the statistical efficiency of a maximum likelihood estimator based on the full distribution and the ease of specification and robustness of a GMM estimator based only on certain moments.
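To make the equivalence concrete, the sketch below (in Python, with simulated data; the normal example, sample size and starting values are illustrative assumptions rather than material from the text) compares a just-identified method of moments estimator of the mean and variance of a normal sample with the maximum likelihood estimator obtained by numerical optimisation. Because the moment conditions match the assumed distribution, the two sets of estimates coincide up to numerical tolerance.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch only: an i.i.d. normal sample in which the moment
# conditions used for estimation match the distribution assumed under
# maximum likelihood, so the two estimators coincide.
rng = np.random.default_rng(0)
y = rng.normal(loc=1.0, scale=2.0, size=2000)    # simulated data (assumption)

# Method of moments: solve the sample analogues of
#   E[y - mu] = 0  and  E[(y - mu)^2 - sigma^2] = 0.
mu_mm = y.mean()
sig2_mm = ((y - mu_mm) ** 2).mean()

# Maximum likelihood: maximise the normal log-likelihood numerically.
def neg_loglik(theta):
    mu, log_sig2 = theta
    sig2 = np.exp(log_sig2)                      # enforce a positive variance
    return 0.5 * np.sum(np.log(2.0 * np.pi * sig2) + (y - mu) ** 2 / sig2)

res = minimize(neg_loglik, x0=np.zeros(2))
mu_ml, sig2_ml = res.x[0], np.exp(res.x[1])

print(mu_mm, sig2_mm)   # method of moments estimates
print(mu_ml, sig2_ml)   # maximum likelihood estimates (numerically identical)
```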
GMM is often a natural estimation framework in economics and finance because the moment conditions of a model frequently correspond to the first-order conditions of a dynamic optimisation problem. Moreover, as theory tends to provide little or no guidance on the specification of the distribution, computing maximum likelihood estimators requires making potentially ad hoc assumptions about the underlying stochastic processes. This is not the case with GMM. On the other hand, GMM estimation requires the construction of a sufficient number of moment conditions by choosing instruments that may not be directly related to the theoretical model.
Maximum likelihood estimation is a general method for estimating the parameters of econometric models from observed data. The principle of maximum likelihood plays a central role in the exposition of this book, since a number of estimators used in econometrics can be derived within this framework. Examples include ordinary least squares, generalised least squares and full information maximum likelihood. In deriving the maximum likelihood estimator, a key concept is the joint probability density function (pdf) of the observed random variables, yt. Maximum likelihood estimation requires that the following conditions are satisfied.
(1) The form of the joint pdf of yt is known.
(2) The specifications of the moments of the joint pdf are known.
(3) The joint pdf can be evaluated for all values of the parameters, θ.
Parts ONE and TWO of this book deal with models in which all these conditions are satisfied. Part THREE investigates models in which these conditions are not satisfied and considers four important cases. First, if the distribution of yt is misspecified, resulting in both conditions (1) and (2) being violated, estimation is by quasi-maximum likelihood (Chapter 9). Second, if condition (1) is not satisfied, a generalised method of moments estimator (Chapter 10) is required. Third, if condition (2) is not satisfied, estimation relies on nonparametric methods (Chapter 11). Fourth, if condition (3) is violated, simulation-based estimation methods are used (Chapter 12).
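When conditions (1) to (3) all hold, the estimation problem has a compact statement. In standard notation (assumed here rather than quoted from the excerpt), the maximum likelihood estimator maximises the average log-likelihood of the observed sample:

```latex
\ln L_T(\theta) = \frac{1}{T} \sum_{t=1}^{T} \ln f(y_t; \theta),
\qquad
\widehat{\theta} = \arg\max_{\theta} \ln L_T(\theta),
```

where f(y_t; θ) is the pdf of y_t evaluated at the parameter vector θ; for dependent observations the terms are the appropriate conditional densities.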
In most of the models previously discussed, the dependent variable, yt, is assumed to be a continuous random variable. There are a number of situations where the continuity assumption is inappropriate and alternative classes of models must be specified to explain the time series features of discrete random variables. This chapter reviews the important class of discrete time series models commonly used in microeconometrics, namely the probit, ordered probit and Poisson regression models. It also discusses some recent advances in the modelling of discrete random variables, with particular emphasis on the binomial thinning model of Steutel and Van Harn (1979) and the Autoregressive Conditional Duration (ACD) model of Engle and Russell (1998), together with some of its extensions.
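As a small illustration of the kind of model treated in the chapter, the sketch below (Python; the single-regressor design, parameter values and simulated data are assumptions made purely for illustration) estimates a Poisson regression for a discrete count variable by maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal illustrative sketch of a Poisson regression: a discrete count y_t
# with conditional mean exp(b0 + b1*x_t), estimated by maximum likelihood.
rng = np.random.default_rng(0)
T = 500
x = rng.normal(size=T)
beta_true = np.array([0.5, 0.8])                 # illustrative values
lam = np.exp(beta_true[0] + beta_true[1] * x)    # conditional mean
y = rng.poisson(lam)                             # discrete dependent variable

def neg_loglik(beta):
    lam = np.exp(beta[0] + beta[1] * x)
    # Poisson log-likelihood, dropping the ln(y!) term, which is free of beta
    return -np.sum(y * np.log(lam) - lam)

res = minimize(neg_loglik, x0=np.zeros(2))
print(res.x)   # estimates close to beta_true in repeated samples
```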
Motivating Examples
Recent empirical research in financial econometrics has emphasised the importance of discrete random variables. In this setting, data on the number of trades and the durations between trades are recorded at very high frequencies. The examples that follow all highlight the need for econometric models that deal with discrete random variables by preserving the distributional characteristics of the data.
Example 21.1 Transactions Data on Trades
Table 21.1 provides a snapshot of transactions data on the United States stock AMR, the parent company of American Airlines, recorded every second on 1 August 2006. Three examples of discrete random variables can be obtained from the data in Table 21.1.
The maximum likelihood framework presented in Part ONE is now applied to estimating and testing a general class of dynamic models known as stationary time series models. Both univariate and multivariate models are discussed. The dynamics enter the model in one of two ways. The first is through lags of the variables, referred to as the autoregressive part, and the second is through lags of the disturbance term, referred to as the moving average part. In the case where the dynamics of a single variable are being modelled, these models are referred to as autoregressive moving average (ARMA) models. In the multivariate case, where the dynamics of multiple variables are modelled, these models are referred to as vector autoregressive moving average (VARMA) models. Jointly, these models are called stationary time series models, where stationarity refers to the types of dynamics allowed. The case of nonstationary dynamics is discussed in Part FIVE.
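In one common notation (assumed here rather than quoted from the text), a univariate ARMA(p, q) model combines the two sources of dynamics described above:

```latex
y_t = \phi_0 + \sum_{i=1}^{p} \phi_i \, y_{t-i} + u_t + \sum_{j=1}^{q} \psi_j \, u_{t-j},
```

where the φ_i coefficients on lagged y_t form the autoregressive part, the ψ_j coefficients on lagged disturbances form the moving average part, and u_t is a disturbance term. The VARMA case replaces y_t and u_t by vectors and the coefficients by matrices.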
The specification of dynamics through the inclusion of lagged variables and lagged disturbances is not new. It was discussed in Part TWO in the context of the linear regression model in Chapter 5 and more directly in Chapter 7, where autoregressive and moving average dynamics were specified in the context of the autocorrelated regression model. In fact, a one-to-one relationship exists between the VARMA class of models investigated in this chapter and the structural class of regression models of Part TWO, where the VARMA model is interpreted as the reduced form of a structural model.
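The reduced-form interpretation can be illustrated with the simplest case (notation assumed, not taken from the text): a structural model with one lag,

```latex
B_0 y_t = B_1 y_{t-1} + u_t
\quad\Longrightarrow\quad
y_t = B_0^{-1} B_1 \, y_{t-1} + B_0^{-1} u_t = A_1 y_{t-1} + v_t,
```

so that solving the structural form for y_t yields a first-order vector autoregression, the simplest member of the VARMA class.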