Suppose one has a set of data that arises from a specific distribution with an unknown parameter vector. A natural question to ask is the following: what value of this vector is most likely to have generated these data? The answer is provided by the maximum-likelihood estimator (MLE). Likelihood and related functions are the subject of this chapter. It will turn out that we have already seen some examples of MLEs in the previous chapters. Here, we define likelihood, the score vector, the Hessian matrix, the information-matrix equivalence, parameter identification, the Cramér–Rao lower bound and its extensions, and profile (concentrated) likelihood and its adjustments, as well as the properties of MLEs (including conditions for existence, consistency, and asymptotic normality) and of the score (including its martingale representation and local sufficiency). Applications are given, including some for the normal linear model.
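As a minimal numerical sketch of the idea (not an example from the chapter itself), the following Python snippet maximizes the log-likelihood of an i.i.d. normal sample; the simulated data and the use of scipy.optimize are assumptions made purely for illustration.

```python
# Minimal sketch: maximum-likelihood estimation for an i.i.d. N(mu, sigma^2) sample.
# The data and the use of scipy.optimize.minimize are illustrative assumptions,
# not the chapter's own examples.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=500)  # simulated sample

def neg_log_likelihood(theta, data):
    mu, log_sigma = theta          # parametrize sigma on the log scale to keep it positive
    sigma = np.exp(log_sigma)
    return -np.sum(-0.5 * np.log(2 * np.pi) - np.log(sigma)
                   - 0.5 * ((data - mu) / sigma) ** 2)

result = minimize(neg_log_likelihood, x0=np.array([0.0, 0.0]), args=(x,))
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
# For the normal model the MLEs have closed forms, which the numerical optimum
# should reproduce: mu_hat ~ x.mean() and sigma_hat ~ x.std(ddof=0).
```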
Abadir and Magnus (2002, Econometric Theory) proposed a standard for notation in econometrics. The consistent use of the proposed notation in our volumes shows that it is in fact practical. The notational conventions described here mainly apply to the material covered in this volume. Further notation will be introduced, as needed, as the Series develops.
There is a proliferation of methods of point estimation other than ML, for several reasons. First, MLEs may not have an explicit formula and may be computationally more demanding than the alternatives. Second, MLEs typically require the specification of a distribution. Third, optimization of criteria other than the likelihood may have some justification of its own. The first argument has become less relevant with the advent of fast computers, and the alternative estimators it motivates usually entail a loss of optimality properties. The second can be countered to some extent by large-sample invariance arguments or by the nonparametric MLE and empirical likelihood seen earlier. The third reason, however, can be more fundamental. This chapter presents a selection of four common methods of point estimation that address these reasons to varying degrees: method of moments, least squares, nonparametric (density and regression), and Bayesian estimation methods. Beyond these reasons for alternative estimators, point estimation itself may not be the most informative way to summarize what the data indicate about the parameters. The chapter therefore also introduces interval estimation and its multivariate generalization, a topic that leads quite naturally to the subject matter of Chapter 14.
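To make the contrast with ML concrete, here is a minimal method-of-moments sketch for a two-parameter gamma sample; the simulated data and parametrization are illustrative assumptions, not taken from the chapter.

```python
# Minimal sketch: method-of-moments estimation for a Gamma(k, theta) sample.
# The simulated data are an illustrative assumption.
import numpy as np

rng = np.random.default_rng(1)
x = rng.gamma(shape=3.0, scale=2.0, size=1000)

# Match the first two moments: E[X] = k*theta and Var[X] = k*theta^2.
m1 = x.mean()
v = x.var()              # population (ddof=0) variance as the moment estimate
theta_hat = v / m1       # theta = Var[X] / E[X]
k_hat = m1 ** 2 / v      # k = E[X]^2 / Var[X]
# No distributional optimization is needed: the estimators follow directly
# from equating sample moments to their population counterparts.
```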
This chapter concerns the measurement of the dependence between variates, by exploiting the additional information contained in joint (rather than just marginal) distribution and density functions. For this multivariate context, we also generalize the third description of randomness seen earlier, i.e., moments and their generating functions. Joint moments and their generating functions are introduced, along with covariances, variance matrices, the Cauchy–Schwarz inequality, and joint c.f.s and their inversion into joint densities. We show how the law of iterated expectations makes use of conditioning when taking expectations with respect to more than one variate. We measure dependence via conditional densities, distributions, moments, and cumulants.
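As a hedged illustration of the law of iterated expectations, E[Y] = E[E[Y | X]], the following Monte Carlo check uses a simple two-stage model chosen for this sketch; it is not one of the chapter's examples.

```python
# Minimal sketch: a Monte Carlo check of the law of iterated expectations,
# E[Y] = E[ E[Y | X] ], for an illustrative two-stage model (an assumption).
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
x = rng.binomial(n=1, p=0.3, size=n)     # X ~ Bernoulli(0.3)
y = rng.normal(loc=2.0 * x, scale=1.0)   # Y | X ~ N(2X, 1), so E[Y | X] = 2X

lhs = y.mean()                           # direct estimate of E[Y]
rhs = (2.0 * x).mean()                   # estimate of E[ E[Y | X] ]
# Both should be close to the true value 2 * 0.3 = 0.6, agreeing up to
# Monte Carlo error.
```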