Suppose one has a set of data that arises from a specific distribution with unknown parameter vector. A natural question to ask is the following: what value of this vector is most likely to have generated these data? The answer to this question is provided by the maximum-likelihood estimator (MLE). Likelihood and related functions are the subject of this chapter. It will turn out that we have already seen some examples of MLEs in the previous chapters. Here, we define likelihood, the score vector, the Hessian matrix, the information-matrix equivalence, parameter identification, the Cramér–Rao lower bound and its extensions, profile (concentrated) likelihood and its adjustments, as well as the properties of MLEs (including conditions for existence, consistency, and asymptotic normality) and the score (including martingale representation and local sufficiency). Applications are given, including some for the normal linear model.
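The idea the chapter opens with can be illustrated concretely. The sketch below (an assumption for illustration, not from the chapter) draws a sample from a normal distribution and computes the maximum-likelihood estimates of its mean and variance, which in this model have closed forms: the sample mean and the average squared deviation (divisor n, not n-1).

```python
import numpy as np

# Illustrative data: a sample from a normal distribution with
# "unknown" mean mu = 2.0 and standard deviation sigma = 1.5
# (values chosen here purely for demonstration).
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=1000)

# MLEs for the normal model:
#   mu_hat     = sample mean
#   sigma2_hat = mean squared deviation (divisor n, the MLE form)
mu_hat = data.mean()
sigma2_hat = np.mean((data - mu_hat) ** 2)

def log_likelihood(mu, sigma2, x):
    """Gaussian log-likelihood of the sample x at parameters (mu, sigma2)."""
    n = len(x)
    return -0.5 * n * np.log(2 * np.pi * sigma2) - np.sum((x - mu) ** 2) / (2 * sigma2)

# By construction, the MLE maximizes the log-likelihood: perturbing the
# parameters cannot increase it.
assert log_likelihood(mu_hat, sigma2_hat, data) >= log_likelihood(mu_hat + 0.1, sigma2_hat, data)
```

With a sample of this size, the estimates land close to the generating values, previewing the consistency property discussed later in the chapter.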