Parameter estimation is generally difficult, often requiring advanced methods such as expectation-maximization (EM). This chapter focuses on the ideas behind EM rather than its intricate mathematical properties or proofs. We use the Gaussian mixture model (GMM) as an illustrative example to motivate the concepts that lead to the EM algorithm, e.g., complete- and incomplete-data likelihoods, concave and nonconcave loss functions, and observed and hidden variables. We then derive the EM algorithm in general and apply it to the GMM.
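As a brief preview of the chapter's topic, the following is a minimal sketch of EM for a one-dimensional GMM, assuming a standard alternation of E-steps (posterior responsibilities for the hidden component labels) and M-steps (re-estimating mixture weights, means, and variances); the function names and the synthetic data are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def em_gmm_1d(x, k=2, n_iter=100, seed=0):
    """Illustrative EM for a 1-D Gaussian mixture (not the book's reference code)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    # Initialize mixture weights, means, and variances.
    pi = np.full(k, 1.0 / k)
    mu = rng.choice(x, size=k, replace=False)
    var = np.full(k, np.var(x))
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        # (the expectation over the hidden component labels).
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: maximize the expected complete-data log-likelihood.
        nk = resp.sum(axis=0)
        pi = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Usage: fit a 2-component mixture to synthetic data.
x = np.concatenate([np.random.normal(-2, 1.0, 300), np.random.normal(3, 0.5, 200)])
pi, mu, var = em_gmm_1d(x, k=2)
print(pi, mu, var)
```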