In Chapters 33 and 34 we described three methods for approximating posterior distributions: the Laplace method, the Markov chain Monte Carlo (MCMC) method, and the expectation-propagation (EP) method. Given an observable $y$ and a latent variable $z$, the Laplace method approximates $f_{z|y}(z|y)$ by a Gaussian distribution and was seen to be suitable for problems with low-dimensional latent spaces because its implementation involves a matrix inversion. The Gaussian approximation, however, is not sufficient in many instances and can perform poorly. The MCMC method is more powerful, and also more popular; it relies on elegant sampling techniques and the Metropolis–Hastings algorithm. However, MCMC requires a large number of samples, does not perform well on complex models, and does not scale well to higher dimensions and large datasets. The EP method, on the other hand, restricts the class of distributions from which the posterior approximation is drawn to the Gaussian or exponential families, and can be analytically demanding. In this chapter, we develop a fourth powerful method for posterior approximation known as variational inference. One of its advantages is that it usually scales better to large datasets and high-dimensional problems.
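To make the goal concrete before the chapter develops it, the following is a minimal sketch of the standard variational formulation; the approximating family $\mathcal{Q}$, the candidate distribution $q(z)$, and the lower bound $\mathcal{L}(q)$ are conventional notation assumed here for illustration rather than defined in this passage.

```latex
% Sketch (assumed standard notation): the posterior f_{z|y}(z|y) is
% approximated by a member q(z) of a tractable family Q that minimizes
% the Kullback-Leibler divergence to the true posterior:
\[
  q^{\star}(z) \;=\; \arg\min_{q \in \mathcal{Q}}
    \, D_{\mathrm{KL}}\!\bigl( q(z) \,\|\, f_{z|y}(z|y) \bigr)
\]
% Since ln f_y(y) = L(q) + D_KL(q || f_{z|y}), this is equivalent to
% maximizing the evidence lower bound (ELBO):
\[
  \mathcal{L}(q) \;=\;
    \mathbb{E}_{q}\!\bigl[ \ln f_{y,z}(y,z) \bigr]
    \;-\; \mathbb{E}_{q}\!\bigl[ \ln q(z) \bigr]
    \;\le\; \ln f_{y}(y)
\]
```

Because the optimization is over a restricted family $\mathcal{Q}$ rather than over samples, this formulation lends itself to stochastic gradient methods, which is one reason variational inference tends to scale to large datasets.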