
5 - Two problems with variational expectation maximisation for time series models

from II - Deterministic approximations

Published online by Cambridge University Press: 07 September 2011

Richard Eric Turner, Neuroscience Unit, London
Maneesh Sahani, Neuroscience Unit, London

Edited by David Barber (University College London), A. Taylan Cemgil (Boğaziçi Üniversitesi, Istanbul) and Silvia Chiappa (University of Cambridge)

Summary

Introduction

Variational methods are a key component of the approximate inference and learning toolbox. These methods fill an important middle ground, retaining distributional information about uncertainty in latent variables, unlike maximum a posteriori methods, and yet generally requiring less computational time than Markov chain Monte Carlo methods. In particular the variational expectation maximisation (vEM) and variational Bayes algorithms, both involving variational optimisation of a free-energy, are widely used in time series modelling. Here, we investigate the success of vEM in simple probabilistic time series models. First we consider the inference step of vEM, and show that a consequence of the well-known compactness property of variational inference is a failure to propagate uncertainty in time, thus limiting the usefulness of the retained distributional information. In particular, the uncertainty may appear to be smallest precisely when the approximation is poorest. Second, we consider parameter learning and analytically reveal systematic biases in the parameters found by vEM. Surprisingly, simpler variational approximations (such as mean-field) can lead to less bias than more complicated structured approximations.
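
For concreteness, the free-energy referred to above can be written down explicitly; the following display is the standard definition rather than part of the original summary. With observations $\mathbf{y}$, latent variables $\mathbf{x}$, parameters $\theta$ and an approximating distribution $q(\mathbf{x})$,

\[
\mathcal{F}(q,\theta) \;=\; \int q(\mathbf{x}) \log \frac{p(\mathbf{y},\mathbf{x}\,|\,\theta)}{q(\mathbf{x})}\,\mathrm{d}\mathbf{x}
\;=\; \log p(\mathbf{y}\,|\,\theta) \;-\; \mathrm{KL}\big[\,q(\mathbf{x})\,\big\|\,p(\mathbf{x}\,|\,\mathbf{y},\theta)\,\big]
\;\le\; \log p(\mathbf{y}\,|\,\theta).
\]

Maximising $\mathcal{F}$ over $q$ therefore minimises $\mathrm{KL}[q\,\|\,p]$, and because this direction of the KL divergence strongly penalises $q$ for placing mass where the posterior has little, the optimised $q$ tends to be over-concentrated; this is the compactness property invoked in the summary.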

The variational approach

We begin this chapter with a brief theoretical review of the variational expectation maximisation algorithm, before illustrating the important concepts with a simple example in the next section. The vEM algorithm is an approximate version of the expectation maximisation (EM) algorithm [4]. Expectation maximisation is a standard approach to finding maximum likelihood (ML) parameters for latent variable models, including hidden Markov models and linear or non-linear state space models (SSMs) for time series.
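
As a brief sketch of this connection (stated in the notation above rather than taken verbatim from the chapter), EM can be viewed as coordinate ascent on the free energy $\mathcal{F}(q,\theta)$ [5, 11]:

\[
\text{E-step:}\quad q^{(k+1)} \;=\; \operatorname*{arg\,max}_{q \in \mathcal{Q}} \mathcal{F}\big(q, \theta^{(k)}\big),
\qquad
\text{M-step:}\quad \theta^{(k+1)} \;=\; \operatorname*{arg\,max}_{\theta} \mathcal{F}\big(q^{(k+1)}, \theta\big).
\]

In exact EM the family $\mathcal{Q}$ is unrestricted, so the E-step recovers the true posterior $p(\mathbf{x}\,|\,\mathbf{y},\theta^{(k)})$ and the bound is tight at each iteration; vEM instead restricts $\mathcal{Q}$ to a tractable family (for example, a distribution that factorises over time in an SSM), and it is this restriction that produces the inference and learning failures examined in the remainder of the chapter.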

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2011

References

[1] M. J. Beal. Variational Algorithms for Approximate Bayesian Inference. PhD thesis, University College London, May 1998.
[2] C. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, March 2004.
[4] A. P. Dempster, N. M. Laird and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38, 1977.
[5] R. Hathaway. Another interpretation of the EM algorithm for mixture distributions. Statistics and Probability Letters, 4:53–56, 1986.
[6] T. Jaakkola and M. Jordan. Bayesian parameter estimation via variational methods. Statistics and Computing, 10(1):25–37, January 2000.
[7] T. S. Jaakkola and M. I. Jordan. Improving the mean field approximation via the use of mixture distributions. In Learning in Graphical Models, pages 163–173. MIT Press, 1999.
[8] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola and L. K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[9] D. J. C. MacKay. A problem with variational free energy minimization, 2001.
[10] D. J. C. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003. Available from www.inference.phy.cam.ac.uk/mackay/itila/.
[11] R. Neal and G. E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in Graphical Models, pages 355–368. Kluwer Academic Publishers, 1998.
[12] R. E. Turner, P. Berkes, M. Sahani and D. J. C. MacKay. Counter-examples to variational free-energy compactness folk theorems. Technical report, University College London, 2008.
[13] B. Wang and D. M. Titterington. Lack of consistency of mean field and variational Bayes approximations for state space models. Neural Processing Letters, 20(3):151–170, 2004.
[14] J. Winn and T. Minka. Expectation propagation and variational message passing: a comparison with infer.net. In Neural Information Processing Systems Workshop: Inference in Continuous and Hybrid Models, 2007.
