The inference of a random variable $x$ from observations $\{y_1, y_2, \ldots, y_N\}$ requires that we evaluate the posterior distribution $f_{x|y_{1:N}}(x|y_1,\ldots,y_N)$, as happens, for example, in inference formulations based on mean-square-error (MSE), maximum a-posteriori (MAP), or probability of error metrics. In previous chapters, we described several techniques to facilitate the computation or approximation of such posterior distributions using Monte Carlo or variational inference methods. We will encounter other types of approximations in later chapters. For example, in the context of naïve Bayes classifiers in Chapter 55, we will assume that, conditioned on the latent variable $x$, the observations are independent of each other in order to write
$$ f_{y_{1:N}|x}(y_1,\ldots,y_N|x) \;=\; \prod_{n=1}^{N} f_{y_n|x}(y_n|x) $$
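To see how this factorization simplifies posterior evaluation, the following minimal sketch combines it with Bayes' rule for a discrete latent variable $x$; the Gaussian per-observation likelihood, the prior, and all numerical values are illustrative assumptions, not quantities from the text.

```python
import numpy as np

def naive_bayes_posterior(y, prior, means, sigma=1.0):
    """Posterior f_{x|y_{1:N}}(x | y_1, ..., y_N) over a discrete x.

    Under conditional independence, the joint likelihood factorizes as
        f(y_1, ..., y_N | x) = prod_n f(y_n | x),
    so Bayes' rule reduces to a product of per-observation terms.
    Assumes Gaussian likelihoods with a common sigma, whose
    normalization constant cancels when the posterior is normalized.
    """
    # Per-observation Gaussian log-likelihoods: one row per value of x.
    log_lik = -0.5 * ((y[None, :] - means[:, None]) / sigma) ** 2
    # Log of the product over n (a sum of logs), plus the log-prior.
    log_post = np.log(prior) + log_lik.sum(axis=1)
    log_post -= log_post.max()  # subtract max for numerical stability
    post = np.exp(log_post)
    return post / post.sum()    # normalize to a valid distribution

# Hypothetical example: binary x with conditional means -1 and +1,
# uniform prior, and five noisy observations near +1.
prior = np.array([0.5, 0.5])
means = np.array([-1.0, 1.0])
y = np.array([0.9, 1.2, 0.7, 1.1, 0.8])
print(naive_bayes_posterior(y, prior, means))  # mass concentrates on x = +1
```

Working in the log domain turns the product over $N$ observations into a sum, which avoids the numerical underflow that a direct product of small likelihood values would cause for large $N$.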