The mean-square-error (MSE) criterion (27.17) is one notable example of the Bayesian approach to statistical inference. In the Bayesian approach, both the unknown quantity, x, and the observation, y, are treated as random variables, and an estimator x̂ for x is sought by minimizing the expected value of some loss function denoted by Q(x, x̂). In the previous chapter, we focused exclusively on the quadratic loss Q(x, x̂) = (x − x̂)², for scalar x. In this chapter, we consider more general loss functions, which will lead to other types of inference solutions such as the mean-absolute-error (MAE) and the maximum a-posteriori (MAP) estimators. We will also derive the famed Bayes classifier as a special case when the realizations for x are limited to the discrete values x ∈ {±1}.
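The three loss functions mentioned above lead to three different summaries of the posterior distribution of x given y: the quadratic loss is minimized by the posterior mean, the absolute loss by the posterior median, and the MAP estimator selects the posterior mode. The following sketch illustrates this for a hypothetical discrete posterior (the candidate values and probabilities below are illustrative assumptions, not from the text):

```python
import numpy as np

# Hypothetical discrete posterior p(x|y) over candidate values of x.
# These numbers are illustrative only.
x_values = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
posterior = np.array([0.05, 0.20, 0.40, 0.25, 0.10])  # sums to 1

# MSE estimator: posterior mean (minimizes expected quadratic loss).
x_mse = np.sum(x_values * posterior)

# MAE estimator: posterior median (minimizes expected absolute loss);
# found here as the first value where the CDF reaches 0.5.
cdf = np.cumsum(posterior)
x_mae = x_values[np.searchsorted(cdf, 0.5)]

# MAP estimator: posterior mode (maximizes p(x|y)).
x_map = x_values[np.argmax(posterior)]

print(x_mse, x_mae, x_map)
```

For this particular posterior the three estimators disagree: the mean is pulled toward the right tail while the median and mode coincide at the most probable value, which is exactly why the choice of loss function matters.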