THE FIRST SECTION of this chapter discusses general properties of posterior distributions. It continues with an explanation of how a Bayesian statistician uses the posterior distribution to conduct statistical inference, which consists of learning about parameter values in the form of either point or interval estimates, making predictions, and comparing alternative models.
Properties of Posterior Distributions
This section discusses general properties of posterior distributions, starting with the likelihood function. It continues by generalizing the concept to include models with more than one parameter and goes on to discuss the revision of posterior distributions as more data become available, the role of the sample size, and the concept of identification.
The Likelihood Function
As you have seen, the posterior distribution is proportional to the product of the likelihood function and the prior distribution. The latter is somewhat controversial and is discussed in Chapter 4, but the choice of a likelihood function is also an important matter and requires discussion. A central issue is that the Bayesian must specify an explicit likelihood function to derive the posterior distribution. In some cases, the choice of a likelihood function appears straightforward. In the coin-tossing experiment of Section 2.2, for example, the choice of a Bernoulli distribution seems natural, but it does require the assumptions of independent trials and a constant probability. These assumptions might be considered prior information, but they are conventionally a part of the likelihood function rather than of the prior distribution.
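To make these assumptions concrete, consider a minimal sketch of the coin-tossing setup (the Beta prior used here is purely illustrative and is not necessarily the one adopted in Section 2.2). Under independent trials with constant success probability $\theta$, $n$ tosses with outcomes $y_1, \dots, y_n \in \{0, 1\}$ yield the likelihood
$$
L(\theta) = \prod_{i=1}^{n} \theta^{y_i} (1 - \theta)^{1 - y_i} = \theta^{y} (1 - \theta)^{n - y}, \qquad y = \sum_{i=1}^{n} y_i.
$$
If, for illustration, the prior is $\mathrm{Beta}(\alpha, \beta)$, the posterior is proportional to
$$
\pi(\theta \mid y) \propto \theta^{y} (1 - \theta)^{n - y} \, \theta^{\alpha - 1} (1 - \theta)^{\beta - 1} = \theta^{\alpha + y - 1} (1 - \theta)^{\beta + n - y - 1},
$$
which is the kernel of a $\mathrm{Beta}(\alpha + y, \beta + n - y)$ distribution. Note that if the trials were not independent or the probability not constant, the likelihood would not factor this way, which is why those assumptions are part of the likelihood specification rather than the prior.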