Chapter 3: Posterior Distributions and Inference

pp. 21-42

Authors

, Washington University, St Louis

Summary

The first section of this chapter discusses general properties of posterior distributions. It continues with an explanation of how a Bayesian statistician uses the posterior distribution to conduct statistical inference, which consists of learning about parameter values in the form of point or interval estimates, making predictions, and comparing alternative models.
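As a concrete illustration of these three uses of the posterior (my own sketch, not part of the chapter text), the snippet below assumes we already have a vector of draws from some posterior distribution of a mean parameter and computes a point estimate (the posterior mean), an interval estimate (an equal-tailed 95% credible interval), and draws from the posterior predictive distribution. The Normal(1.2, 0.3²) stand-in posterior and the unit predictive variance are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: pretend these are draws of a mean parameter mu from
# some posterior distribution (a Normal(1.2, 0.3^2) stands in for it here).
mu_draws = rng.normal(1.2, 0.3, size=10_000)

# Point estimate: the posterior mean of mu.
point_estimate = mu_draws.mean()

# Interval estimate: an equal-tailed 95% credible interval for mu.
ci_lower, ci_upper = np.quantile(mu_draws, [0.025, 0.975])

# Prediction: draw a future observation y_new ~ Normal(mu, 1) for each
# posterior draw of mu, giving draws from the posterior predictive distribution.
y_new = rng.normal(mu_draws, 1.0)

print(f"posterior mean: {point_estimate:.3f}")
print(f"95% credible interval: ({ci_lower:.3f}, {ci_upper:.3f})")
print(f"predictive mean of y_new: {y_new.mean():.3f}")
```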

Properties of Posterior Distributions

This section discusses general properties of posterior distributions, starting with the likelihood function. It continues by generalizing the concept to include models with more than one parameter and goes on to discuss the revision of posterior distributions as more data become available, the role of the sample size, and the concept of identification.

The Likelihood Function

As you have seen, the posterior distribution is proportional to the product of the likelihood function and the prior distribution. The latter is somewhat controversial and is discussed in Chapter 4, but the choice of a likelihood function is also an important matter and requires discussion. A central issue is that the Bayesian must specify an explicit likelihood function to derive the posterior distribution. In some cases, the choice of a likelihood function appears straightforward. In the coin-tossing experiment of Section 2.2, for example, the choice of a Bernoulli distribution seems natural, but it does require the assumptions of independent trials and a constant probability. These assumptions might be considered prior information, but they are conventionally a part of the likelihood function rather than of the prior distribution.
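To make the role of the likelihood concrete, here is a minimal sketch of the coin-tossing setup described above: a Bernoulli likelihood for independent tosses with constant success probability, combined with a Beta prior. The Beta(1, 1) prior, the simulated data, and the use of conjugacy to obtain the posterior in closed form are my own illustrative choices, not taken from the chapter.

```python
import numpy as np
from scipy import stats

# Observed coin tosses (1 = heads). The assumptions of independent trials and
# a constant probability theta are built into the Bernoulli likelihood.
y = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])
successes = int(y.sum())
failures = len(y) - successes

# Prior: Beta(1, 1), i.e. uniform on [0, 1] (an illustrative choice).
a_prior, b_prior = 1.0, 1.0

# Because the Beta prior is conjugate to the Bernoulli likelihood, the posterior
# is proportional to likelihood times prior and is Beta(a_prior + successes,
# b_prior + failures).
posterior = stats.beta(a_prior + successes, b_prior + failures)

ci_lower, ci_upper = posterior.interval(0.95)
print(f"posterior mean of theta: {posterior.mean():.3f}")
print(f"95% credible interval: ({ci_lower:.3f}, {ci_upper:.3f})")
```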
