Most researchers find the logic of Bayesian analysis compelling once a prior has been specified. It is at the stage where the prior is chosen that most frequentists can be found circling the wagons.
The pure subjectivist engaged in personal research needs only to elicit the prior that reflects his or her subjective prior beliefs. Usually the likelihood is parameterized to facilitate thinking in terms of θ, and so subject matter considerations should suggest plausible values of θ. Elicitation techniques are nicely surveyed by Garthwaite, Kadane, and O'Hagan (2005). These techniques tend to advocate thinking in terms of beliefs concerning future observables and backing out the implied beliefs regarding the hyperparameters of a conjugate prior for the unobserved parameters. Unlike most researchers, economists seem equally adept at thinking in terms of observables or unobservable parameters. Perhaps this is because the econometrician is both the statistician and the substantive field expert.
But why should prior beliefs conform to the conjugate prior form? One reason is that natural conjugate priors have an interpretation in terms of a prior fictitious sample from the same process that gives rise to the likelihood function. This corresponds to organizing prior beliefs by viewing the observable world through the same parametric window used for viewing the data.
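The fictitious-sample interpretation can be made concrete in the simplest conjugate setting. The following sketch (an illustrative example, not drawn from the text) uses the Beta–Bernoulli pair, where the Beta hyperparameters act exactly like pseudo-counts of prior successes and failures from the same Bernoulli process generating the data; the function name and all numerical settings are our own choices.

```python
import numpy as np

# Illustrative sketch: a Beta(a, b) prior for a Bernoulli success
# probability theta can be read as a "prior fictitious sample" of
# a successes and b failures from the same process as the data,
# so the conjugate update simply adds observed counts to pseudo-counts.
def posterior_beta(a_prior, b_prior, data):
    """Conjugate update: prior pseudo-counts plus observed counts."""
    successes = int(np.sum(data))
    failures = len(data) - successes
    return a_prior + successes, b_prior + failures

rng = np.random.default_rng(0)
data = rng.binomial(1, 0.7, size=50)        # 50 Bernoulli draws, true theta = 0.7
# Prior = 2 fictitious successes and 2 fictitious failures:
a_post, b_post = posterior_beta(2.0, 2.0, data)
print(a_post, b_post)                        # posterior is Beta(a_post, b_post)
print(a_post / (a_post + b_post))            # posterior mean of theta
```

The same logic underlies elicitation via observables: a researcher who can state how informative his or her beliefs are "in units of data" has implicitly chosen the conjugate hyperparameters.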
Public research, however, inevitably requires prior sensitivity analysis and the elicitation of a family of priors likely to interest a wide range of readers.
Bayesian econometrics has enjoyed increasing popularity in many fields. This popularity is evidenced by the recent publication of several textbooks at the advanced undergraduate and graduate levels, including those by Poirier (1995), Bauwens, Lubrano, and Richard (1999), Koop (2003), Lancaster (2004), and Geweke (2005). The purpose of the present volume is to provide a wide range of exercises and solutions suitable for students interested in Bayesian econometrics at the level of these textbooks.
The Bayesian researcher should know the basic ideas underlying Bayesian methodology (i.e., Bayesian theory) and the computational tools used in modern Bayesian econometrics (i.e., Bayesian computation). The Bayesian should also be able to put the theory and computational tools together in the context of substantive empirical problems. We have written this book with these three activities – theory, computation, and empirical modeling – in mind. We have tried to construct a wide range of exercises on all of these aspects. Loosely speaking, Chapters 1 through 9 focus on Bayesian theory, whereas Chapter 11 focuses primarily on recent developments in Bayesian computation. The remaining chapters focus on particular models (usually regression based). Inevitably, these chapters combine theory and computation in the context of particular models. Although we have tried to be reasonably complete in terms of covering the basic ideas of Bayesian theory and the computational tools most commonly used by the Bayesian, there is no way we can cover all the classes of models used in econometrics.
In this chapter we present various exercises involving nonstationary time series. In particular, we focus on the topics of unit roots and cointegration. In one sense, there is nothing new in the Bayesian treatment of nonstationary variables. For instance, unit root issues are often addressed in the context of AR (or ARMA) models, and cointegration issues are often addressed using restricted versions of VARs. In Chapter 17, we discussed AR and VAR models, and the methods derived there did not depend on the variables being stationary. However, with nonstationary data, some important contrasts exist between Bayesian and frequentist approaches, and some important issues of prior elicitation arise. Hence, we devote a separate chapter to nonstationary time series models.
Exercises 18.1 and 18.2 illustrate some of the differences between Bayesian and frequentist results with unit root variables. Exercise 18.1 derives the finite-sample posterior distribution for the AR(1) model with AR coefficient θ. For the reader with frequentist training in time series, we note that this posterior contrasts sharply with the sampling distribution of the maximum likelihood estimator. The latter is quite complicated, and its asymptotic distribution differs markedly depending on whether θ < 1, θ = 1, or θ > 1. Exercise 18.2 reproduces the enlightening Monte Carlo experiment of Sims and Uhlig (1991), which demonstrates the differences between posterior and sampling distributions in finite samples. These differences also persist asymptotically because the convergence of the asymptotic sampling distribution is not uniform [see Kwan (1998)].
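The flavor of the Sims–Uhlig comparison can be previewed with a small simulation. The sketch below (our own illustrative code; sample size, replication count, and seed are arbitrary choices, and it shows only the sampling-distribution side of the experiment) fixes θ at 1 and tabulates the OLS/ML estimator across repeated samples: its distribution piles up below 1, whereas the posterior for θ under a flat prior, conditional on any one realized sample, is approximately Gaussian and centered at the OLS estimate.

```python
import numpy as np

# Illustrative Monte Carlo in the spirit of Sims and Uhlig (1991):
# simulate many AR(1) samples with a unit root (theta = 1) and record
# the OLS/ML estimate of theta from each sample.
def ols_ar1(y):
    """OLS/ML estimate of theta in y_t = theta * y_{t-1} + eps_t."""
    ylag, ycur = y[:-1], y[1:]
    return ylag @ ycur / (ylag @ ylag)

rng = np.random.default_rng(1)
T, reps, theta_true = 100, 5000, 1.0   # arbitrary illustrative settings

est = np.empty(reps)
for r in range(reps):
    eps = rng.standard_normal(T + 1)
    y = np.empty(T + 1)
    y[0] = 0.0
    for t in range(1, T + 1):
        y[t] = theta_true * y[t - 1] + eps[t]
    est[r] = ols_ar1(y)

# Sampling distribution of the estimator: skewed, with most of its
# mass below the true value of 1 (the familiar downward bias under
# a unit root) -- quite unlike a Gaussian posterior centered at the
# point estimate.
print(np.mean(est < 1.0))   # well above one half
print(np.mean(est))         # below theta_true = 1
```

The full Sims–Uhlig experiment goes further, drawing θ and the data jointly and then slicing the joint distribution both ways, but even this one-sided sketch conveys why the sampling and posterior distributions look so different in finite samples.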