Book contents
- Frontmatter
- Contents
- Preface
- Guide to Notation and Terminology
- 1 Brownian Motion
- 2 Stochastic Storage Models
- 3 Further Analysis of Brownian Motion
- 4 Stochastic Calculus
- 5 Optimal Stopping of Brownian Motion
- 6 Reflected Brownian Motion
- 7 Optimal Control of Brownian Motion
- 8 Brownian Models of Dynamic Inference
- 9 Further Examples
- Appendix A Stochastic Processes
- Appendix B Real Analysis
- References
- Index
8 - Brownian Models of Dynamic Inference
Published online by Cambridge University Press: 05 December 2013
Summary
A commonly used model of “sequential learning” is the following. A decision maker is initially uncertain about a parameter θ, and in each period t = 1, 2, … he or she observes a random variable Xt = θ + εt, where {εt} is a sequence of independent “noise” terms, each distributed N(0, 1). Hereafter the random variable Xt will be referred to as a sample, and the process of observing successive Xt values will be called sampling. The strong law of large numbers says that (X1 + … + Xt)/t → θ almost surely as t → ∞, so the value of θ will eventually be revealed, but estimates of θ based on limited sampling are often important. For example, the decision maker may be considering an investment whose expected return depends on θ, and delaying the decision may have negative consequences because of discounting, or because there are direct costs associated with continued sampling. In such problems one is led to ask after each sample whether current knowledge of θ is “good enough” to justify an immediate acceptance or rejection of the investment opportunity, as opposed to continued sampling.
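The discrete-time model above can be sketched in a short simulation. The snippet below draws samples Xt = θ + εt, tracks the sample mean (which converges to θ by the strong law of large numbers), and applies the standard conjugate-normal Bayesian update; the prior parameters m0, v0 and the true value of θ are illustrative assumptions, not values from the text.

```python
import random
import statistics

random.seed(7)

theta = 1.5          # true (unknown to the decision maker) parameter
m0, v0 = 0.0, 4.0    # assumed N(m0, v0) prior for theta

n = 10_000
# X_t = theta + eps_t, with eps_t ~ N(0, 1) i.i.d.
samples = [theta + random.gauss(0.0, 1.0) for _ in range(n)]

sample_mean = statistics.fmean(samples)

# Conjugate normal update with unit noise variance: after n samples,
#   posterior variance  v_n = 1 / (1/v0 + n)
#   posterior mean      m_n = v_n * (m0/v0 + X_1 + ... + X_n)
v_n = 1.0 / (1.0 / v0 + n)
m_n = v_n * (m0 / v0 + sum(samples))

print(sample_mean, m_n, v_n)
```

With 10,000 samples the posterior mean is essentially the sample mean, and the posterior variance has shrunk to roughly 1/n, reflecting how quickly limited sampling pins down θ in this model.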
In the first five sections of this chapter we consider a continuous-time version of the model described above, in which the decision maker observes Yt = θt + Wt, t ≥ 0, where W is a standard Brownian motion. We adopt a Bayesian framework, so the decision maker's initial information is expressed in the form of a prior distribution for θ, and the dynamic inference problem is to determine the posterior distribution of θ given {Ys, 0 ≤ s ≤ t}.
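As a rough illustration of the continuous-time setting, the sketch below simulates Yt = θt + Wt on a time grid and evaluates the posterior under a conjugate normal prior. With a N(m0, v0) prior for θ, the standard filtering result for this observation model gives a normal posterior at time t with variance 1/(1/v0 + t) and mean depending on the path only through Yt; the parameter values and grid size here are assumptions for the demo, not from the text.

```python
import random

random.seed(11)

theta = 0.8          # true drift parameter (unknown to the observer)
m0, v0 = 0.0, 1.0    # assumed N(m0, v0) prior for theta

T, n_steps = 50.0, 50_000
dt = T / n_steps

# Euler simulation of Y_t = theta * t + W_t:
# each increment is dY = theta*dt + dW, with dW ~ N(0, dt).
Y = 0.0
for _ in range(n_steps):
    Y += theta * dt + random.gauss(0.0, dt ** 0.5)

# Conjugate posterior at time T (normal prior, Brownian observation noise):
#   var_T  = 1 / (1/v0 + T)
#   mean_T = var_T * (m0/v0 + Y_T)
post_var = 1.0 / (1.0 / v0 + T)
post_mean = post_var * (m0 / v0 + Y)

print(post_mean, post_var)
```

The fact that the posterior depends on the observed path only through the terminal value Yt is the continuous-time analogue of the sample sum being a sufficient statistic in the discrete model.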
- Type: Chapter
- Information: Brownian Models of Performance and Control, pp. 137–154
- Publisher: Cambridge University Press
- Print publication year: 2013