Book contents
- Frontmatter
- Contents
- Preface
- 1 Introduction
- Part I Stochastic Models and Bayesian Filtering
- Part II Partially Observed Markov Decision Processes: Models and Applications
- Part III Partially Observed Markov Decision Processes: Structural Results
- Part IV Stochastic Approximation and Reinforcement Learning
- 15 Stochastic optimization and gradient estimation
- 16 Reinforcement learning
- 17 Stochastic approximation algorithms: examples
- 18 Summary of algorithms for solving POMDPs
- Appendix A Short primer on stochastic simulation
- Appendix B Continuous-time HMM filters
- Appendix C Markov processes
- Appendix D Some limit theorems
- References
- Index
17 - Stochastic approximation algorithms: examples
from Part IV - Stochastic Approximation and Reinforcement Learning
Published online by Cambridge University Press: 05 April 2016
Summary
This final chapter presents four case studies of stochastic approximation algorithms for state/parameter estimation and modeling in the context of POMDPs.
Example 1 discusses online estimation of the parameters of an HMM using the recursive maximum likelihood estimation algorithm. The motivation stems from classical adaptive control: the parameter estimation algorithm can be used to estimate the parameters of the POMDP for a fixed policy; then the policy can be updated using dynamic programming (or approximation) based on the parameters and so on.
Example 2 shows that, for an HMM whose underlying Markov chain evolves slowly, the least mean squares (LMS) algorithm can provide satisfactory state estimates of the Markov chain without any knowledge of the underlying parameters. In the context of POMDPs, once the state estimates are known, a variety of suboptimal algorithms can be used to synthesize a reasonable policy.
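As a rough illustration of Example 2 (a minimal sketch, not the book's algorithm verbatim: the two-state chain, noise level, and step size below are all illustrative choices), a constant-step-size LMS recursion can track a slowly jumping Markov chain observed in noise, with no knowledge of the transition probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a slow two-state Markov chain observed in Gaussian noise.
# "Slow" here means jumps are rare relative to the LMS time constant 1/eps.
n = 4000
levels = np.array([0.0, 1.0])     # the two state values (illustrative)
state = 0
path = np.empty(n)                # true (hidden) state sequence
obs = np.empty(n)                 # noisy observations
for k in range(n):
    if rng.random() < 0.001:      # rare jump: the chain is slow
        state = 1 - state
    path[k] = levels[state]
    obs[k] = levels[state] + 0.3 * rng.standard_normal()

# LMS recursion: move the estimate a small step toward each observation.
eps = 0.05                        # small fixed step size
theta = 0.0
est = np.empty(n)
for k in range(n):
    theta += eps * (obs[k] - theta)
    est[k] = theta

# Away from jump times, the estimate hovers near the true level.
err = np.mean(np.abs(est[200:] - path[200:]))
```

The estimate lags briefly after each jump (the transient decays with time constant 1/eps), but between jumps it settles near the correct level without ever using the transition matrix.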
Example 3 shows how discrete stochastic optimization problems can be solved via stochastic approximation algorithms. In controlled sensing, such algorithms can be used to compute the optimal sensing strategy from a finite set of policies.
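In the spirit of Example 3, here is a hedged sketch of one classical random-search scheme for discrete stochastic optimization (the candidate costs, Gaussian noise, and acceptance rule below are illustrative assumptions, not the specific algorithm of the chapter): the algorithm only sees noisy cost samples, and the most frequently occupied candidate converges to the minimizer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite set of candidate "policies" with unknown expected costs;
# the algorithm can only draw noisy cost samples.
true_cost = np.array([1.0, 0.4, 0.7, 0.9])   # unknown to the algorithm


def noisy_cost(i):
    """One noisy sample of candidate i's cost."""
    return true_cost[i] + rng.standard_normal()


n_cand = len(true_cost)
visits = np.zeros(n_cand)      # occupation counts (the SA-like statistic)
current = 0
for k in range(20000):
    cand = int(rng.integers(n_cand))         # propose a random alternative
    # Accept the proposal if a single noisy comparison favours it.
    if noisy_cost(cand) < noisy_cost(current):
        current = cand
    visits[current] += 1

# Estimate of the optimizer: the most frequently occupied candidate.
best = int(np.argmax(visits))
```

The point of tracking occupation counts rather than the current iterate is that the chain keeps wandering forever, but it spends the largest fraction of its time at the global minimizer.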
Example 4 shows how large-scale Markov chains can be approximated by a system of ordinary differential equations. This mean field analysis is illustrated in the context of information diffusion in a social network. As a result, a tractable model can be obtained for state estimation via Bayesian filtering.
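The flavor of Example 4's mean-field analysis can be conveyed with a small sketch (an SIS-type infection model with illustrative rates, not the chapter's social-network model): the empirical fraction of "infected" nodes in a large population is a Markov chain whose sample path stays close to the solution of an ordinary differential equation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Large population; x = fraction of infected nodes.
# Mean-field ODE: dx/dt = beta*x*(1-x) - delta*x  (rates are illustrative).
N = 10_000
beta, delta, dt = 0.8, 0.3, 0.01

x_chain = 0.1   # stochastic finite-population dynamics
x_ode = 0.1     # deterministic mean-field approximation
for _ in range(2000):
    infected = int(round(x_chain * N))
    # Stochastic update: binomial numbers of new infections and recoveries.
    new_inf = rng.binomial(N - infected, beta * x_chain * dt)
    new_rec = rng.binomial(infected, delta * dt)
    x_chain += (new_inf - new_rec) / N
    # Deterministic update: one Euler step of the mean-field ODE.
    x_ode += dt * (beta * x_ode * (1 - x_ode) - delta * x_ode)

# For large N the two trajectories are close (fluctuations are O(1/sqrt(N))).
gap = abs(x_chain - x_ode)
```

This is exactly what makes the ODE useful as a tractable model: a Bayesian filter can be designed against the low-dimensional ODE dynamics instead of the full 2^N-state Markov chain.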
We also show how consensus stochastic approximation algorithms can be analyzed using standard stochastic approximation methods.
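A consensus stochastic approximation step combines neighborhood averaging with a local noisy update. The following minimal sketch (the weight matrix, node targets, and noise level are all illustrative assumptions) shows three nodes whose estimates both agree with each other and converge to the average of their local targets:

```python
import numpy as np

rng = np.random.default_rng(4)

# Doubly stochastic consensus weight matrix (illustrative, fully connected).
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
targets = np.array([1.0, 2.0, 3.0])   # node-local means; consensus value = 2.0

theta = np.zeros(3)
eps = 0.01                            # small fixed step size
for k in range(20000):
    noisy = targets + 0.5 * rng.standard_normal(3)   # noisy local measurements
    # Consensus step (average with neighbours) + stochastic approximation step.
    theta = W @ theta + eps * (noisy - theta)

spread = theta.max() - theta.min()    # disagreement between nodes
err = abs(theta.mean() - 2.0)         # distance to the network-wide average
```

Because W is doubly stochastic, the averaging step is invisible along the consensus direction, so the analysis reduces to a standard single-agent stochastic approximation recursion for the mean estimate, which is the point made in the text.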
A primer on stochastic approximation algorithms
This section presents a rapid summary of the convergence analysis of stochastic approximation algorithms. Such analysis is a highly technical area; the books [48, 305, 200] are seminal works that study convergence under general conditions. Our objective here is much more modest: we merely point out the final outcome of the analysis and then illustrate how it can be applied to the four case studies relating to POMDPs.
Consider a constant-step-size stochastic approximation algorithm of the form

θk+1 = θk + ε H(θk, xk),  k = 0, 1, …

where {θk} is the sequence of parameter estimates generated by the algorithm, ε is a small fixed positive step size, and {xk} is a discrete-time geometrically ergodic Markov process (continuous- or discrete-valued) with transition kernel P(θk) and stationary distribution πθk.
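The recursion above can be made concrete with a small sketch (illustrative only: here H(θ, x) = x − θ and {xk} is a two-state Markov chain with a fixed transition matrix, so P does not actually depend on θ). The iterate θk then hovers near the mean of the chain's stationary distribution:

```python
import numpy as np

rng = np.random.default_rng(3)

# Geometrically ergodic two-state Markov chain (transition matrix illustrative).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
values = np.array([0.0, 1.0])
# Stationary distribution solves pi = pi P, giving pi = (2/3, 1/3),
# so the stationary mean of values is 1/3.

eps = 0.005           # small fixed step size
theta = 0.0
state = 0
for k in range(50000):
    state = int(rng.choice(2, p=P[state]))        # Markovian noise x_k
    theta += eps * (values[state] - theta)        # theta_{k+1} = theta_k + eps*H

err = abs(theta - 1/3)
```

This is the typical conclusion of the ODE-style analysis: for small ε, the iterates track the averaged dynamics driven by the stationary expectation of H, here the stationary mean 1/3.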
Partially Observed Markov Decision Processes: From Filtering to Controlled Sensing, pp. 380–424
Publisher: Cambridge University Press
Print publication year: 2016