Computing Optimal Policies for Markovian Decision Processes Using Simulation
Published online by Cambridge University Press: 27 July 2009
Abstract
A simulation method is developed for computing average reward optimal policies for a finite state and action Markovian decision process. The method is shown to be consistent; i.e., it produces solutions arbitrarily close to optimal. Various types of estimation errors and confidence bounds are examined. Finally, it is shown that the probability distribution of the number of simulation cycles required to compute an ε-optimal policy satisfies a large deviations property.
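The abstract refers to simulation cycles for estimating average rewards. As a hedged illustration of the general idea (not the paper's actual algorithm), the sketch below estimates the long-run average reward of a stationary policy in a toy finite MDP by regenerative simulation, where each return to a fixed start state closes a cycle; the transition probabilities, rewards, and state/action sets are made up for this example.

```python
import random

# Hypothetical 2-state, 2-action MDP for illustration only:
# P[s][a] = transition probabilities out of state s under action a,
# r[s][a] = one-step reward for taking action a in state s.
P = {0: {0: [0.9, 0.1], 1: [0.4, 0.6]},
     1: {0: [0.3, 0.7], 1: [0.8, 0.2]}}
r = {0: {0: 1.0, 1: 0.5},
     1: {0: 0.0, 1: 2.0}}

def average_reward(policy, n_cycles=20000, start=0, seed=1):
    """Estimate the long-run average reward of a stationary policy by
    regenerative simulation: each return to `start` ends a cycle, and the
    estimator is (total reward over cycles) / (total length of cycles)."""
    rng = random.Random(seed)
    total_reward, total_length = 0.0, 0
    s = start
    for _ in range(n_cycles):
        cycle_reward, cycle_length = 0.0, 0
        while True:
            a = policy[s]
            cycle_reward += r[s][a]
            cycle_length += 1
            s = rng.choices([0, 1], weights=P[s][a])[0]
            if s == start:
                break
        total_reward += cycle_reward
        total_length += cycle_length
    return total_reward / total_length

# Enumerate all four stationary deterministic policies and keep the one
# with the highest estimated average reward.
best = max(({0: a0, 1: a1} for a0 in (0, 1) for a1 in (0, 1)),
           key=average_reward)
```

With enough cycles the estimate concentrates around the true average reward of each policy, so the comparison above selects an (approximately) optimal policy; the paper's contribution is to quantify exactly this kind of estimation error, give confidence bounds, and characterize the number of cycles needed for an ε-optimal choice.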
- Type: Research Article
- Information: Probability in the Engineering and Informational Sciences, Volume 9, Issue 4, October 1995, pp. 525–537
- Copyright: © Cambridge University Press 1995