Computing Optimal Policies for Markovian Decision Processes Using Simulation

Published online by Cambridge University Press:  27 July 2009

Apostolos N. Burnetas
Affiliation:
Weatherhead School of Management, Case Western Reserve University, Cleveland, Ohio 44106
Michael N. Katehakis
Affiliation:
Graduate School of Management and RUTCOR, Rutgers University, 92 New Street, Newark, New Jersey 07102-1895

Abstract

A simulation method is developed for computing average reward optimal policies for a finite state and action Markovian decision process. It is shown that the method is consistent; i.e., it produces solutions arbitrarily close to the optimal. Various types of estimation errors and confidence bounds are examined. Finally, it is shown that the probability distribution of the number of simulation cycles required to compute an ε-optimal policy satisfies a large deviations property.
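The general idea described in the abstract, estimating an MDP model from simulated transitions and then solving the estimated model for an average-reward optimal policy, can be illustrated with a minimal sketch. This is not the authors' algorithm; it is a hedged example on a hypothetical two-state, two-action MDP, using relative value iteration on the estimated model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state, 2-action MDP (illustrative only; not from the paper).
# P[s, a, s'] are transition probabilities; R[s, a] are one-step rewards.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.7, 0.3]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
nS, nA = R.shape

# Step 1: estimate transition probabilities from simulated transitions.
counts = np.zeros_like(P)
n_samples = 5000
for s in range(nS):
    for a in range(nA):
        next_states = rng.choice(nS, size=n_samples, p=P[s, a])
        for s_next in next_states:
            counts[s, a, s_next] += 1
P_hat = counts / counts.sum(axis=2, keepdims=True)
R_hat = R.copy()  # rewards taken as known here, for simplicity

# Step 2: relative value iteration on the estimated model.
h = np.zeros(nS)          # relative value (bias) vector, h[0] fixed at 0
g_est = 0.0
for _ in range(2000):
    Q = R_hat + P_hat @ h  # Q[s, a]; (nS, nA, nS) @ (nS,) -> (nS, nA)
    v = Q.max(axis=1)
    g_est = v[0]           # offset at the reference state -> average reward
    h_new = v - g_est      # re-anchor so iterates stay bounded
    if np.max(np.abs(h_new - h)) < 1e-12:
        break
    h = h_new

policy = Q.argmax(axis=1)  # greedy policy for the estimated model
print("estimated gain:", g_est)
print("policy:", policy)
```

With enough simulated transitions per state-action pair, the greedy policy of the estimated model coincides with an optimal policy of the true model with high probability; the paper's results concern how the error and the number of simulation cycles needed for ε-optimality behave.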

Information

Type
Research Article
Copyright
Copyright © Cambridge University Press 1995
