
A LEARNING ALGORITHM FOR DISCRETE-TIME STOCHASTIC CONTROL

Published online by Cambridge University Press:  01 April 2000

V. S. Borkar
Affiliation:
School of Technology and Computer Science, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400005, India, E-mail: borkar@tifr.res.in

Abstract

A simulation-based algorithm for learning good policies for a discrete-time stochastic control process with unknown transition law is analyzed when the state and action spaces are compact subsets of Euclidean spaces. This extends the Q-learning scheme of discrete state/action problems along the lines of Baker [4]. Almost sure convergence is proved under suitable conditions.
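The paper's starting point is the classical Q-learning scheme for discrete state/action problems, which it extends to compact Euclidean spaces. As a point of reference only, here is a minimal sketch of tabular Q-learning on a hypothetical two-state, two-action toy MDP (the MDP, function names, and parameter values are illustrative assumptions, not the paper's algorithm):

```python
import random

def step(state, action):
    """Hypothetical toy dynamics: action 1 moves to state 1,
    which pays reward 1; everything else pays 0."""
    next_state = 1 if action == 1 else 0
    reward = 1.0 if next_state == 1 else 0.0
    return next_state, reward

def q_learning(steps=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    # Q-table over the finite state/action sets {0, 1} x {0, 1}
    Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    state = 0
    for _ in range(steps):
        # epsilon-greedy exploration
        if rng.random() < eps:
            action = rng.choice((0, 1))
        else:
            action = max((0, 1), key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        # Q-learning update: move Q(s, a) toward the sampled
        # Bellman target r + gamma * max_a' Q(s', a')
        target = reward + gamma * max(Q[(next_state, a)] for a in (0, 1))
        Q[(state, action)] += alpha * (target - Q[(state, action)])
        state = next_state
    return Q

Q = q_learning()
greedy = {s: max((0, 1), key=lambda a: Q[(s, a)]) for s in (0, 1)}
```

The paper's contribution lies in replacing this finite Q-table with a scheme that works when states and actions range over compact subsets of Euclidean spaces, where no such table exists.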

Type
Research Article
Copyright
© 2000 Cambridge University Press
