
Optimal stationary policies for denumerable Markov chains in continuous time

Published online by Cambridge University Press: 01 July 2016

John Bather
Affiliation: University of Sussex

Abstract

This paper is concerned with the problem of selecting the transition intensities for a Markov chain in continuous time so as to minimise the long-term average cost. Sufficient conditions are established for an optimal stationary policy using unbounded solutions of the optimality equation. This is a development of recent work on Markovian decision processes in discrete time. The theory is illustrated by considering a simple birth and death process with controlled immigration.
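
For orientation, the optimality equation referred to above can be written, in one standard formulation for controlled continuous-time chains (the notation below is conventional rather than quoted from the paper), as

\[
g \;=\; \min_{a \in A(i)} \Bigl\{\, c(i,a) \;+\; \sum_{j \neq i} q_{ij}(a)\,\bigl(h(j) - h(i)\bigr) \Bigr\}, \qquad i = 0, 1, 2, \ldots,
\]

where $q_{ij}(a)$ are the selectable transition intensities, $c(i,a)$ is the cost rate incurred in state $i$ under action $a$, $g$ is the minimal long-run average cost, and $h$ is a relative value function, possibly unbounded. Informally, a stationary policy that chooses a minimising action in every state is the candidate for average optimality; the paper supplies sufficient conditions under which unbounded solutions $h$ of this equation still deliver that conclusion.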

Type
Research Article
Copyright
Copyright © Applied Probability Trust 1976 

