Crossref Citations
This article has been cited by the following publications. This list is generated based on data provided by Crossref.
Doeringer, Willibald. 1984. Approximating general Markovian decision-problems by clustering their state- and action-spaces. Mathematische Operationsforschung und Statistik, Series Optimization, Vol. 15, Issue 1, p. 135.
Klein Haneveld, W. K. 1985. Convexity and Duality in Optimization, Vol. 256, p. 95.
Klein Haneveld, Willem K. 1986. Duality in Stochastic Linear and Dynamic Programming, Vol. 274, p. 49.
Hübner, G. 1988. A unified approach to adaptive control of average reward Markov decision processes. OR Spektrum, Vol. 10, Issue 3, p. 161.
Hernandez-Lerma, O. and Lasserre, J. B. 1990. Error bounds for rolling horizon policies in discrete-time Markov control processes. IEEE Transactions on Automatic Control, Vol. 35, Issue 10, p. 1118.
1994. Markov Decision Processes, p. 613.
Holzbaur, Ulrich. 1994. Bounds for the quality and the number of steps in Bellman's value iteration algorithm. OR Spektrum, Vol. 15, Issue 4, p. 231.
Hinderer, K. and Waldmann, K. H. 2005. Algorithms for Countable State Markov Decision Models with an Absorbing Set. SIAM Journal on Control and Optimization, Vol. 43, Issue 6, p. 2109.
Waldmann, Karl-Heinz. 2006. Decision Theory and Multi-Agent Planning, p. 145.
Hinderer, Karl, Rieder, Ulrich and Stieglitz, Michael. 2016. Dynamic Optimization, p. 199.