The principal complication that arises in extending the results of the last chapter to dynamic programming problems with an infinite horizon is that infinite-horizon models lack a “last” period; this makes it impossible to use backwards induction techniques to derive an optimal strategy. In this chapter, we show that general conditions may, nonetheless, be described for the existence of an optimal strategy in such problems, although the process of actually deriving an optimal strategy is necessarily more complicated. A final section then studies the application of these results to obtaining and characterizing optimal strategies in the canonical model of dynamic economic theory: the one-sector model of economic growth.
Description of the Framework
A (deterministic) stationary discounted dynamic programming problem (henceforth, SDP) is specified by a tuple {S, A, Φ, f, r, δ}, where
S is the state space, or the set of environments, with generic element s. We assume that S ⊂ ℝⁿ for some n.
A is the action space, with typical element a. We assume that A ⊂ ℝᵏ for some k.
Φ: S → P(A) is the feasible action correspondence that specifies for each s ∈ S the set Φ(s) ⊂ A of actions that are available at s.
f : S × A → S is the transition function for the state, which specifies for each current state-action pair (s, a) the next-period state f(s, a) ∈ S.
[…]
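To fix ideas, here is a minimal sketch, in Python, of how such a tuple might be instantiated for the one-sector growth model taken up later in the chapter. The functional forms and numbers below (Cobb-Douglas output s**alpha, log utility, and the values of alpha and delta) are illustrative assumptions, not taken from the text; it is likewise assumed, as is standard for an SDP, that r is the one-period reward function and δ ∈ (0, 1) the discount factor.

import math

alpha = 0.3    # assumed capital share in production (illustrative)
delta = 0.95   # assumed discount factor, delta in (0, 1)

def Phi(s):
    """Feasible actions at state s: consume any amount of current output."""
    return (0.0, s ** alpha)          # the interval [0, s**alpha]

def f(s, a):
    """Transition: next period's capital is output left over after consumption."""
    return s ** alpha - a

def r(s, a):
    """One-period reward: utility of consumption (log utility assumed)."""
    return math.log(a) if a > 0 else -math.inf

# Example of one step of the model: from capital stock s0 = 1.0,
# consume half of feasible output and move to the resulting state.
s0 = 1.0
a0 = 0.5 * Phi(s0)[1]
s1 = f(s0, a0)
reward0 = r(s0, a0)

Under these assumptions, a strategy generates a sequence of state-action pairs (s_t, a_t), and its total discounted reward is the sum of δᵗ r(s_t, a_t) over t; the existence and characterization results of this chapter concern precisely this objective.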