Introduction
To solve a static, constrained optimization problem, we typically employ the Lagrange multiplier method: the solution is characterized by the first-order conditions of the Lagrangian. To solve a stochastic, intertemporal optimization problem, we characterize the optimal control policy by the first-order conditions of the Bellman equation. In this chapter we shall introduce this method of dynamic optimization under uncertainty. One of the objectives is to make the reader feel as comfortable using the Bellman equation in dynamic models as using the Lagrange multiplier method in static models.
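As a schematic preview of this analogy, consider the following sketch in generic notation; the symbols $u$, $f$, $\sigma$, $\rho$, and $V$ are placeholders whose precise definitions are developed later in the chapter, not the chapter's definitive formulation. In the static case, maximizing $F(x)$ subject to $g(x)=0$ leads to the Lagrangian and its first-order conditions:
\[
\mathcal{L}(x,\lambda) = F(x) + \lambda\, g(x),
\qquad
\nabla F(x) + \lambda\,\nabla g(x) = 0, \quad g(x) = 0 .
\]
In the stochastic, intertemporal case, with a one-dimensional state $x$ following the diffusion $dx = f(x,c)\,dt + \sigma(x,c)\,dz$, a control $c$, an instantaneous utility $u(x,c)$, and a constant discount rate $\rho$, the value function $V$ satisfies the Bellman equation
\[
\rho\, V(x) \;=\; \max_{c}\;\Bigl\{\, u(x,c) \;+\; f(x,c)\,V'(x) \;+\; \tfrac{1}{2}\,\sigma^{2}(x,c)\,V''(x) \Bigr\},
\]
and the optimal control is characterized by the first-order condition
\[
u_{c}(x,c) \;+\; f_{c}(x,c)\,V'(x) \;+\; \sigma(x,c)\,\sigma_{c}(x,c)\,V''(x) \;=\; 0 .
\]
Just as the Lagrange multiplier $\lambda$ prices the constraint in the static problem, the derivatives of the value function price the state variable in the dynamic problem.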
The chapter begins with a one-sector optimal growth model. We go through the derivation of the corresponding Bellman equation step by step, so as to convey the mathematical reasoning behind this powerful tool through a real economic problem rather than an abstract mathematical formulation. Then we examine the mathematical structure of the stochastic optimization problem, including the existence of the optimal control, the differentiability of the value function, the transversality condition, and the verification theorem. More importantly, we summarize the Bellman equation in a cookbook fashion to make it easy to use. To illustrate, we apply the Bellman equation to several well-known models, such as portfolio selection, index bonds, exhaustible resources, adjustment costs, and life insurance. To make the presentation self-contained, we provide a brief introduction to each topic.
Then, we extend the method from time-additive utility functions with a constant discount rate to a class of recursive utility functions.