Introduction
Dynamic programming evolved out of the extensive work of Richard Bellman in the 1950s. The method is generally applicable to problems that decompose into stages and exhibit the Markovian property. A process exhibits the Markovian property if the optimal decisions from any stage onward depend only on the current state of the system, not on the sequence of states and decisions that produced it. A variety of problems in engineering, economics, agriculture and science exhibit this property. Dynamic programming is the method of choice in many computer science problems, such as the Longest Common Subsequence problem, used frequently by biologists to compare pairs of DNA sequences; it is also the method of choice for computing the difference between two files.
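As an illustration of the Longest Common Subsequence problem mentioned above, here is a minimal sketch (not from the text) of the classic dynamic-programming solution, which fills an (m+1)-by-(n+1) table of subproblem results; the function name and sample strings are our own choices for the example.

```python
def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence of a and b."""
    m, n = len(a), len(b)
    # table[i][j] = LCS length of the prefixes a[:i] and b[:j]
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                # Matching characters extend the best prior subsequence.
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                # Otherwise drop one character from either string.
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[m][n]

# Two short DNA-like strings; one LCS of length 4 is "GATC".
print(lcs_length("GATTACA", "GCATGCU"))  # → 4
```

Each table entry depends only on its three neighbors, so the whole computation runs in O(mn) time, which is what makes the approach practical for long sequences.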
When applicable, dynamic programming has advantages over other methods: it can handle discrete variables, constraints, and uncertainty at the level of each subproblem, rather than confronting all of these aspects simultaneously in one entire decision model, and it lends itself naturally to sensitivity analysis. However, implementing dynamic programming on a computer does require some problem-specific code.
Dynamic programming relies on the principle of optimality enunciated by Bellman. The principle defines an optimal policy: whatever the initial state and initial decision, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision.
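The principle of optimality leads directly to a backward recursion over stages. The following sketch solves a tiny stagewise shortest-path problem; the graph, node names, and arc costs are invented for the example, and the recursion computes the optimal cost-to-go from each state using only that state and the already-optimal values of its successors.

```python
# Hypothetical staged network: arcs (from, to) with their costs.
costs = {
    ("A", "B"): 2, ("A", "C"): 4,
    ("B", "D"): 7, ("B", "E"): 3,
    ("C", "D"): 1, ("C", "E"): 5,
    ("D", "F"): 2, ("E", "F"): 4,
}
# States grouped by stage, from start to terminal.
stages = [["A"], ["B", "C"], ["D", "E"], ["F"]]

value = {"F": 0}  # cost-to-go at the terminal state
# Backward recursion (Bellman): the optimal cost from a state depends
# only on the current state and optimal costs from its successors.
for stage in reversed(stages[:-1]):
    for s in stage:
        value[s] = min(c + value[t] for (u, t), c in costs.items() if u == s)

print(value["A"])  # → 7, via the route A-B-E-F
```

Note that the recursion never reconsiders how a state was reached: once the cost-to-go from each successor is known to be optimal, the decision at the current state is made in isolation, which is exactly the Markovian property described earlier.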