Book contents
- Frontmatter
- Contents
- Preface
- Introduction
- Part One Iterative Algorithms and Loop Invariants
- Part Two Recursion
- Part Three Optimization Problems
- 13 Definition of Optimization Problems
- 14 Graph Search Algorithms
- 15 Network Flows and Linear Programming
- 16 Greedy Algorithms
- 17 Recursive Backtracking
- 18 Dynamic Programming Algorithms
- 19 Examples of Dynamic Programs
- 20 Reductions and NP-Completeness
- 21 Randomized Algorithms
- Part Four Appendix
- Part Five Exercise Solutions
- Index
13 - Definition of Optimization Problems
Published online by Cambridge University Press: 05 June 2012
Summary
Many important and practical problems can be expressed as optimization problems. Such problems involve finding the best of an exponentially large set of solutions, which can be like finding a needle in a haystack. The obvious algorithm, which considers each solution in turn, takes too much time because there are so many of them. Some of these problems can be solved in polynomial time using network flow, linear programming, greedy algorithms, or dynamic programming. When they cannot, recursive backtracking can sometimes find an optimal solution for some instances in some practical applications. Approximately optimal solutions can sometimes be found more easily. Randomized algorithms, which flip coins, sometimes have better luck. However, for most optimization problems, the best known algorithms require 2^Θ(n) time on worst-case input instances. The commonly held belief is that there are no polynomial-time algorithms for them (though we may be wrong). NP-completeness helps to justify this belief by showing that some of these problems are universally hard amongst this class of problems. I now formally define this class of problems.
Ingredients: An optimization problem is specified by defining instances, solutions, and costs.
Instances: The instances are the possible inputs to the problem.
Solutions for Instance: Each instance has an exponentially large set of solutions. A solution is valid if it meets a set of criteria determined by the instance at hand.
Measure of Success: Each solution has an easy-to-compute cost, value, or measure of success that is to be minimized or maximized.
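The three ingredients can be made concrete with a small worked example. The sketch below (an illustration not taken from the chapter) casts the 0/1 knapsack problem in this framework: the instance is the input data, the solutions are the exponentially many subsets of items, validity is the capacity constraint, and the measure of success is the total value to be maximized. It also runs the "obvious algorithm" of trying every solution, which is exactly what takes exponential time.

```python
from itertools import combinations

# Instance: the input to the problem. Here, item weights, item
# values, and a knapsack capacity (illustrative data).
weights = [2, 3, 4, 5]
values = [3, 4, 5, 6]
capacity = 5

# Solutions for the instance: every subset of the n items.
# There are 2^n of them -- exponentially many.
def solutions(n):
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            yield subset

# A solution is valid if it meets the criteria determined by the
# instance: here, its total weight must fit within the capacity.
def is_valid(subset):
    return sum(weights[i] for i in subset) <= capacity

# Measure of success: an easy-to-compute value for each solution,
# here the total value, which is to be maximized.
def value(subset):
    return sum(values[i] for i in subset)

# The obvious algorithm: consider each of the 2^n solutions in turn.
# Correct, but far too slow for large n -- which is why the
# techniques in the following chapters are needed.
best = max((s for s in solutions(len(weights)) if is_valid(s)), key=value)
print(best, value(best))  # prints (0, 1) 7: items 0 and 1, total value 7
```

Even for this tiny instance the brute-force search examines 16 subsets; at n = 50 it would examine roughly 10^15, which is the haystack the chapter describes.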
- How to Think About Algorithms, pp. 171-172. Publisher: Cambridge University Press. Print publication year: 2008.