
12 - Newton and quasi-Newton methods

from PART III - COMPUTATIONAL TECHNIQUES

Published online by Cambridge University Press:  18 December 2009

John M. Lewis (National Severe Storms Laboratory, Oklahoma)
S. Lakshmivarahan (University of Oklahoma)
Sudarshan Dhall (University of Oklahoma)

Summary

It was around 1660 that Newton discovered the method for solving nonlinear equations that bears his name. Shortly thereafter, around 1665, he also developed the secant method for solving nonlinear equations. Since then these methods have become part of the folklore of numerical analysis (see Exercises 12.1 and 12.2). In addition to solving nonlinear equations, these methods can also be applied to the problem of minimizing a nonlinear function. In this chapter we provide an overview of the classical Newton's method and many of its modern relatives, called quasi-Newton methods, for unconstrained minimization. The major advantage of Newton's method is its quadratic convergence (Exercise 12.3), but finding the next descent direction requires the solution of a linear system, which is often a bottleneck. Quasi-Newton methods are designed to preserve the good convergence properties of Newton's method while providing considerable relief from this computational bottleneck. Quasi-Newton methods are extensions of the secant method. Davidon was the first to revive modern interest in quasi-Newton methods in 1959, but his work remained unpublished until 1991. However, Fletcher and Powell in 1963 published Davidon's ideas and helped to revive this line of approach to designing efficient minimization algorithms.
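The classical Newton iteration for unconstrained minimization described above can be sketched as follows. This is a minimal illustration, not code from the book; the function names and the quadratic test problem are our own. Note the linear solve at each step, which is the bottleneck the chapter refers to:

```python
import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=50):
    """Classical Newton's method for unconstrained minimization.

    Each iteration solves the linear system H(x) p = -g(x) for the
    descent direction p -- an O(n^3) operation that dominates the
    cost for large problems.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = np.linalg.solve(hess(x), -g)  # the per-step linear solve
        x = x + p
    return x

# Illustrative quadratic: f(x, y) = (x - 1)^2 + 10*(y + 2)^2
grad = lambda v: np.array([2.0 * (v[0] - 1.0), 20.0 * (v[1] + 2.0)])
hess = lambda v: np.array([[2.0, 0.0], [0.0, 20.0]])
x_star = newton_minimize(grad, hess, np.array([5.0, 5.0]))
```

On a quadratic the method reaches the minimizer in a single step, a special case of the quadratic convergence noted above.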

The philosophy and practice that underlie the design of quasi-Newton methods underscore the importance of the trade-off between rate of convergence and computational cost and storage.
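That trade-off can be made concrete with a quasi-Newton sketch in the BFGS family, which is among the "modern relatives" the chapter surveys. This is our own hedged illustration, not the book's implementation: instead of solving a linear system at every step, an approximation to the inverse Hessian is maintained and updated with a rank-two secant-style correction, reducing the per-step cost from an O(n^3) solve to O(n^2) matrix arithmetic:

```python
import numpy as np

def bfgs_minimize(f, grad, x0, tol=1e-8, max_iter=500):
    """Sketch of a BFGS quasi-Newton iteration.

    B approximates the *inverse* Hessian; it starts as the identity
    and is corrected each step so that it satisfies the secant
    equation B y = s, generalizing the scalar secant method.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    B = np.eye(n)                 # inverse-Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -B @ g                # quasi-Newton direction: no linear solve
        # simple backtracking line search (Armijo sufficient decrease)
        t = 1.0
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):
            t *= 0.5
        s = t * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:            # curvature check keeps B positive definite
            rho = 1.0 / sy
            I = np.eye(n)
            B = (I - rho * np.outer(s, y)) @ B @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Rosenbrock function, a standard nonconvex test problem
f = lambda v: (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2
grad = lambda v: np.array([
    -2 * (1 - v[0]) - 400 * v[0] * (v[1] - v[0] ** 2),
    200 * (v[1] - v[0] ** 2),
])
x_star = bfgs_minimize(f, grad, np.array([-1.2, 1.0]))
```

Only gradients are required, no Hessian, and no system is solved; the price is a superlinear (rather than quadratic) convergence rate and O(n^2) storage for B, which is precisely the rate-versus-cost-and-storage trade-off described above.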

Type: Chapter
In: Dynamic Data Assimilation: A Least Squares Approach, pp. 209–224
Publisher: Cambridge University Press
Print publication year: 2006
