This book is for people who want to solve ordinary differential equations (ODEs): both initial value problems (IVPs) and boundary value problems (BVPs), as well as delay differential equations (DDEs). Solving ODEs with MATLAB is a text for a one-semester course for upper-level undergraduates and beginning graduate students in engineering, science, and mathematics. Prerequisites are a first course in the theory of ODEs and a survey course in numerical analysis. Implicit in these prerequisites is some programming experience, preferably in Matlab, and some elementary matrix theory. Solving ODEs with MATLAB is also a reference for professionals in engineering, science, and mathematics. With it they can quickly obtain an understanding of the issues and see example problems solved in detail. They can use the programs supplied with the book as templates.
It is usual to teach the three topics of this book at an advanced level in separate courses of one semester each. Solving ODEs with MATLAB provides a sound treatment of all three topics in about 250 pages. This is possible because of the focus and level of the treatment. The book opens with a chapter called Getting Started. Next is a chapter on IVPs. These two chapters must be studied in order, but the remaining two chapters (on BVPs and DDEs) are independent of one another. It is easy to cover one of these chapters in a one-semester course, but the preparation and sophistication of the students will determine whether it is possible to do both.
Ordinary differential equations (ODEs) are used throughout engineering, mathematics, and science to describe how physical quantities change, so an introductory course on elementary ODEs and their solutions is a standard part of the curriculum in these fields. Such a course provides insight, but the solution techniques discussed are generally unable to deal with the large, complicated, and nonlinear systems of equations seen in practice. This book is about solving ODEs numerically. Each of the authors has decades of experience in both industry and academia helping people like you solve problems. We begin in this chapter with a discussion of what is meant by a numerical solution with standard methods and, in particular, of what you can reasonably expect of standard software. In the chapters that follow, we discuss briefly the most popular methods for important classes of ODE problems. Examples are used throughout to show how to solve realistic problems. Matlab (2000) is used to solve nearly all these problems because it is a very convenient and widely used problem-solving environment (PSE) with quality solvers that are exceptionally easy to use. It is also such a high-level programming language that programs are short, making it practical to list complete programs for all the examples. We also include some discussion of software available in other computing environments. Indeed, each of the authors has written ODE solvers widely used in general scientific computing.
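To give a flavor of what "solving an IVP numerically with standard software" looks like outside Matlab, here is a minimal sketch using SciPy's solve_ivp, which plays a role analogous to Matlab's ODE solvers. The test problem y' = -2y, y(0) = 1 is an illustrative choice of mine, not an example from the book.

```python
# Minimal IVP sketch: solve y' = -2*y, y(0) = 1 on [0, 1] with SciPy's
# solve_ivp (analogous in spirit to Matlab's ode45).  The problem and
# tolerances are illustrative assumptions, not taken from the book.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, y):
    # Right-hand side of the ODE y' = -2*y
    return -2.0 * y

sol = solve_ivp(f, (0.0, 1.0), [1.0], rtol=1e-8, atol=1e-10)

# Compare the computed endpoint value with the exact solution
# y(t) = exp(-2*t), so y(1) = exp(-2).
err = abs(sol.y[0, -1] - np.exp(-2.0))
print(err < 1e-6)
```

Because the exact solution is known here, we can check the error directly; for realistic problems one instead relies on the solver's error control via rtol and atol.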
In this book I present an overview of a number of related iterative methods for the solution of linear systems of equations. These are so-called Krylov projection-type methods, and they include popular methods such as Conjugate Gradients, MINRES, SYMMLQ, Bi-Conjugate Gradients, QMR, Bi-CGSTAB, CGS, LSQR, and GMRES. I will show how these methods can be derived from simple basic iteration formulas and how they are related. My focus is on the ideas behind the derivation of these methods, rather than on a complete presentation of their various aspects and theoretical properties.
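To make the idea of a Krylov method built from a simple basic iteration concrete, here is a bare-bones Conjugate Gradients loop written directly in NumPy. The function name and the small symmetric positive definite test system are illustrative choices of mine, not taken from the text or from any library.

```python
# A bare-bones Conjugate Gradients iteration in NumPy, sketching how a
# Krylov method grows from a simple residual/search-direction update.
# Names and the test system are illustrative, not from the text.
import numpy as np

def cg(A, b, tol=1e-10, maxiter=200):
    x = np.zeros_like(b)
    r = b - A @ x            # initial residual
    p = r.copy()             # first search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # new A-conjugate direction
        rs = rs_new
    return x

# Small symmetric positive definite test system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = cg(A, b)
print(np.allclose(A @ x, b))
```

In exact arithmetic this iteration terminates in at most n steps for an n-by-n system; in practice one hopes for an acceptable approximation much sooner, which is the theme developed in the chapters that follow.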
In the text there are a large number of references for more detailed information. Iterative methods form a rich and lively area of research, and it is not surprising that this has already led to a number of books. The first book devoted entirely to the subject was published by Varga [212]; it contains much of the theory that is still relevant, but it does not deal with Krylov subspace methods (which were not yet popular at the time).
Other books that should be mentioned in the context of Krylov subspace methods are the ‘Templates’ book [20] and Greenbaum's book [101]. The Templates are a good source of information on the algorithmic aspects of the iterative methods and Greenbaum's text can be seen as the theoretical background for the Templates.
Axelsson [10] published a book that gave much attention to preconditioning aspects, in particular all sorts of variants of (block and modified) incomplete decompositions. The book by Saad [168] is also a good source of information on preconditioners, with much first-hand experience of such methods as threshold ILU.
In 1991 I was invited by Philippe Toint to give a presentation on Conjugate Gradients and related iterative methods at the University of Namur (Belgium). I had prepared a few hand-written notes to guide myself through an old-fashioned presentation with blackboard and chalk. Some listeners asked for a copy of the notes, and afterwards I heard from Philippe that they had been quite instructive for his students. This motivated me to work them out in LaTeX, and that led to the first seven or so pages of my lecture notes. I got into the habit of expanding them before and after new lectures and after I had read about new interesting aspects of iterative methods. Around 1995 I put the then roughly thirty pages on my website. They turned out to be quite popular and I received many suggestions for improvement and expansion, most of them by e-mail from various people: novices in the area, students, experts in this field, and users from other fields and industry.
For instance, research groups at Philips Eindhoven used the text for their understanding of iterative methods, and they sometimes asked me to comment on certain novel ideas that they had heard of at conferences or picked up from the literature. This led, among other things, to sections on GPBi-CG and on symmetric complex systems. Discussions with colleagues about new developments inspired me to comment on these in my Lecture Notes, and so I wrote sections on Simple GMRES and on the superlinear convergence of Conjugate Gradients.
A couple of years ago, I started to use these Lecture Notes as material for undergraduate teaching in Utrecht and I found it helpful to include some exercises in the text.
As we have seen in our discussions on the various Krylov subspace methods, they are not robust in the sense that they cannot be guaranteed to lead to acceptable approximate solutions within modest computing time and storage (modest with respect to alternative solution methods). For some methods (for instance, full GMRES) it is obvious that they lead, in exact arithmetic, to the exact solution in at most n iterations, but that may not be very practical. Other methods are restricted to specific classes of problems (CG, MINRES) or suffer from such nasty side-effects as stagnation or breakdown (Bi-CG, Bi-CGSTAB). Such poor convergence depends in a very complicated way on spectral properties (eigenvalue distribution, field of values, condition of the eigensystem, etc.), and this information is not available in practical situations.
The trick is then to try to find some nearby operator K such that K^{-1}A has better (but still unknown) spectral properties. This is based on the observation that for K = A we would have the ideal system K^{-1}Ax = Ix = K^{-1}b, and all subspace methods would deliver the true solution in one single step. The hope is that, for K in some sense close to A, a properly selected Krylov method applied to, for instance, K^{-1}Ax = K^{-1}b would need only a few iterations to yield a good enough approximation to the solution of the given system Ax = b. An operator that is used for this purpose is called a preconditioner for the matrix A.
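As a small illustration of this idea, the sketch below applies the simplest choice of preconditioner, K = diag(A) (Jacobi), to a badly scaled symmetric positive definite system using SciPy's cg. The matrix, the scaling, and all names here are illustrative assumptions of mine, not an example from the text; note that applying K^{-1} is just a pointwise divide, so the preconditioner is cheap while K^{-1}A is much better scaled than A.

```python
# Sketch of Jacobi preconditioning K = diag(A) for CG, so that K^{-1}A
# has better spectral properties than A.  The test matrix and names are
# illustrative assumptions, not taken from the text.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

n = 100
# A badly scaled SPD tridiagonal matrix: diagonal grows from 1 to 1e4
d = np.linspace(1.0, 1e4, n)
A = diags([d, -0.5 * np.ones(n - 1), -0.5 * np.ones(n - 1)],
          [0, -1, 1], format="csr")
b = np.ones(n)

# Preconditioner K = diag(A); applying K^{-1} is a pointwise divide
M = LinearOperator((n, n), matvec=lambda v: v / d)

x, info = cg(A, b, M=M)
print(info == 0 and np.allclose(A @ x, b, atol=1e-3))
```

For this diagonal choice one could of course rescale the system directly; the point of the operator formulation is that the same mechanism accommodates far more powerful preconditioners, such as incomplete factorizations, without ever forming K^{-1}A explicitly.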