In this chapter, we extend the treatment we gave to power series in the preceding chapter to Fourier series and their generalizations, whether convergent or divergent. In particular, we are concerned with Fourier cosine and sine series, orthogonal polynomial expansions, series that arise from Sturm–Liouville problems, such as Fourier–Bessel series, and other general special function series.
Several convergence acceleration methods have been applied to such series, with limited success. An immediate difficulty is that many of these methods produce no acceleration at all when applied to Fourier and generalized Fourier series; the transformations of Euler and of Shanks discussed in the following chapters and the d-transformation are exceptions. See the review paper by Smith and Ford [318] and the paper by Levin and Sidi [165]. Even with methods that do produce acceleration, a further difficulty in working with such series is the loss of stability and acceleration near points of singularity of the functions that serve as limits or antilimits of these series. Recall that the same problem occurs in dealing with power series.
In this chapter, we show how the d-transformation can be used effectively to accelerate the convergence of these series. The approach we are about to propose has two main ingredients that can also be applied with some of the other sequence transformations.
An important problem that arises in many scientific and engineering applications is that of finding or approximating limits of infinite sequences {Am}. The elements Am of such sequences can show up in the form of partial sums of infinite series, approximations from fixed-point iterations of linear and nonlinear systems of equations, numerical quadrature approximations to finite- or infinite-range integrals, whether simple or multiple, etc. In most applications, these sequences converge very slowly, and this makes their direct use to approximate limits an expensive proposition. There are important applications in which they may even diverge. In such cases, the direct use of the Am to approximate their so-called “antilimits” would be impossible. (Antilimits can be interpreted in appropriate ways depending on the nature of {Am}. In some cases they correspond to analytic continuation in some parameter, for example.)
An effective remedy for these problems is via application of extrapolation methods (or convergence acceleration methods) to the given sequences. (In the context of infinite sequences, extrapolation methods are also referred to as sequence transformations.) Loosely speaking, an extrapolation method takes a finite and hopefully small number of the Am and processes them in some way. A good method is generally nonlinear in the Am and takes into account, either explicitly or implicitly, their asymptotic behavior as m → ∞ in a clever fashion.
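As a concrete illustration of the idea, consider Aitken's Δ² process, one of the simplest nonlinear sequence transformations (the Shanks and d-transformations treated in this book are considerably more powerful). The sketch below applies two iterated passes of Aitken's process to the partial sums of the slowly convergent Leibniz series for π/4; the choice of series and number of terms is a made-up example, not taken from the text.

```python
# Illustrative sketch: Aitken's delta-squared process applied to the
# partial sums of the Leibniz series 1 - 1/3 + 1/5 - ... -> pi/4.

def aitken(s):
    """One pass of Aitken's delta-squared process over the sequence s."""
    out = []
    for a0, a1, a2 in zip(s, s[1:], s[2:]):
        denom = a2 - 2.0 * a1 + a0          # second difference
        if denom == 0.0:
            out.append(a2)                   # transformation undefined; keep a2
        else:
            out.append(a2 - (a2 - a1) ** 2 / denom)
    return out

# Partial sums A_m of the slowly convergent Leibniz series.
partial = []
total = 0.0
for k in range(10):
    total += (-1) ** k / (2 * k + 1)
    partial.append(total)

# Two iterated passes of Aitken's process.
accelerated = aitken(aitken(partial))
```

With only ten terms, the last partial sum is accurate to about two digits, while the twice-transformed sequence is accurate to several more; gains of this kind, obtained from a small number of terms, are what motivate the methods of this chapter.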
In this book I present an overview of a number of related iterative methods for the solution of linear systems of equations. These are the so-called Krylov projection-type methods, and they include popular methods such as Conjugate Gradients, MINRES, SYMMLQ, Bi-Conjugate Gradients, QMR, Bi-CGSTAB, CGS, LSQR, and GMRES. I will show how these methods can be derived from simple basic iteration formulas and how they are related. My focus is on the ideas behind the derivation of these methods, rather than on a complete presentation of their various aspects and theoretical properties.
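To fix the flavor of these methods, here is a minimal sketch of the Conjugate Gradient iteration for a symmetric positive definite system Ax = b, written in plain Python. The small 2x2 system and the tolerance are made-up examples; a production code would use an optimized linear algebra library and, as discussed later, a preconditioner.

```python
# Minimal sketch of the Conjugate Gradient method for a symmetric
# positive definite system A x = b (illustrative only; the matrix,
# right-hand side, and tolerance below are hypothetical examples).

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n                 # initial guess x_0 = 0
    r = b[:]                      # residual r_0 = b - A x_0 = b
    p = r[:]                      # first search direction
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)                         # step length
        x = [xi + alpha * pi for xi, pi in zip(x, p)]       # update iterate
        r = [ri - alpha * api for ri, api in zip(r, Ap)]    # update residual
        rs_new = dot(r, r)
        if rs_new ** 0.5 < tol:
            break
        beta = rs_new / rs_old       # makes new direction A-conjugate to p
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# Example: a small symmetric positive definite system (made-up data).
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
```

In exact arithmetic the iteration terminates in at most n steps; for this 2x2 example it recovers the exact solution after two iterations. The basic-iteration viewpoint from which such formulas are derived is the subject of the following chapters.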
The text contains a large number of references for more detailed information. Iterative methods form a rich and lively area of research, and it is not surprising that this has already led to a number of books. The first book devoted entirely to the subject was published by Varga [212]; it contains much of the theory that is still relevant, but it does not deal with Krylov subspace methods (which were not yet popular at the time).
Other books that should be mentioned in the context of Krylov subspace methods are the ‘Templates’ book [20] and Greenbaum's book [101]. The Templates are a good source of information on the algorithmic aspects of the iterative methods and Greenbaum's text can be seen as the theoretical background for the Templates.
Axelsson [10] published a book that pays much attention to preconditioning aspects, in particular all sorts of variants of (block and modified) incomplete decompositions. The book by Saad [168] is also a good source of information on preconditioners, with much first-hand experience of methods such as threshold ILU.
In 1991 I was invited by Philippe Toint to give a presentation on Conjugate Gradients and related iterative methods at the University of Namur (Belgium). I had prepared a few hand-written notes to guide myself through an old-fashioned presentation with blackboard and chalk. Some listeners asked for a copy of the notes, and afterwards I heard from Philippe that they had been quite instructive for his students. This motivated me to work them out in LaTeX, which led to the first seven or so pages of my lecture notes. I got into the habit of expanding them before and after new lectures, and after I had read about new interesting aspects of iterative methods. Around 1995 I put the then roughly thirty pages on my website. They turned out to be quite popular, and I received many suggestions for improvement and expansion, most of them by e-mail from various people: novices in the area, students, experts in the field, and users from other fields and industry.
For instance, research groups at Philips Eindhoven used the text to deepen their understanding of iterative methods, and they sometimes asked me to comment on novel ideas that they had heard of at conferences or picked up from the literature. This led, among other things, to sections on GPBi-CG and on symmetric complex systems. Discussions with colleagues about new developments inspired me to comment on these in my Lecture Notes, and so I wrote sections on Simple GMRES and on the superlinear convergence of Conjugate Gradients.
A couple of years ago, I started to use these Lecture Notes as material for undergraduate teaching in Utrecht, and I found it helpful to include some exercises in the text.