The term “preconditioning” appears to have been used for the first time in 1948 by Turing [461], … The first use of the term in connection with iterative methods is found in a paper by Evans [200] … in 1968.
Michele Benzi. Journal of Computational Physics, Vol. 182 (2002)
For such problems, the coefficient matrix A is often highly nonsymmetric and non-diagonally dominant and hence many classical preconditioning techniques are not effective. For these problems, the circulant preconditioners are often the only ones that work.
Raymond Chan and Tony Chan. Journal of Numerical Linear Algebra and Applications, Vol. 1 (1992)
In ending this book with the subject of preconditioners, we find ourselves at the philosophical center of the scientific computing of the future … Nothing will be more central to computational science in the next century than the art of transforming a problem that appears intractable into another whose solution can be approximated rapidly. For Krylov subspace matrix iterations, this is preconditioning.
Lloyd Nicholas Trefethen and David Bau III. Numerical Linear Algebra. SIAM Publications (1997)
Starting from this chapter, we shall first describe various preconditioning techniques that are based on manipulating a given matrix. These are classified into four categories: direct matrix extraction (or operator splitting type), inverse approximation (or inverse operator splitting type), multilevel Schur complements, and multilevel operator splitting (multilevel methods).
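To fix ideas, here is a minimal sketch contrasting the first two categories (an illustration only, using a toy 3 x 3 matrix rather than anything from later chapters): a splitting type preconditioner extracts an easily invertible part of A, whereas an inverse type preconditioner builds a direct approximation to A^{-1}.

import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
r = np.ones(3)                         # a sample residual vector

# Category 1 (direct matrix extraction / operator splitting): M is a part
# of A, here the Jacobi choice M = diag(A); applying the preconditioner
# means solving the easy system M z = r.
M_split = np.diag(np.diag(A))
z1 = np.linalg.solve(M_split, r)

# Category 2 (inverse approximation): M approximates inv(A) directly,
# so applying the preconditioner is only the product M r.
M_inv = np.diag(1.0 / np.diag(A))      # the crudest approximate inverse
z2 = M_inv @ r

assert np.allclose(z1, z2)             # identical here since M is diagonal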
An important class of problems in which significantly higher accuracies are needed relates to low-observable applications, where the quantities of interest are small residuals of large incident fields.
Oscar P. Bruno. Fast, high-order, high-frequency integral methods for computational acoustics and electromagnetics. Lecture Notes in Computational Science and Engineering 31. Springer-Verlag (2003)
However, wavelet representation of an oscillating matrix appears to be as dense as the original, i.e. oscillatory kernels cannot be handled efficiently by representing them in wavelet bases.
A. Averbuch et al. On efficient computation of multidimensional oscillatory integrals with local Fourier bases. Nonlinear Analysis (2001)
Acoustic scattering modelling provides a typical example in which a boundary element method gives rise to a dense matrix problem, as shown in Chapter 1. Such a physical problem is only a simple model of the full wave equations or the Maxwell equations from electromagnetism. The challenges are:
(i) the underlying system is dense and non-Hermitian;
(ii) the kernel of a boundary integral operator is highly oscillatory for high wavenumbers, implying that a large linear system must be solved. The oscillation means that the fast multipole method and the fast wavelet methods are not immediately applicable.
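A toy sketch of challenge (ii) (illustrative only; the one-dimensional kernel below is an assumed simplification of the boundary integral kernels of Chapter 1): for a high wavenumber the discretized kernel matrix is dense with unit-modulus entries, so there is no off-diagonal decay for a multipole or wavelet compression to exploit.

import numpy as np

n, kappa = 200, 50.0                   # collocation points and wavenumber
x = np.linspace(0.0, 1.0, n)
D = np.abs(x[:, None] - x[None, :])    # pairwise distances |x_j - x_k|
K = np.exp(1j * kappa * D)             # oscillatory kernel matrix (dense)

# every entry has modulus 1: nothing to truncate away from the diagonal
print(np.abs(K).min(), np.abs(K).max())    # -> 1.0 1.0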
This chapter reviews the recent work on using preconditioned iterative solvers for such linear systems arising from acoustic scattering modelling and points out the various challenges for future research work. We consider the following.
The experiences of Fox, Huskey, and Wilkinson [from solving systems of orders up to 20] prompted Turing to write a remarkable paper [in 1948] … In this paper, Turing made several important contributions … He used the word “preconditioning” to mean improving the condition of a system of linear equations (a term that did not come into popular use until the 1970s).
Nicholas J. Higham. Accuracy and Stability of Numerical Algorithms. SIAM Publications (1996)
Matrix computing arises in the solution of almost all linear and nonlinear systems of equations. As computer power surges and high-resolution simulations are attempted, a method can quickly reach its applicability limits, so there is a constant demand for new and fast matrix solvers. Preconditioning is the key to a successful iterative solver. It is the intention of this book to present a comprehensive exposition of the many useful preconditioning techniques.
Preconditioning equations mainly serve an iterative method and are often solved by a direct solver (occasionally by another iterative solver). It is therefore necessary to address direct solution techniques for both sparse and dense matrices. While fast solvers are frequently associated with iterative solvers, a direct solver can be competitive for special problems. Moreover, there are situations where preconditioning is also needed for a direct solution method. This clearly demonstrates the close relationship between direct and iterative methods.
Parallelism has sometimes been viewed as a rare and exotic subarea of computing, interesting but of little relevance to the average programmer. A study of trends in applications, computer architecture, and networking shows that this view is no longer tenable. Parallelism is becoming ubiquitous, and parallel computing is becoming central to the programming enterprise.
Ian Foster. Designing and Building Parallel Programs. Addison-Wesley (1995)
I would rather kill myself than debug an MPI program.
Anonymous
Parallel computing represents a major research direction for the future and offers the best, and often the only, solution to large-scale computational problems with today's technology. A book on fast solvers is incomplete without a discussion of this important topic. However, an incomplete description of the subject would be of little use, and there are already many books available. Nevertheless, the author believes that too much emphasis has been placed on parallelization from a computer scientist's point of view, so that a beginner may feel either confused by the various warnings and the jargon of new phrases, or intimidated by the complexity of some published programs (algorithms) for well-known methods. Hence we choose to give complete details for a few selected examples that fall into the category of ‘embarrassingly parallelizable’ methods.
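A minimal sketch of what ‘embarrassingly parallelizable’ means (a generic illustration, not one of the book's selected examples): the tasks below are fully independent, so the workers need none of the inter-process communication that makes MPI programming painful.

import numpy as np
from multiprocessing import Pool

def solve_one(seed):
    # one independent task: solve a small random linear system
    rng = np.random.default_rng(seed)
    A = 100.0 * np.eye(100) + rng.standard_normal((100, 100))
    b = rng.standard_normal(100)
    return np.linalg.norm(np.linalg.solve(A, b))

if __name__ == "__main__":
    with Pool(4) as pool:                        # four worker processes
        results = pool.map(solve_one, range(16)) # 16 independent tasks
    print(results)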
Therefore the purpose of this chapter is to convey two simple messages.
An inverse problem assumes a direct problem that is a well-posed problem of mathematical physics. In other words, if we know completely a “physical device,” we have a classical mathematical description of this device including uniqueness, stability and existence of a solution of the corresponding mathematical problem.
Victor Isakov. Inverse Problems for Partial Differential Equations. Springer-Verlag (1998)
Image restoration is historically one of the oldest concerns in image processing and is still a necessary preprocessing step for many applications.
Gilles Aubert and Pierre Kornprobst. Mathematical Problems in Image Processing. Springer-Verlag (2002)
However, for the time being it is worthwhile recalling the remark of Lanczos: “A lack of information cannot be remedied by any mathematical trickery.” Hence in order to determine what we mean by a solution it will be necessary to introduce “nonstandard” information that reflects the physical situation we are trying to model.
David Colton and Rainer Kress. Integral Equation Methods in Scattering Theory. Wiley (1983)
Research on inverse problems has become increasingly popular for two reasons:
(i) there is an urgent need to understand these problems and find adequate solution methods; and
(ii) the underlying mathematics is intriguingly nonlinear and is naturally posed as a challenge to mathematicians and engineers alike.
It is customary for an introduction to inverse problems of boundary value problems to discuss the somewhat unhelpful terms ‘ill-posed problems’ and ‘improperly posed problems’.
Whenever both the multigrid method and the domain decomposition method work, the multigrid is faster.
Jinchao Xu, Lecture at University of Leicester EPSRC Numerical Analysis Summer School, UK (1998)
This paper provides an approach for developing completely parallel multilevel preconditioners. … The standard multigrid algorithms do not allow for completely parallel computations, since the computations on a given level use results from the previous levels.
James H. Bramble, et al. Parallel multilevel preconditioners. Mathematics of Computation, Vol. 55 (1990)
Multilevel methods [including multigrid methods and multilevel preconditioners] represent new directions in the recent research on domain decomposition methods … they have wide applications and are expected to dominate mainstream research in scientific and engineering computing in the 1990s.
Tao Lu, et al. Domain Decomposition Methods. Science Press, Beijing (1992)
The linear system (1.1) may represent the result of discretizing a continuous (operator) problem over the finest grid, corresponding to a user-required resolution. To solve such a system, or to find an efficient preconditioner for it, it can be advantageous to set up a sequence of coarser grids from which much efficiency can be gained.
This sequence of coarser grids can be nested, with each grid contained in all finer grids (as in the traditional geometry-based multigrid methods); non-nested, with each grid contained only in the finest grid (as in the various variants of the domain decomposition methods); or determined dynamically from the linear system alone in a purely algebraic way (as in the recent algebraic multigrid methods).
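The nested case can be made concrete by a minimal two-grid sketch (a simplified illustration under assumed conventions, not an algorithm quoted from this chapter): smooth the error on the fine grid, then correct it by solving a Galerkin coarse-grid problem on the nested coarse grid.

import numpy as np

def poisson1d(n):
    # unscaled 1D Poisson matrix tridiag(-1, 2, -1) of order n
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def full_weighting(nf):
    # restriction from nf fine points to (nf - 1) // 2 nested coarse points
    nc = (nf - 1) // 2
    R = np.zeros((nc, nf))
    for i in range(nc):
        R[i, 2 * i:2 * i + 3] = [0.25, 0.5, 0.25]
    return R

def two_grid(A, b, x, n_smooth=3, omega=2.0 / 3.0):
    d = np.diag(A)
    for _ in range(n_smooth):                  # damped-Jacobi pre-smoothing
        x = x + omega * (b - A @ x) / d
    R = full_weighting(len(b))
    P = 2.0 * R.T                              # linear interpolation
    Ac = R @ A @ P                             # Galerkin coarse-grid operator
    x = x + P @ np.linalg.solve(Ac, R @ (b - A @ x))   # coarse correction
    for _ in range(n_smooth):                  # post-smoothing
        x = x + omega * (b - A @ x) / d
    return x

n = 2 ** 6 - 1                                 # chosen so that the grids nest
A, b, x = poisson1d(n), np.ones(n), np.zeros(n)
for _ in range(10):
    x = two_grid(A, b, x)
print(np.linalg.norm(b - A @ x))               # residual drops rapidly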
As an example of what is often called a complex system, the power grid is made up of many components whose complex interactions are not effectively computable. Accordingly, some scientists have found it more useful to study the power grid's macroscopic behaviour than to dissect individual events.
Sara Robinson. The power grid as complex system. SIAM News, Vol. 36 (2003)
The electrical power network is a real-life necessity all over the world; however, delivering the power supply stably while accommodating various demand pattern changes and adjustments is an enormous challenge. Mathematically speaking, the beauty of such networks lies in their providing a challenging set of nonlinear differential-algebraic equations (DAEs) in the transient case and a set of nonlinear algebraic equations in the equilibrium case [438, 89, 292].
This chapter introduces the equilibrium equations, discusses some recent fast nonlinear methods for computing the fold bifurcation parameter and finally highlights the open challenge arising from computing the Hopf bifurcation parameter, where one must solve a new system of size O(n²) for an original problem of size n. We shall consider the following (a toy continuation sketch is given after the list).
Section 14.1 The model equations
Section 14.2 Fold bifurcation and arc-length continuation
Section 14.3 Hopf bifurcation and solutions
Section 14.4 Preconditioning issues
Section 14.5 Discussion of software and the supplied Mfiles
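As promised above, here is a minimal pseudo-arclength continuation sketch of the kind named in Section 14.2 (a toy scalar example under assumed notation, not the power-system code of this chapter): the curve f(x, lam) = lam - x² = 0 is traced through its fold at lam = 0, where naive continuation in lam alone breaks down because f_x = -2x vanishes.

import numpy as np

def f(x, lam):
    return lam - x ** 2                        # toy equilibrium equation

def continuation(x, lam, tangent, ds=0.1, n_steps=40):
    path = [(x, lam)]
    for _ in range(n_steps):
        x0, lam0 = x, lam
        x, lam = x0 + ds * tangent[0], lam0 + ds * tangent[1]  # predictor
        for _ in range(20):                    # Newton corrector on the
            F = np.array([f(x, lam),           # extended (bordered) system
                          tangent[0] * (x - x0)
                          + tangent[1] * (lam - lam0) - ds])
            if np.linalg.norm(F) < 1e-12:
                break
            J = np.array([[-2.0 * x, 1.0],     # rows: [f_x, f_lam], arclength
                          [tangent[0], tangent[1]]])
            dx, dlam = np.linalg.solve(J, -F)
            x, lam = x + dx, lam + dlam
        t = np.array([x - x0, lam - lam0])     # secant tangent update
        tangent = t / np.linalg.norm(t)
        path.append((x, lam))
    return path

t0 = np.array([-1.0, -2.0])                    # down the branch, towards the fold
path = continuation(1.0, 1.0, t0 / np.linalg.norm(t0))
print(path[-1])                                # the fold at lam = 0 is passed smoothly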
As we will see, iterative methods are not only great fun to play with and interesting objects for analysis, but they are really useful in many situations. For truly large problems they may sometimes offer the only way towards a solution.
Henk A. van der Vorst. Iterative Krylov Methods for Large Linear Systems. Cambridge University Press (2003)
A similar work [on the fast multipole method] was done in 3D by Rokhlin. As in 2D, the multistep algorithm was not properly explained.
Eric Darve. Fast Multipole Method, preprint, Paris, France (1997)
An iterative method for the linear system Ax = b finds an infinite sequence of approximate solutions x(j) to the exact answer x*, each ideally with a decreased error, by using A repeatedly and without modifying it. The saving from using an iterative method lies in a hopefully early termination of the sequence, as most practical applications are only interested in finding a solution x close enough to x*. Therefore, it almost goes without saying that the essence of an iterative method is fast convergence, or at least convergence. When this is not possible for (1.1), we shall consider (1.2) with a suitable M.
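A minimal sketch of the idea (assuming the notation above; the Jacobi choice of M below is an illustrative assumption, not the chapter's recommendation) is the preconditioned Richardson iteration x(j+1) = x(j) + M(b - Ax(j)) with M approximating A^{-1}: the matrix A is used only through products, and the loop terminates early once the residual is small.

import numpy as np

def richardson(A, b, apply_M, tol=1e-10, max_it=500):
    x = np.zeros_like(b)
    for j in range(max_it):
        r = b - A @ x                          # residual of current iterate
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, j                        # early termination
        x = x + apply_M(r)                     # preconditioned correction
    return x, max_it

A = 4.0 * np.eye(50) + np.diag(np.ones(49), 1) + np.diag(np.ones(49), -1)
b = np.ones(50)
d = np.diag(A)                                 # Jacobi: M = inv(diag(A))
x, its = richardson(A, b, lambda r: r / d)
print(its, np.linalg.norm(b - A @ x))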
This chapter will review a selection of iterative methods for later use as building blocks for preconditioner designs and testing.
In the last few years we have studied preconditioning techniques based on sparse approximate inverses and have found them to be quite effective.
B. Carpentieri, et al. SIAM Journal on Scientific Computing, Vol. 25 (2003)
The objective is to remove the smallest eigenvalues of A which are known to slow down the convergence of GMRES.
Jocelyne Erhel, et al. Journal of Computational and Applied Mathematics, Vol. 69 (1996)
The most successful preconditioning methods in terms of reducing the number of iterations, such as the incomplete LU decomposition or symmetric successive over-relaxation (SSOR), are notoriously difficult to implement in a parallel architecture, especially for unstructured matrices.
Marcus J. Grote and Thomas Huckle. SIAM Journal on Scientific Computing, Vol. 18 (1997)
This chapter will discuss the construction of inverse type preconditioners (or approximate inverse type), i.e. for equation (1.2)
MAx = Mb
and other types as shown on Page 3. Our first concern will be a theoretical one: characterizing A^{-1}. It turns out that answering this concern reveals most of the underlying ideas of inverse type preconditioners. We shall present the following (a toy polynomial sketch is given after the list).
Section 5.1 How to characterize A^{-1} in terms of A
Section 5.2 Banded preconditioner
Section 5.3 Polynomial p_k(A) preconditioners
Section 5.4 General and adaptive SPAI preconditioners
Section 5.5 AINV type preconditioner
Section 5.6 Multi-stage preconditioners
Section 5.7 The dual tolerance self-preconditioning method
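As promised above, here is a toy sketch of the polynomial idea of Section 5.3 (a minimal illustration under the assumption that the spectral radius of I - A is below 1; it is not one of the book's Mfiles): truncating the Neumann series for A^{-1} gives the polynomial preconditioner M = p_k(A) = (I - A)^0 + (I - A)^1 + ... + (I - A)^k, whose application M r costs only k matrix-vector products.

import numpy as np

def neumann_apply(A, r, k):
    # compute p_k(A) r = sum_{i=0}^{k} (I - A)^i r via the recursion
    # z <- r + (I - A) z, which needs one product with A per step
    z = r.copy()
    for _ in range(k):
        z = r + z - A @ z
    return z

rng = np.random.default_rng(0)
n = 100
A = np.eye(n) + 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)  # rho(I - A) < 1
r = rng.standard_normal(n)
exact = np.linalg.solve(A, r)

for k in (1, 4, 16):
    z = neumann_apply(A, r, k)
    print(k, np.linalg.norm(z - exact) / np.linalg.norm(exact))  # error decays in k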
There seem to be at least two important issues for which wavelet-like expansions have already proven to work with great success, namely preconditioning linear systems stemming from Galerkin approximations for elliptic problems and compressing full stiffness matrices arising in connection with integral or pseudodifferential operators, to facilitate nearly optimal complexity algorithms for the solution of the corresponding discrete problems.
Wolfgang Dahmen, et al. Multiscale methods for pseudodifferential equations. Recent Advances in Wavelet Analysis (1994)
In the usual FEM setting, the Schur complement methods from Chapter 7 perform best if there is some kind of ‘diagonal’ dominance. This chapter proposes two related and efficient iterative algorithms based on the wavelet formulation for solving an operator equation with conventional arithmetic. In the new wavelet setting, the stiffness matrix possesses the desirable properties for using Schur complements. The proposed algorithms utilize the Schur complements recursively; they differ only in how coarse levels are used to solve the Schur complement equations. In the first algorithm, we precondition a Schur complement by using coarse levels, while in the second we use approximate Schur complements to construct a preconditioner. We believe that our algorithms can be adapted to higher-dimensional problems more easily than previous work on the subject.
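For orientation, here is a minimal one-level sketch of the Schur complement idea (the recursive multilevel wavelet versions in this chapter are more involved; the random test matrix below is an illustrative assumption): block elimination of x1 from the 2 x 2 block system reduces it to the Schur complement system S x2 = b2 - A21 A11^{-1} b1 with S = A22 - A21 A11^{-1} A12.

import numpy as np

def schur_solve(A11, A12, A21, A22, b1, b2):
    Y = np.linalg.solve(A11, A12)              # inv(A11) A12
    S = A22 - A21 @ Y                          # Schur complement
    x2 = np.linalg.solve(S, b2 - A21 @ np.linalg.solve(A11, b1))
    x1 = np.linalg.solve(A11, b1) - Y @ x2     # back-substitution
    return x1, x2

rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((2 * n, 2 * n))
A += (np.abs(A).sum(axis=1).max() + 1.0) * np.eye(2 * n)  # diagonal dominance
b = rng.standard_normal(2 * n)
x1, x2 = schur_solve(A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:], b[:n], b[n:])
print(np.linalg.norm(A @ np.concatenate([x1, x2]) - b))   # ~ machine precision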
A clustered [eigenvalue] spectrum often translates in rapid convergence of GMRES.
Michele Benzi and Gene H. Golub. SIAM Journal on Matrix Analysis and Applications, Vol. 24 (2003)
The coupled matrix problems represent a vast class of scientific problems arising from discretization of either systems of PDEs or coupled PDEs and integral equations, among other applications such as the Karush–Kuhn–Tucker (KKT) matrices from nonlinear programming [273, 43]. The reader may be aware that many coupled (nonlinear) systems can be solved by Uzawa type algorithms [92, 153, 144, 115], i.e. all equations are ‘artificially’ decoupled and solved in turn. A famous example of this strategy is the SIMPLE algorithm widely used in computational fluid dynamics along with finite volume discretization [473, 334]. While there is much to do in designing better and more robust preconditioners for a single system such as (1.1), one major challenge in future research will be to solve the coupled problems, many of which have only been tackled recently.
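To make the decoupling idea concrete, here is a minimal Uzawa sketch (a generic illustration with assumed test data, not one of the cited algorithms) for a saddle-point system with blocks A (symmetric positive definite) and B: the u-equation and the constraint are solved in turn, with a relaxed update for p.

import numpy as np

def uzawa(A, B, f, g, omega=1.0, n_it=500):
    # iterate on [A B^T; B 0][u; p] = [f; g] by decoupling the two equations
    p = np.zeros(B.shape[0])
    for _ in range(n_it):
        u = np.linalg.solve(A, f - B.T @ p)    # solve the u-equation for fixed p
        p = p + omega * (B @ u - g)            # relaxation step on the constraint
    return u, p

rng = np.random.default_rng(2)
n, m = 20, 5
G = rng.standard_normal((n, n))
A = 4.0 * np.eye(n) + 0.1 * (G + G.T)          # SPD block
B = rng.standard_normal((m, n)) / np.sqrt(n)   # constraint block
f, g = rng.standard_normal(n), rng.standard_normal(m)
u, p = uzawa(A, B, f, g)
print(np.linalg.norm(A @ u + B.T @ p - f), np.linalg.norm(B @ u - g))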
This chapter will first review recent developments for a general coupled system and then discuss some specific coupled problems. The latter examples come from a wide range of challenging problems including elasticity, particle physics and electromagnetism. We shall discuss the following.