This book is targeted primarily toward engineers and engineering students of advanced standing (juniors, seniors, and graduate students). Familiarity with a computer language is required; knowledge of basic engineering mechanics is useful, but not essential.
The text places the emphasis on numerical methods, not programming. Most engineers are not programmers but problem solvers. They want to know what methods can be applied to a given problem, what their strengths and pitfalls are, and how to implement them. Engineers are not expected to write computer code for basic tasks from scratch; they are more likely to utilize functions and subroutines that have already been written and tested. Thus programming by engineers is largely confined to assembling existing pieces of code into a coherent package that solves the problem at hand.
Each “piece” of code is usually a function that implements a specific task. To the user, the details of the code are unimportant; what matters is the interface (what goes in and what comes out) and an understanding of the method on which the algorithm is based. Since no numerical algorithm is infallible, the importance of understanding the underlying method cannot be overemphasized; it is, in fact, the rationale behind learning numerical methods.
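To make this concrete, consider the following minimal sketch (in Python with NumPy; the choice of language and library here is ours, purely for illustration). The “piece” of code is numpy.linalg.solve: the user supplies a coefficient matrix and a right-hand side, receives the solution vector, and needs only to know that the routine is based on LU factorization with partial pivoting.

```python
# A minimal sketch (Python with NumPy, assumed here purely for illustration).
# The "piece" of code is numpy.linalg.solve, a tested routine for A x = b.
# Its interface: a coefficient matrix and a right-hand side go in, the
# solution vector comes out. The underlying method (LU factorization with
# partial pivoting, via LAPACK) is what the user should understand.
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])   # coefficient matrix
b = np.array([1.0, 2.0])     # right-hand side

x = np.linalg.solve(A, b)    # assemble existing code; do not rewrite it
print(x)                     # [0.09090909 0.63636364]
```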
This book attempts to conform to the views outlined above. Each numerical method is explained in detail and its shortcomings are pointed out.
Information in science and mathematics is often organized into rows and columns to form rectangular arrays, called “matrices” (plural of “matrix”). Matrices are often tables of numerical data that arise from physical observations, but they also occur in various mathematical contexts.
Howard Anton. Elementary Linear Algebra. Wiley (1973 1st edn, 2000 8th edn).
To read about or work on matrix computing, a reader must have completed a course on linear algebra. Appendix A is included to review selected topics from basic linear algebra for reference purposes.
The term “preconditioning” appears to have been used for the first time in 1948 by Turing [461], … The first use of the term in connection with iterative methods is found in a paper by Evans [200] … in 1968.
Michele Benzi. Journal of Computational Physics, Vol. 182 (2002)
For such problems, the coefficient matrix A is often highly nonsymmetric and non-diagonally dominant and hence many classical preconditioning techniques are not effective. For these problems, the circulant preconditioners are often the only ones that work.
Raymond Chan and Tony Chan. Journal of Numerical Linear Algebra and Applications, Vol. 1 (1992)
In ending this book with the subject of preconditioners, we find ourselves at the philosophical center of the scientific computing of the future … Nothing will be more central to computational science in the next century than the art of transforming a problem that appears intractable into another whose solution can be approximated rapidly. For Krylov subspace matrix iterations, this is preconditioning.
Lloyd Nicholas Trefethen and David Bau III. Numerical Linear Algebra. SIAM Publications (1997)
Starting from this chapter, we describe various preconditioning techniques that are based on manipulation of a given matrix. These are classified into four categories: direct matrix extraction (or operator splitting type), inverse approximation (or inverse operator splitting type), multilevel Schur complements, and multilevel operator splitting (multilevel methods).
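As a foretaste of the first category, the sketch below (Python with NumPy/SciPy, our own illustration rather than code from later chapters) extracts M = diag(A) directly from A, i.e. the classical Jacobi splitting, and supplies the action of M^{-1} to a Krylov solver as a preconditioner.

```python
# A minimal sketch of the first category (illustrative, not code from the
# book): extract M = diag(A) directly from A (the classical Jacobi splitting)
# and supply the action of M^{-1} to GMRES as a preconditioner.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres

n = 100
A = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()
b = np.ones(n)

d = A.diagonal()                                  # direct matrix extraction
M_inv = LinearOperator((n, n), matvec=lambda v: v / d)

x, info = gmres(A, b, M=M_inv)
print(info, np.linalg.norm(A @ x - b))            # info == 0 means converged
```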
An important class of problems in which significantly higher accuracies are needed relate to low-observable applications, where the quantities of interest are small residuals of large incident fields.
Oscar P. Bruno. Fast, high-order, high-frequency integral methods for computational acoustics and electromagnetics. Lecture Notes in Computational Science and Engineering 31. Springer-Verlag (2003)
However, wavelet representation of an oscillating matrix appears to be as dense as the original, i.e. oscillatory kernels cannot be handled efficiently by representing them in wavelet bases.
A. Averbuch et al. On efficient computation of multidimensional oscillatory integrals with local Fourier bases. Nonlinear Analysis (2001)
Acoustic scattering modelling provides a typical example of using a boundary element method to derive a dense matrix problem, as shown in Chapter 1. Such a physical problem is only a simple model of the full wave equations, or of the Maxwell equations of electromagnetism. The challenges are:
(i) the underlying system is dense and non-Hermitian;
(ii) the kernel of a boundary integral operator is highly oscillatory for high wavenumbers, implying that a large linear system must be solved. The oscillation means that the fast multipole method and the fast wavelet methods are not immediately applicable; a small illustrative sketch follows this list.
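To fix ideas, the following hedged sketch (Python with NumPy/SciPy; the kernel, wavenumber and discretization are our own illustrative assumptions, not the Chapter 1 model) assembles a small dense complex symmetric, hence non-Hermitian, system from an oscillatory kernel exp(ik|x - y|)/|x - y| and hands it to unpreconditioned GMRES; slow or failed convergence here is exactly what motivates this chapter.

```python
# A hedged illustration (kernel, wavenumber and discretization are our own
# assumptions, not the Chapter 1 model): a dense complex symmetric, hence
# non-Hermitian, matrix with oscillatory entries exp(ik|x - y|)/|x - y|.
import numpy as np
from scipy.sparse.linalg import gmres

k = 40.0                          # wavenumber (hypothetical value)
n = 200                           # must grow with k in a real computation
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

X, Y = np.meshgrid(x, x, indexing="ij")
r = np.abs(X - Y) + np.eye(n)     # crude diagonal regularization
A = h * np.exp(1j * k * np.abs(X - Y)) / r
b = np.exp(1j * k * x)            # incident plane-wave data (illustrative)

# Cap the work: a nonzero info flags non-convergence, which is precisely
# what motivates the preconditioners reviewed in this chapter.
sol, info = gmres(A, b, maxiter=200)
print(info, np.linalg.norm(A @ sol - b))
```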
This chapter reviews recent work on using preconditioned iterative solvers for such linear systems arising from acoustic scattering modelling and points out various challenges for future research. We consider the following.
The experiences of Fox, Huskey, and Wilkinson [from solving systems of orders up to 20] prompted Turing to write a remarkable paper [in 1948] … In this paper, Turing made several important contributions … He used the word “preconditioning” to mean improving the condition of a system of linear equations (a term that did not come into popular use until the 1970s).
Nicholas J. Higham. Accuracy and Stability of Numerical Algorithms. SIAM Publications (1996)
Matrix computing arises in the solution of almost all linear and nonlinear systems of equations. As computer power surges and high-resolution simulations are attempted, a method can quickly reach its applicability limits, and hence there is constant demand for new and fast matrix solvers. Preconditioning is the key to a successful iterative solver. It is the intention of this book to present a comprehensive exposition of the many useful preconditioning techniques.
Preconditioning equations mainly serve an iterative method and are often solved by a direct solver (occasionally by another iterative solver). It is therefore inevitable that we address direct solution techniques for both sparse and dense matrices. While fast solvers are frequently associated with iterative methods, for special problems a direct solver can be competitive. Moreover, there are situations where preconditioning is also needed for a direct solution method. This clearly demonstrates the close relationship between direct and iterative methods.
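The point is easily demonstrated. In the sketch below (Python with SciPy, an illustrative assumption rather than code from the book), an incomplete LU factorization, i.e. direct-solver machinery, preconditions GMRES on a small nonsymmetric system, and the iteration counts with and without the preconditioner are compared.

```python
# A minimal sketch (SciPy assumed; illustrative, not the book's code): an
# incomplete LU factorization preconditions GMRES on a small nonsymmetric
# system; the iteration counts with and without it are compared.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres, spilu

n = 400
# Nonsymmetric tridiagonal matrix (diffusion plus a convection-like skew).
A = diags([-1.2, 2.0, -0.8], [-1, 0, 1], shape=(n, n)).tocsc()
b = np.ones(n)

counts = {"plain": 0, "ilu": 0}
def counter(key):
    def cb(pr_norm): counts[key] += 1
    return cb

gmres(A, b, callback=counter("plain"), callback_type="pr_norm")

ilu = spilu(A, drop_tol=1e-4)             # direct technique as preconditioner
M = LinearOperator((n, n), matvec=ilu.solve)
gmres(A, b, M=M, callback=counter("ilu"), callback_type="pr_norm")

print(counts)                             # the "ilu" count is far smaller
```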
Parallelism has sometimes been viewed as a rare and exotic subarea of computing, interesting but of little relevance to the average programmer. A study of trends in applications, computer architecture, and networking shows that this view is no longer tenable. Parallelism is becoming ubiquitous, and parallel computing is becoming central to the programming enterprise.
Ian Foster. Designing and Building Parallel Programs. Addison-Wesley (1995)
I would rather kill myself than debug an MPI program.
Anonymous
Parallel computing represents a major research direction for the future and offers the best, and often the only, solution to large-scale computational problems with today's technology. A book on fast solvers is incomplete without a discussion of this important topic. However, an incomplete description of the subject would be of little use, and there are already many books available. Nevertheless, the author believes that too much emphasis has been placed on the computer scientist's view (on parallelization), so that a beginner may feel either confused by the various warnings and the jargon of new phrases, or intimidated by the complexity of some published programs (algorithms) for well-known methods. Hence we choose to give complete details for a few selected examples that fall into the category of ‘embarrassingly parallelizable’ methods.
Therefore the purpose of this chapter is to convey two simple messages.
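As a foretaste, here is a minimal sketch of an ‘embarrassingly parallelizable’ computation, written with Python's standard multiprocessing module purely for illustration (MPI would be the tool of choice at scale): each worker computes an independent block of rows of a matrix-vector product, and no communication occurs until the final gather.

```python
# A minimal sketch of an 'embarrassingly parallelizable' task, using Python's
# standard multiprocessing module purely for illustration (MPI is the natural
# tool at scale). Each worker computes one independent block of rows of a
# matrix-vector product; no communication is needed until the final gather.
import numpy as np
from multiprocessing import Pool

n = 1000
rng = np.random.default_rng(0)    # fixed seed so workers rebuild the same data
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)

def block_matvec(bounds):
    lo, hi = bounds
    return A[lo:hi] @ x           # independent work: one block of rows

if __name__ == "__main__":
    p = 4                         # number of workers (hypothetical choice)
    blocks = [(i * n // p, (i + 1) * n // p) for i in range(p)]
    with Pool(p) as pool:
        parts = pool.map(block_matvec, blocks)
    y = np.concatenate(parts)     # gather the partial results
    print(np.allclose(y, A @ x))  # True
```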