An inverse problem assumes a direct problem that is a well-posed problem of mathematical physics. In other words, if we know completely a “physical device,” we have a classical mathematical description of this device including uniqueness, stability and existence of a solution of the corresponding mathematical problem.
Victor Isakov. Inverse Problems for Partial Differential Equations. Springer-Verlag (1998)
Image restoration is historically one of the oldest concerns in image processing and is still a necessary preprocessing step for many applications.
Gilles Aubert and Pierre Kornprobst. Mathematical Problems in Image Processing. Springer-Verlag (2002)
However, for the time being it is worthwhile recalling the remark of Lanczos: “A lack of information cannot be remedied by any mathematical trickery.” Hence in order to determine what we mean by a solution it will be necessary to introduce “nonstandard” information that reflects the physical situation we are trying to model.
David Colton and Rainer Kress. Integral Equation Methods in Scattering Theory. Wiley (1983)
Research into inverse problems has become increasingly popular for two reasons:
(i) there is an urgent need to understand these problems and find adequate solution methods; and
(ii) the underlying mathematics is intriguingly nonlinear and is naturally posed as a challenge to mathematicians and engineers alike.
It is customary for an introduction to inverse problems for boundary value problems to begin with the somewhat unhelpful terms 'ill-posed problems' or 'improperly posed problems'.
Whenever both the multigrid method and the domain decomposition method work, the multigrid is faster.
Jinchao Xu, Lecture at University of Leicester EPSRC Numerical Analysis Summer School, UK (1998)
This paper provides an approach for developing completely parallel multilevel preconditioners.… The standard multigrid algorithms do not allow for completely parallel computations, since the computations on a given level use results from the previous levels.
James H. Bramble, et al. Parallel multilevel preconditioners. Mathematics of Computation, Vol. 55 (1990)
Multilevel methods [including multigrid methods and multilevel preconditioners] represent new directions in the recent research on domain decomposition methods … they have wide applications and are expected to dominate the main stream researches in scientific and engineering computing in 1990s.
Tao Lu, et al. Domain Decomposition Methods. Science Press, Beijing (1992)
The linear system (1.1) may represent the result of discretizing a continuous (operator) problem on the finest grid, which corresponds to a user-required resolution. To solve such a system, or to find an efficient preconditioner for it, it can be advantageous to set up a sequence of coarser grids from which much efficiency can be gained.
This sequence of coarser grids can be nested with each grid contained in all finer grids (in the traditional geometry-based multigrid methods), or non-nested with each grid contained only in the finest grid (in the various variants of the domain decomposition methods), or dynamically determined from the linear system alone in a purely algebraic way (the recent algebraic multigrid methods).
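The nested-grid idea can be made concrete in the simplest setting: a two-grid correction scheme for the 1D model problem -u'' = f on (0, 1) with zero boundary values. The sketch below is illustrative only; the damped Jacobi smoother, full-weighting restriction and linear interpolation are standard choices assumed by us, not taken from the text.

```python
import numpy as np

def smooth(u, f, h, sweeps, omega=2.0/3.0):
    """Damped Jacobi sweeps for the 1D model problem -u'' = f."""
    for _ in range(sweeps):
        up = np.concatenate(([0.0], u, [0.0]))   # zero boundary values
        u = (1 - omega) * u + omega * 0.5 * (up[:-2] + up[2:] + h**2 * f)
    return u

def residual(u, f, h):
    up = np.concatenate(([0.0], u, [0.0]))
    return f - (2.0 * u - up[:-2] - up[2:]) / h**2

def two_grid(f, cycles=30, sweeps=2):
    """Two-grid cycle: pre-smooth, coarse-grid correction, post-smooth.
    n = len(f) must be odd so every other fine point forms the coarse grid."""
    n = len(f)
    h = 1.0 / (n + 1)
    nc = (n - 1) // 2                            # coarse interior points
    H = 2.0 * h                                  # coarse mesh width
    Ac = (2.0 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / H**2
    u = np.zeros(n)
    for _ in range(cycles):
        u = smooth(u, f, h, sweeps)
        r = residual(u, f, h)
        rc = 0.25 * r[0:-2:2] + 0.5 * r[1::2] + 0.25 * r[2::2]  # full weighting
        ec = np.linalg.solve(Ac, rc)             # exact coarse solve
        e = np.zeros(n)                          # linear interpolation back
        e[1::2] = ec
        ecp = np.concatenate(([0.0], ec, [0.0]))
        e[0::2] = 0.5 * (ecp[:-1] + ecp[1:])
        u = smooth(u + e, f, h, sweeps)
    return u
```

Replacing the exact coarse solve by a recursive application of the same scheme yields the multigrid V-cycle; determining the coarse problems from the matrix alone gives the algebraic variant mentioned above.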
As an example of what is often called a complex system, the power grid is made up of many components whose complex interactions are not effectively computable. Accordingly, some scientists have found it more useful to study the power grid's macroscopic behaviour than to dissect individual events.
Sara Robinson. The power grid as complex system. SIAM News, Vol. 36 (2003)
The electrical power network is a real life necessity all over the world; however, delivering the power supply stably while allowing various demand pattern changes and adjustments is an enormous challenge. Mathematically speaking, the beauty of such networks lies in their providing a challenging set of nonlinear differential-algebraic equations (DAEs) in the transient case and a set of nonlinear algebraic equations in the equilibrium case [438, 89, 292].
This chapter introduces the equilibrium equations, discusses some recent fast nonlinear methods for computing the fold bifurcation parameter and finally highlights the open challenge arising from computing the Hopf bifurcation parameter, where one must solve a new system of size O(n²) for an original problem of size n. We shall consider the following.
Section 14.1 The model equations
Section 14.2 Fold bifurcation and arc-length continuation
Section 14.3 Hopf bifurcation and solutions
Section 14.4 Preconditioning issues
Section 14.5 Discussion of software and the supplied Mfiles
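Underlying all of these computations is Newton's method for a nonlinear algebraic system F(x) = 0. The sketch below applies it, with a finite-difference Jacobian, to a toy two-equation balance with made-up coefficients; the actual model equations appear in Section 14.1 and are not reproduced here.

```python
import numpy as np

def newton(F, x0, tol=1e-10, max_it=50):
    """Plain Newton iteration for F(x) = 0 with a forward-difference
    Jacobian; a minimal sketch, not the chapter's method."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_it):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            return x
        n = len(x)
        J = np.empty((n, n))
        h = 1e-7
        for j in range(n):                  # column j of the FD Jacobian
            xp = x.copy()
            xp[j] += h
            J[:, j] = (F(xp) - Fx) / h
        x = x - np.linalg.solve(J, Fx)      # Newton update
    return x

# Toy "power-flow-like" equilibrium in the angle d = z[0] and voltage
# magnitude v = z[1] at a single bus; coefficients are illustrative.
lam = 0.5                                   # load parameter
F = lambda z: np.array([
    z[1] * np.sin(z[0]) - 0.1 * lam,                 # active power balance
    z[1] * np.cos(z[0]) - z[1]**2 + 0.2 * lam,       # reactive power balance
])
z = newton(F, np.array([0.0, 1.0]))
```

Tracing such equilibria as lam varies, past points where the Jacobian becomes singular, is precisely what the arc-length continuation of Section 14.2 addresses.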
As we will see, iterative methods are not only great fun to play with and interesting objects for analysis, but they are really useful in many situations. For truly large problems they may sometimes offer the only way towards a solution.
Henk A. van der Vorst. Iterative Krylov Methods for Large Linear Systems. Cambridge University Press (2003)
A similar work [on the fast multipole method] was done in 3D by Rokhlin. As in 2D, the multistep algorithm was not properly explained.
Eric Darve. Fast Multipole Method, preprint, Paris, France (1997)
An iterative method for the linear system Ax = b generates a sequence of approximate solutions x^(j) to the exact answer x*, each ideally with a smaller error than the last, by using A repeatedly and without modifying it. The saving from using an iterative method lies in early termination of the sequence, as most practical applications are only interested in finding a solution x close enough to x*. Therefore, it almost goes without saying that the essence of an iterative method is fast convergence, or at least convergence. When this is not possible for (1.1), we shall consider (1.2) with a suitable M.
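As a minimal concrete instance of such a method (our illustration, not one singled out by the text), the classical Jacobi iteration uses A only through its diagonal and off-diagonal parts and terminates as soon as the residual is small enough:

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_it=500):
    """Jacobi iteration x^(j+1) = D^{-1} (b - R x^(j)) with A = D + R,
    stopped early once the relative residual drops below tol."""
    D = np.diag(A)                 # diagonal part of A
    R = A - np.diag(D)             # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_it):
        x = (b - R @ x) / D
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break                  # early termination
    return x

# Strictly diagonally dominant example, for which Jacobi converges.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi(A, b)
```

For a general A the iteration may diverge, which is exactly the situation in which one turns to the preconditioned form (1.2).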
This chapter will review a selection of iterative methods for later use as building blocks for preconditioner designs and testing.
In the last few years we have studied preconditioning techniques based on sparse approximate inverses and have found them to be quite effective.
B. Carpentieri, et al. SIAM Journal on Scientific Computing, Vol. 25 (2003)
The objective is to remove the smallest eigenvalues of A which are known to slow down the convergence of GMRES.
Jocelyne Erhel, et al. Journal of Computational and Applied Mathematics, Vol. 69 (1996)
The most successful preconditioning methods in terms of reducing the number of iterations, such as the incomplete LU decomposition or symmetric successive relaxation (SSOR), are notoriously difficult to implement in a parallel architecture, especially for unstructured matrices.
Marcus J. Grote and Thomas Huckle. SIAM Journal on Scientific Computing, Vol. 18 (1997)
This chapter will discuss the construction of Inverse Type preconditioners (or approximate inverse type preconditioners), i.e. for equation (1.2)
MAx = Mb
and other types as shown on Page 3. Our first concern will be a theoretical one: how to characterize A⁻¹. It turns out that addressing this concern reveals most of the underlying ideas of inverse type preconditioners. We shall present the following.
Section 5.1 How to characterize A⁻¹ in terms of A
Section 5.2 Banded preconditioner
Section 5.3 Polynomial p_k(A) preconditioners
Section 5.4 General and adaptive SPAI preconditioners
Section 5.5 AINV type preconditioner
Section 5.6 Multi-stage preconditioners
Section 5.7 The dual tolerance self-preconditioning method
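To preview Section 5.3 in the simplest possible terms: if the spectral radius of I - A is below one (for instance after diagonal scaling), the truncated Neumann series M = sum_{k=0}^{m} (I - A)^k is an explicit polynomial p_m(A) approximating A⁻¹, since M A = I - (I - A)^{m+1}. A dense illustrative sketch, not the adaptive SPAI construction of Section 5.4:

```python
import numpy as np

def neumann_preconditioner(A, m=5):
    """Truncated Neumann series M = sum_{k=0}^{m} (I - A)^k, a
    polynomial approximation to A^{-1}; valid when the spectral
    radius of I - A is below one.  Dense form for illustration only;
    in practice M would be applied via m matrix-vector products."""
    n = A.shape[0]
    I = np.eye(n)
    N = I - A
    M = I.copy()
    T = I.copy()
    for _ in range(m):
        T = T @ N              # accumulate (I - A)^k
        M = M + T
    return M

A = np.array([[1.0, 0.2, 0.0],
              [0.2, 1.0, 0.2],
              [0.0, 0.2, 1.0]])
M = neumann_preconditioner(A, m=8)
# M A = I - (I - A)^9, so M A is close to the identity here
```

The error term (I - A)^{m+1} shows why such preconditioners degrade when A has eigenvalues far from 1, motivating the more general constructions of this chapter.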
There seem to be at least two important issues for which wavelet-like expansions have already proven to work with great success, namely preconditioning linear systems stemming from Galerkin approximations for elliptic problems and compressing full stiffness matrices arising in connection with integral or pseudodifferential operators, to facilitate nearly optimal complexity algorithms for the solution of the corresponding discrete problems.
Wolfgang Dahmen, et al. Multiscale methods for pseudodifferential equations. Recent Advances in Wavelet Analysis (1994)
In the usual FEM setting, the Schur complement methods from Chapter 7 perform best if there is some kind of 'diagonal' dominance. This chapter proposes two related and efficient iterative algorithms, based on the wavelet formulation, for solving an operator equation with conventional arithmetic. In the new wavelet setting, the stiffness matrix possesses the desirable properties for using the Schur complements. The proposed algorithms utilize the Schur complements recursively; they differ only in how the coarse levels are used to solve the Schur complement equations. In the first algorithm we precondition a Schur complement by using coarse levels, while in the second we use approximate Schur complements to construct a preconditioner. We believe that our algorithms can be adapted to higher-dimensional problems more easily than previous work on the subject.
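One level of the Schur complement recursion can be sketched for a 2 x 2 block system; the sketch below is dense and uses exact solves purely for illustration, whereas the algorithms above instead hand the Schur complement equation to coarser levels.

```python
import numpy as np

def schur_solve(A11, A12, A21, A22, b1, b2):
    """Solve [[A11, A12], [A21, A22]] [x1; x2] = [b1; b2] by block
    elimination: form S = A22 - A21 A11^{-1} A12 and solve
    S x2 = b2 - A21 A11^{-1} b1, then back-substitute for x1."""
    A11_inv_b1 = np.linalg.solve(A11, b1)
    A11_inv_A12 = np.linalg.solve(A11, A12)
    S = A22 - A21 @ A11_inv_A12            # Schur complement
    x2 = np.linalg.solve(S, b2 - A21 @ A11_inv_b1)
    x1 = A11_inv_b1 - A11_inv_A12 @ x2     # back-substitution
    return x1, x2

# Example: split a well-conditioned 6x6 system into 3+3 blocks.
rng = np.random.default_rng(0)
n, k = 6, 3
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)
x1, x2 = schur_solve(A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:], b[:k], b[k:])
```

In the wavelet setting the (1,1) block corresponds to the fine-scale coefficients, and it is the equation in S that is passed down the levels.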
A clustered [eigenvalue] spectrum often translates in rapid convergence of GMRES.
Michele Benzi and Gene H. Golub. SIAM Journal on Matrix Analysis and Applications, Vol. 24 (2003)
The coupled matrix problems represent a vast class of scientific problems arising from the discretization of either systems of PDEs or coupled PDEs and integral equations, among other applications such as the Karush–Kuhn–Tucker (KKT) matrices from nonlinear programming [273, 43]. The reader may be aware that many coupled (nonlinear) systems can be solved by Uzawa type algorithms [92, 153, 144, 115], i.e. all equations are 'artificially' decoupled and solved in turn. A famous example of this strategy is the SIMPLE algorithm, widely used in computational fluid dynamics along with finite volume discretization [473, 334]. While there is much to do in designing better and more robust preconditioners for a single system such as (1.1), one major challenge in future research will be to solve the coupled problems, many of which have only been tackled recently.
This chapter will first review recent developments on a general coupled system and then discuss some specific coupled problems. The latter examples come from a wide range of challenging problems including elasticity, particle physics and electromagnetism. We shall discuss the following.
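The Uzawa strategy mentioned above can be sketched for the model saddle-point system [[A, B^T], [B, 0]] [u; p] = [f; g]: the two blocks are artificially decoupled and solved in turn. A minimal dense sketch; the relaxation parameter omega and the stopping test are our assumptions.

```python
import numpy as np

def uzawa(A, B, f, g, omega=1.0, tol=1e-10, max_it=2000):
    """Classical Uzawa iteration: solve the primal block for u with p
    frozen, then relax p against the constraint residual B u - g.
    Converges for SPD A when omega < 2 / lambda_max(B A^{-1} B^T)."""
    p = np.zeros(B.shape[0])
    for _ in range(max_it):
        u = np.linalg.solve(A, f - B.T @ p)   # primal solve
        r = B @ u - g                         # constraint residual
        if np.linalg.norm(r) < tol:
            break
        p = p + omega * r                     # dual update
    return u, p

# Small illustrative saddle-point problem with SPD A.
A = np.diag([2.0, 3.0])
B = np.array([[1.0, 1.0]])
f = np.array([1.0, 1.0])
g = np.array([0.5])
u, p = uzawa(A, B, f, g)
```

Each pass requires only a solve with A, which is why decoupling is attractive; the price is an outer iteration whose speed depends on the Schur complement B A⁻¹ Bᵀ.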
How much of the matrix must be zero for it to be considered sparse depends on the computation to be performed, the pattern of the nonzeros, and even the architecture of the computer. Generally, we say that a matrix is sparse if there is an advantage in exploiting its zeros.
Iain Duff, et al. Direct Methods for Sparse Matrices. Clarendon Press (1986)
To be fair, the traditional classification of solution methods as being either direct or iterative methods is an oversimplification and is not a satisfactory description of the present state of affairs.
Michele Benzi. Journal of Computational Physics, Vol. 182 (2002)
A direct method for the linear system Ax = b refers to any method that seeks the solution x, in a finite number of steps, by simplifying the general matrix A to some special and easily solvable form (1.3), e.g. a diagonal or triangular form. In the absence of computer roundoff, x will be the exact answer x*; however, unless symbolic computing is used, computer roundoff is present and hence the conditioning of A will affect the quality of x. Often a direct method is synonymous with the Gaussian elimination method, which essentially simplifies A to a triangular form or, equivalently, decomposes the matrix A into a product of triangular matrices. However, one may also choose its closely related variants such as the Gauss–Jordan method, the Gauss–Huard method or the Purcell method, especially when parallel methods are sought; refer to [143].
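The decomposition view of Gaussian elimination can be sketched as follows; there is no pivoting here, so the sketch assumes nonzero pivots and is not a production solver (which would pivot for stability).

```python
import numpy as np

def lu_no_pivot(A):
    """Gaussian elimination written as the factorization A = L U,
    with L unit lower triangular and U upper triangular.  Assumes all
    leading principal minors of A are nonzero (no pivoting)."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]       # elimination multiplier
            U[i, k:] -= L[i, k] * U[k, k:]    # zero out U[i, k]
    return L, U

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
L, U = lu_no_pivot(A)
```

Once L and U are available, Ax = b is solved by one forward and one backward triangular substitution, which is the "easily solvable form (1.3)" referred to above.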
The subject of “wavelets” is expanding at such a tremendous rate that it is impossible to give, within these few pages, a complete introduction to all aspects of its theory.
Ronald A. Devore and Bradley J. Lucier. Wavelets. Acta Numerica (1992)
If A is a bounded operator with a bounded inverse, then A maps any orthogonal basis to a Riesz basis. Moreover, all Riesz bases can be obtained as such images of an orthogonal basis. In a way, Riesz bases are the next best thing to an orthogonal basis.
Ingrid Daubechies. Ten Lectures on Wavelets. SIAM Publications (1992)
The discovery of wavelets, a result of the joint efforts of pure and applied mathematicians, is usually described as one of the most important advances in mathematics in the twentieth century. Through their powerful compression property, wavelets have satisfactorily solved many important problems in applied mathematics, such as signal and image processing; see [269, 166, 441, 509] for a summary. Many mathematical problems remain to be tackled before wavelets can be used for the solution of differential and integral equations in a general setting.
In this chapter, we aim to give an introduction to wavelet preconditioning and focus more on discrete wavelets. As far as the solution of operator equations is concerned, the construction of compactly supported and computable wavelet functions remains a challenge for the future. We discuss these issues in the following.
Potential fields play an important role in physics and geophysics because they describe the behavior of gravitational and electric fields as well as a number of other fields. Conversely, measurements of potential fields provide important information about the internal structure of bodies. For example, measurements of the electric potential at the Earth's surface when a current is sent into the Earth give information about the electrical conductivity while measurements of the Earth's gravity field or geoid provide information about the mass distribution within the Earth.
An example of this can be seen in Figure 21.1 in which the gravity anomaly over the northern part of the Yucatan peninsula in Mexico is shown [49]. The coast is visible as a thin white line. Note the ring structure that is visible in the gravity signal. These rings have led to the discovery of the Chicxulub crater which was caused by the massive impact of a meteorite. Note that the diameter of the impact crater is about 150 km! This crater is presently hidden by thick layers of sediments: at the surface the only apparent imprint of this crater is the presence of underground water-filled caves called “cenotes” at the outer edge of the crater. It was the measurement of the gravity field that made it possible to find this massive impact crater.
The equation that the gravitational or electric potential satisfies depends critically on the Laplacian of the potential.
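For reference, the governing equations alluded to here are Poisson equations, which reduce to Laplace's equation in source-free regions:

```latex
\nabla^2 \phi_{\mathrm{grav}} = 4\pi G\,\rho, \qquad
\nabla^2 \phi_{\mathrm{el}} = -\frac{\rho_e}{\varepsilon_0}, \qquad
\nabla^2 \phi = 0 \quad \text{(source-free region)},
```

where rho is the mass density, G the gravitational constant, rho_e the charge density and epsilon_0 the permittivity of free space.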