Since a large class of physical problems is defined on bounded domains, we focus on integral equations on bounded domains. As we know, a bounded domain in ℝ^d may be well approximated by a polygonal domain, which is a union of simplexes, cubes and perhaps L-shaped domains. To develop fast Galerkin, Petrov–Galerkin and collocation methods for solving the integral equations, we need multiscale bases and collocation functionals on polygonal domains. Simplexes, cubes and L-shaped domains are typical examples of invariant sets. This chapter is devoted to a description of the construction of multiscale basis functions, including multiscale orthogonal bases, interpolating bases and multiscale collocation functionals. The multiscale basis functions that we construct here are discontinuous piecewise polynomials. For this reason, we describe their construction on invariant sets, from which bases on a polygonal domain can then be assembled.
To illustrate the idea of the construction, we start with examples on [0, 1], the simplest example of an invariant set. This will be done in Section 4.1. Constructions of multiscale basis functions and collocation functionals on invariant sets are based on self-similar partitions of the sets. Hence, we discuss such partitions in Section 4.2. Based on such self-similar partitions, we describe constructions of multiscale orthogonal bases in Section 4.3. For the construction of the multiscale interpolating basis, we require the availability of the multiscale interpolation points. Section 4.4 is devoted to the notion of refinable sets, which form the basis for the construction of the multiscale interpolation points. Finally, in Section 4.5, we present the construction of multiscale interpolating bases.
Multiscale functions on the unit interval
This section illustrates the idea behind the construction of orthogonal multiscale piecewise polynomial bases on an invariant set. Here we consider the simplest invariant set, Ω ≔ [0, 1].
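As a concrete, minimal illustration of what such a construction produces (a sketch only, not the book's general piecewise polynomial construction), the following Python fragment builds the simplest orthogonal multiscale family on [0, 1]: the piecewise-constant (Haar-type) functions associated with the dyadic self-similar partition generated by the maps x/2 and (x + 1)/2. All names are illustrative.

import numpy as np

def dyadic_partition(level):
    """Self-similar partition of [0, 1] into 2**level subintervals,
    obtained by repeatedly applying the maps x/2 and (x + 1)/2."""
    knots = np.linspace(0.0, 1.0, 2**level + 1)
    return list(zip(knots[:-1], knots[1:]))

def haar_function(x, level, k):
    """Orthonormal piecewise-constant multiscale function supported on
    the k-th subinterval of the level-`level` dyadic partition."""
    y = 2.0**level * x - k                       # pull back to [0, 1)
    h = np.where((0.0 <= y) & (y < 0.5), 1.0,
        np.where((0.5 <= y) & (y < 1.0), -1.0, 0.0))
    return 2.0**(level / 2.0) * h                # L2 normalisation

# quick orthonormality check on a fine quadrature grid
x = (np.arange(4096) + 0.5) / 4096
f1, f2 = haar_function(x, 1, 0), haar_function(x, 1, 1)
print(np.mean(f1 * f1), np.mean(f1 * f2))        # approximately 1 and 0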
The goal of this chapter is to develop efficient solvers for the discrete linear systems resulting from discretization of the Fredholm integral equation of the second kind by the multiscale methods discussed in previous chapters. We introduce the multilevel augmentation method (MAM) and the multilevel iteration method (MIM) for solving operator equations based on multilevel decompositions of the approximate subspaces. Reflecting the direct sum decompositions of the subspaces, the coefficient matrix of the linear system has a special structure. Specifically, the matrix corresponding to a finer level of approximate spaces is obtained by augmenting the matrix corresponding to a coarser level with submatrices that correspond to the difference spaces between the finer and coarser levels. The main idea is to split the matrix into a sum of two matrices, one reflecting its lower frequency and the other its higher frequency. The splitting must be chosen in such a way that the inverse of the lower-frequency matrix either has an explicit form or can be computed at low cost.
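To make the splitting idea concrete, here is a schematic sketch (not the book's exact MAM update) of a stationary iteration built from such a splitting: the easily inverted part keeps the block associated with the coarse level, and the remainder is moved to the right-hand side. Whether such an iteration converges depends on the splitting; the multilevel structure analysed in this chapter is what controls this for the MAM. Names such as split_solve and n_coarse are illustrative.

import numpy as np

def split_solve(A, b, n_coarse, n_iter=20):
    """Schematic splitting iteration: A = A_low + A_high, where A_low
    retains the leading coarse-level block (and the identity elsewhere)
    and is therefore cheap to invert."""
    n = A.shape[0]
    A_low = np.eye(n)
    A_low[:n_coarse, :n_coarse] = A[:n_coarse, :n_coarse]
    A_high = A - A_low
    u = np.zeros(n)
    for _ in range(n_iter):
        u = np.linalg.solve(A_low, b - A_high @ u)
    return u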
In this chapter we introduce the MAM and MIM and provide a complete analysis of their convergence and stability.
Multilevel augmentation methods
In this section, we describe a general setting of the MAM for solving operator equations. This method is based on a standard approximation method at a coarse level and updates the resulting approximate solutions by adding details corresponding to higher levels in a direct sum decomposition. We prove that this method provides the same order of convergence as the original approximation method.
Multilevel augmentation methods for solving operator equations
We begin with a description of the general setup for the operator equations under consideration.
In this chapter we develop multiscale methods for solving the Hammerstein equation and the nonlinear boundary integral equation resulting from a reformulation of a boundary value problem of the Laplace equation with nonlinear boundary conditions. Fast algorithms are proposed using the MAM, in conjunction with matrix truncation strategies and numerical integration techniques for the integrals that appear in the solution process. We prove that the proposed methods require only linear (up to a logarithmic factor) computational complexity and have the optimal convergence order.
In the section that follows we discuss the critical issues in solving nonlinear integral equations. This sheds light on the ideas developed later in this chapter. In Section 10.2, we introduce the MAM for solving Hammerstein equations and provide a complete convergence analysis for the proposed method. In Section 10.3, we develop the MAM for solving the nonlinear boundary integral equation resulting from a reformulation of a boundary value problem of the Laplace equation with nonlinear boundary conditions. We present numerical experiments in Section 10.4.
Critical issues in solving nonlinear equations
Nonlinear integral equations model many problems in mathematical physics. The Hammerstein equation is a typical example of a nonlinear integral equation. Moreover, boundary value problems of the Laplace equation serve as mathematical models for many important applications. Making use of the fundamental solution of the equation, we can reformulate such boundary value problems as integral equations defined on the boundary (see Section 2.2.3). For linear boundary conditions, the resulting boundary integral equations are linear, and numerical methods for them have been studied extensively. Nonlinear boundary conditions also arise in various applications, and in these cases the reformulation of the corresponding boundary value problems leads to nonlinear integral equations.
The nonlinearity introduces difficulties into the numerical solution of the equation, which normally requires an iteration scheme in which the equation is locally linearized and the resulting linear integral equation is solved at each step.
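The following small sketch illustrates this point for a model Hammerstein equation u(t) = f(t) + ∫_0^1 K(t, s) ψ(u(s)) ds discretised by the midpoint rule; each Newton step solves a linearized integral equation. The kernel, nonlinearity and right-hand side are chosen purely for illustration and are not taken from the book.

import numpy as np

n = 200
s = (np.arange(n) + 0.5) / n                  # midpoint quadrature nodes
w = np.full(n, 1.0 / n)                       # quadrature weights
K = np.exp(-np.abs(s[:, None] - s[None, :]))  # smooth model kernel
f = np.sin(np.pi * s)                         # model right-hand side

psi  = lambda u: u / (1.0 + u**2)             # model nonlinearity
dpsi = lambda u: (1.0 - u**2) / (1.0 + u**2)**2

u = f.copy()
for _ in range(10):
    r = f + K @ (w * psi(u)) - u              # residual of the discrete equation
    J = K * (w * dpsi(u))[None, :] - np.eye(n)  # Jacobian of the linearized equation
    u = u - np.linalg.solve(J, r)             # one Newton step
print(np.linalg.norm(f + K @ (w * psi(u)) - u))  # final residual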
In this chapter we consider solving the eigenproblem of a weakly singular integral operator K. As we know, the spectrum of a compact integral operator K consists of a countable number of eigenvalues whose only accumulation point is zero, which may or may not itself be an eigenvalue. We explain how multiscale methods can be used to compute the nonzero eigenvalues of K rapidly and efficiently. We begin with a brief introduction to the subject.
Introduction
Many practical problems in science and engineering are formulated as eigenproblems of compact linear integral operators (cf. [44]). Standard numerical treatments of the eigenproblem normally discretize the compact integral operator into a matrix and then solve the eigenproblem of the resulting matrix. The computed eigenvalues and associated eigenvectors of the matrix are considered approximations of the corresponding eigenvalues and eigenvectors of the compact integral operator. In particular, the Galerkin, Petrov–Galerkin, collocation, Nyström and degenerate kernel methods are commonly used for the approximation of eigenvalues and eigenvectors of compact integral operators. It is well known that the matrix resulting from a discretization of a compact integral operator is dense, and solving the eigenproblem of a dense matrix requires a significant amount of computational effort. Hence, fast algorithms for solving such a problem are highly desirable.
We are interested in developing a fast collocation method for solving the eigenproblem of a compact linear integral operator K with a weakly singular kernel on the space L∞(Ω). Wavelet and multiscale methods have recently been developed (see, for example, [28, 64, 67, 68, 95, 108, 202] and the references cited therein) for the numerical solution of weakly singular Fredholm integral equations of the second kind; some of them were discussed in the previous chapter. The essence of these methods is to approximate the dense matrix that results from the discretization of the integral operator by a sparse matrix and to solve the resulting sparse linear system.
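As a toy illustration of this workflow (discretise the operator, compress the resulting matrix, compute matrix eigenvalues), consider the sketch below. In the book the compression comes from the vanishing moments of the multiscale basis, whereas here, for brevity, small entries of a Nyström matrix for a smooth model kernel are simply dropped; the kernel and thresholds are illustrative.

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import eigs

n = 400
s = (np.arange(n) + 0.5) / n
A = np.exp(-5.0 * np.abs(s[:, None] - s[None, :])) / n  # Nystrom matrix of a model kernel
A_compressed = np.where(np.abs(A) > 1e-4, A, 0.0)       # crude compression for illustration
vals = eigs(csr_matrix(A_compressed), k=5, return_eigenvectors=False)
print(sorted(np.abs(vals), reverse=True))                # a few dominant eigenvalues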
Fredholm equations arise in many areas of science and engineering. Consequently, they are a central topic in applied mathematics. Traditional numerical methods, developed during the period prior to the mid-1980s, mainly include quadrature, collocation and Galerkin methods. Unfortunately, all of these approaches suffer from the fact that the resulting discretization matrices are dense, that is, they have a large number of nonzero entries. This bottleneck leads to significant computational costs for the solution of the corresponding integral equations.
The recent appearance of wavelets as a new computational tool in applied mathematics has given a new direction to the numerical solution of Fredholm integral equations. Shortly after their introduction it was discovered that using a wavelet basis for a singular integral equation leads to numerically sparse discretization matrices. This observation, combined with a truncation strategy, then led to a fast numerical solution of this class of integral equations.
Approximately 20 years ago the authors of this book began a systematic study of the construction of wavelet bases suitable for solving Fredholm integral equations and explored their usefulness for developing fast multiscale Galerkin, Petrov–Galerkin and collocation methods. The purpose of this book is to provide a self-contained account of these ideas, together with some traditional material on Fredholm equations, so as to make it accessible to as large an audience as possible.
The goal of this book is twofold. It can be used as a reference text for practitioners who need to solve integral equations numerically and wish to use the new techniques presented here. At the same time, portions of this book can be used as a modern text on the numerical solution of integral equations, suitable for upper-level undergraduate students as well as graduate students. Specifically, the first five chapters are designed for a one-semester course, which provides students with a solid background in integral equations and fast multiscale methods for their numerical solution.
The equations we consider in this book are primarily Fredholm integral equations of the second kind on bounded domains in Euclidean space. These equations serve as mathematical models for a multitude of physical problems and cover many important applications, such as radiosity equations for realistic image synthesis [18, 85, 244] and especially boundary integral equations [12, 177, 203], which themselves arise as reformulations of other problems, typically originating as partial differential equations. In practice, Fredholm integral equations are solved numerically using piecewise polynomial collocation or Galerkin methods, and when the order of the coefficient matrix (which is typically full) is large, the computational cost of generating the matrix as well as solving the corresponding linear system is high. Therefore, to enhance the range of applicability of the Fredholm equation methodology, it is critical to provide alternative algorithms that are fast, efficient and accurate. This book is concerned with this challenge: designing fast multiscale methods for the numerical solution of Fredholm integral equations.
The development and use of multiscale methods for solving integral equations has been a subject of intense recent study. The history of fast multiscale solutions of integral equations began with the introduction of multiscale Galerkin (Petrov–Galerkin) methods for solving integral equations, as presented in [28, 64, 68, 88, 94, 95, 202, 260, 261] and the references cited therein. Most noteworthy is the discovery in [28] that the representation of a singular integral operator in compactly supported orthonormal wavelets produces numerically sparse matrices. In other words, most of their entries are so small in absolute value that, to some degree of precision, they can be neglected without affecting the overall accuracy of the approximation. Later, the papers [94, 95] studied Petrov–Galerkin methods using periodic multiscale bases constructed from refinement equations for periodic elliptic pseudodifferential equations, and in this restricted setting investigated stability, convergence and matrix compression.
The purpose of this chapter is to present a multiscale collocation method for solving Fredholm integral equations of the second kind with weakly singular kernels. Among conventional numerical methods for solving integral equations, the collocation method receives more favorable attention in engineering applications because of its lower computational cost in generating the coefficient matrix of the corresponding discrete equations. In comparison, the implementation of the Galerkin method requires much more computational effort for the evaluation of integrals (see, for example, [12, 19, 77]). Nonetheless, most of the attention in multiscale and wavelet methods for boundary integral equations has been paid to Galerkin or Petrov–Galerkin methods (see [28, 64, 95] and the references cited therein). These methods are amenable to L2 analysis, in which the vanishing moments of the multiscale basis functions naturally lead to matrix truncation techniques. For collocation methods, the appropriate setting is the space L∞, and this poses challenging technical obstacles to the identification of good matrix truncation strategies. Following [69], we present a construction of multiscale basis functions and the corresponding multiscale collocation functionals, both having vanishing moments. These basis functions and collocation functionals lead to a numerically sparse matrix representation of the Fredholm integral operator. A proper truncation of such a numerically sparse matrix results in a fast numerical algorithm for solving the equation that preserves the optimal convergence order up to a logarithmic factor.
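The kind of truncation meant here can be sketched as follows: entries whose collocation functional and basis function have well-separated supports are set to zero, and in a fast method only the surviving entries would be generated at all. In the book the cutoff is level-dependent and chosen so that accuracy is preserved; the single-parameter version below, with illustrative names, only conveys the shape of the strategy.

import numpy as np

def truncate_by_support_distance(A, row_supports, col_supports, cutoff):
    """Zero the entries of A whose row functional and column basis function
    have supports (given as (left, right) intervals on the line) farther
    apart than `cutoff`."""
    B = A.copy()
    for i, (a0, a1) in enumerate(row_supports):
        for j, (b0, b1) in enumerate(col_supports):
            if max(b0 - a1, a0 - b1, 0.0) > cutoff:
                B[i, j] = 0.0
    return B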
In Section 7.1, we describe the multiscale basis functions and the corresponding collocation functionals needed for developing the fast algorithm. Section 7.2 is devoted to a presentation of the multiscale collocation method. We analyze the proposed method in Section 7.3, giving estimates of the convergence order and computational costs, and a bound on the condition number of the related coefficient matrix.
Multiscale basis functions and collocation functionals
Multiscale collocation methods require the availability of multiscale basis functions and collocation functionals that have vanishing moments of certain degrees.
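To fix ideas, having vanishing moments of degree p means that integrating the function against any polynomial of degree less than p gives zero. A quick numerical check of this property, using an illustrative quadratic (the shifted Legendre polynomial of degree 2, which has two vanishing moments on [0, 1]), might look as follows; the helper name is hypothetical.

import numpy as np

def has_vanishing_moments(w, p, n_quad=20000, tol=1e-6):
    """Check numerically that int_0^1 w(x) x**k dx = 0 for k = 0, ..., p-1."""
    x = (np.arange(n_quad) + 0.5) / n_quad     # midpoint rule on [0, 1]
    return all(abs(np.mean(w(x) * x**k)) < tol for k in range(p))

w = lambda x: 6.0 * x**2 - 6.0 * x + 1.0       # shifted Legendre polynomial of degree 2
print(has_vanishing_moments(w, 2))             # True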
The recent appearance of wavelets as a new computational tool in applied mathematics has given a new impetus to the field of numerical analysis of Fredholm integral equations. This book gives an account of the state of the art in the study of fast multiscale methods for solving these equations based on wavelets. The authors begin by introducing essential concepts and describing conventional numerical methods. They then develop fast algorithms and apply them to solving linear and nonlinear Fredholm integral equations of the second kind, ill-posed integral equations of the first kind and eigenproblems of compact integral operators. Theorems of functional analysis used throughout the book are summarised in the appendix. The book is an essential reference for practitioners wishing to use the new techniques. It may also be used as a text, with the first five chapters forming the basis of a one-semester course for advanced undergraduates or beginning graduates.