In this chapter we discuss several aspects of interpolation. In the first section we derive some simple interpolation properties that follow easily from the properties of the functions of the second kind studied earlier. It also turns out that interpolation of the positive real function Ωμ, whose Riesz–Herglotz–Nevanlinna measure μ is the measure used for the inner product, implies that in Ln the measure can be replaced by the rational Riesz–Herglotz–Nevanlinna measure of the interpolant without changing the inner product. Some general theorems in this connection are proved in Section 6.2; they will be important for the constructive proof of the Favard theorems discussed in Chapter 8. We then summarize the interpolation results that can be obtained with the reproducing kernels and with some functions that are, in a sense, reproducing kernels of the second kind.
We then show the connection with the Nevanlinna–Pick algorithm in Section 6.4. This algorithm provides an alternative way to find the coefficients of the recurrence for the reproducing kernels given in Section 3.2, without explicitly generating the kernels themselves. If all the interpolation points are at the origin, the algorithm reduces to the Schur algorithm, which was originally designed to check whether a given function belongs to the Schur class. It generates a sequence of Schur functions by Möbius transforms and extractions of zeros.
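To fix ideas, here is a minimal numerical sketch of the classical Schur algorithm just described, with all interpolation points at the origin. Given a Schur function f (analytic and bounded by one in the unit disk), it produces the Schur parameters γk = fk(0) by the classical step f_{k+1}(z) = (fk(z) − γk)/(z(1 − γ̄k fk(z))). The function names and the numerical treatment of the limit at z = 0 are illustrative choices, not the book's formulation.

```python
def schur_parameters(f, n, eps=1e-6):
    """Return the first n Schur parameters of the Schur function f.

    Each step extracts gamma_k = f_k(0) by a Moebius transform and then
    removes the resulting zero at the origin (division by z).
    """
    params = []
    fk = f
    for _ in range(n):
        gamma = fk(0.0)
        params.append(gamma)

        def next_fk(z, fk=fk, gamma=gamma):
            if z == 0.0:
                # Numerical stand-in for the limit z -> 0 after the
                # zero at the origin has been divided out.
                z = eps
            return (fk(z) - gamma) / (z * (1.0 - gamma.conjugate() * fk(z)))

        fk = next_fk
    return params

# Example: f(z) = z/2 is a Schur function; its parameters are 0, 1/2, 0, ...
print(schur_parameters(lambda z: z / 2, 3))
```

Each transformed function fk is again a Schur function as long as |γk| < 1, which is exactly the membership test the algorithm performs.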
In this chapter we shall collect the necessary preliminaries from complex analysis that we shall use frequently. Most of these results are classical and we shall give them mostly without proof.
We start with some elements of the theory of Hardy functions in the disk and the half plane in Section 1.1.
The important classes of functions analytic in the unit disk or half plane and having positive real part are called positive real for short; they are often named after Carathéodory. By a Cayley transform they can be mapped onto the class of functions analytic in the disk or half plane and bounded by one, the so-called Schur class. These classes are briefly discussed in Section 1.2.
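The Cayley transform in question can be made concrete. The map f = (F − 1)/(F + 1) sends a function F with positive real part to a function bounded by one, since |w − 1| < |w + 1| exactly when Re w > 0. A quick numerical check, using an illustrative Carathéodory function of our own choosing:

```python
def cayley(F):
    """Cayley transform: Caratheodory function -> Schur function."""
    return lambda z: (F(z) - 1) / (F(z) + 1)

# F(z) = (1 + z)/(1 - z) maps the unit disk onto the right half plane,
# so it has positive real part there; its Cayley transform is f(z) = z,
# which is indeed bounded by one in the disk.
F = lambda z: (1 + z) / (1 - z)
f = cayley(F)
for z in (0.3, -0.5j, 0.2 + 0.4j):
    print(abs(f(z)) < 1, abs(f(z) - z) < 1e-12)
```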
Inner–outer factorizations and spectral factors are discussed in Section 1.3.
The reproducing kernels are, since the work of Szegő, intimately related to the theory of orthogonal polynomials and they will be even more important for the case of orthogonal rational functions. Some of their elementary properties will be recalled in Section 1.4.
The 2 × 2 J-unitary and J-contractive matrix functions with entries in the Nevanlinna class will be important when we develop the recurrence relations for the kernels and the orthogonal rational functions. Some of their properties are introduced in Section 1.5.
Hardy classes
We shall be concerned with complex function theory on the unit circle and the upper half plane. The complex number field is denoted by C.
One might expect that, if there exist Szegő-style recurrence relations for the reproducing kernels, it should be possible to derive Szegő-style recurrence relations for the orthogonal functions themselves. Deriving these is, however, not as simple as for the reproducing kernels: the transition matrices that give the recurrence are not precisely J-unitary. It is still possible to obtain recurrences that coincide with the Szegő recurrence in the polynomial case, and this is done in the first section of this chapter.
Related to this recurrence are the so-called functions of the second kind. They are also solutions of the same recurrence but with different initial conditions. Because they will be important in obtaining several interpolation properties, we shall study them in some detail in Section 4.2.
General solutions of the recurrence, which are linear combinations of first and second kind functions, are then treated in Section 4.3 and we include there an analog of Green's formula. The latter will be used in Chapter 10 on moment problems.
Since the convergents of a continued fraction are linked by a three-term recurrence relation, there is a natural link between three-term recurrence relations and continued fractions. This is explained in Section 4.4.
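The link mentioned here rests on a classical fact: the numerators A_n and denominators B_n of the convergents A_n/B_n of the continued fraction b0 + a1/(b1 + a2/(b2 + ⋯)) both satisfy the same three-term recurrence A_n = b_n A_{n−1} + a_n A_{n−2}, B_n = b_n B_{n−1} + a_n B_{n−2}, with A_{−1} = 1, A_0 = b0, B_{−1} = 0, B_0 = 1. A small illustration (the numerical example is ours, not the book's):

```python
from fractions import Fraction

def convergents(b, a):
    """Yield the convergents A_n/B_n of b[0] + a[0]/(b[1] + a[1]/(...)).

    Both A_n and B_n are generated by the same three-term recurrence,
    differing only in their initial conditions.
    """
    A_prev, A = Fraction(1), Fraction(b[0])   # A_{-1}, A_0
    B_prev, B = Fraction(0), Fraction(1)      # B_{-1}, B_0
    yield A / B
    for an, bn in zip(a, b[1:]):
        A_prev, A = A, Fraction(bn) * A + Fraction(an) * A_prev
        B_prev, B = B, Fraction(bn) * B + Fraction(an) * B_prev
        yield A / B

# The continued fraction 1 + 1/(2 + 1/(2 + ...)) converges to sqrt(2);
# its convergents are 1, 3/2, 7/5, 17/12, 41/29, ...
cs = list(convergents([1, 2, 2, 2, 2], [1, 1, 1, 1]))
print(cs)
```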
Finally, Section 4.5 gives some remarks about the situation when not all the points αk are in O, but when they are arbitrarily distributed in O \ ∂O. The case where they are all on ∂O is discussed in Chapter 11.
The material of earlier chapters was illustrated with integral equations for functions of a single variable. In this chapter we develop interpolation and numerical integration tools and use them with the projection and Nyström methods, developed in Chapters 3 and 4, to solve multivariable integral equations. Our principal interest will be the solution of integral equations defined on surfaces in R3, with an eye towards solving boundary integral equations on such surfaces. The solution of boundary integral equations on piecewise smooth surfaces is taken up in Chapter 9.
In §5.1 we develop interpolation and numerical integration formulas for multivariable problems, and these are applied to integral equations over planar regions in §5.2. The interpolation and integration results of §5.1 are extended to surface problems in §5.3. Methods for the numerical solution of integral equations on surfaces are given in §§5.4 and 5.5.
Multivariable interpolation and numerical integration
Interpolation for functions of more than one variable is a large topic with applications to many areas. In this section we consider only those aspects of the subject that we need for our work in the numerical solution of integral equations. To simplify the notation and to make the development more intuitive, we consider only functions of two variables. Generalizations to functions of more than two variables should be fairly straightforward for the reader.
Applications of multivariable interpolation are generally based on first breaking up a large planar region R into smaller regions of an especially simple form and then carrying out polynomial interpolation over these smaller regions.
The study of boundary integral equation reformulations of Laplace's equation in three dimensions is quite an old one, with the names of many well-known physicists, engineers, and mathematicians associated with it. The development of practical numerical methods for the solution of such boundary integral equations lagged behind and is of more recent vintage, with most of it dating from the mid-1960s. In the 1980s there was an increased interest in the numerical analysis of such equations, and it has been quite an active area of research in the 1990s.
These boundary integral equations are defined on surfaces in space, and there is a far greater variety to such surfaces than is true of boundaries for planar problems. The surfaces may be either smooth or piecewise smooth, and when only piecewise smooth, there is a large variation as to the structure of edges and vertices present on the surface. In addition, most numerical methods require approximations of a piecewise polynomial nature over triangulations of the surface, in the manner developed in Chapter 5. These numerical methods lead to the need to solve very large linear systems, and until recently most computers had great difficulty in handling such problems. The practical aspects of setting up and solving such linear systems are more onerous for boundary integral equations in three dimensions, and this means that the numerical analysis problems of concern are often of a different nature than for planar boundary integral equations.
All the numerical methods of the preceding chapters involved the solution of systems of linear equations. When these systems are not too large, they can be solved by Gaussian elimination; for such systems, that is usually the simplest and most efficient approach to use. For larger linear systems, however, iteration is usually more efficient, and it is often the only practical means of solution. There is a large literature on general iteration methods for solving linear systems, but many of these general methods are often not efficient (or possibly, not even convergent) when used to solve the linear systems we have seen in the preceding chapters. In this chapter we define and analyze several iteration methods that seem especially suitable for solving the linear systems associated with the numerical solution of integral equations.
In §6.1 we give an iteration method for solving degenerate kernel integral equations and the associated linear systems. In §6.2 we define and analyze two-grid iteration methods for solving the systems associated with the Nyström method. And in §6.3 we consider related two-grid methods for projection methods. In our experience these are the most efficient numerical methods for solving the linear systems obtained when solving integral equations of the second kind. In §6.4 we define multigrid iteration methods, which are closely related to two-grid methods. Multigrid methods are among the most efficient methods for solving the linear systems associated with the numerical solution of elliptic partial differential equations, and they are also very efficient when solving Fredholm integral equations.
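The reduction that underlies the degenerate kernel case of §6.1 is worth recalling: for k(t, s) = Σ_i a_i(t) b_i(s), the equation x(t) − ∫ k(t, s) x(s) ds = y(t) has solution x = y + Σ_i c_i a_i, where the coefficients c_i solve the small linear system c_i − Σ_j (∫ b_i a_j) c_j = ∫ b_i y. The following rank-one sketch on [0, 1] is illustrative (our own kernel and data, with Gauss quadrature for the scalar integrals), not the iteration method of §6.1 itself.

```python
import numpy as np

def solve_degenerate(a_funcs, b_funcs, y, quad_pts=200):
    """Solve x - int_0^1 k(t,s) x(s) ds = y for k = sum_i a_i(t) b_i(s)."""
    s, w = np.polynomial.legendre.leggauss(quad_pts)
    s = 0.5 * (s + 1.0)          # map Gauss nodes from [-1, 1] to [0, 1]
    w = 0.5 * w
    n = len(a_funcs)
    # M[i][j] = int b_i(s) a_j(s) ds,  g[i] = int b_i(s) y(s) ds
    M = np.array([[np.sum(w * b(s) * a(s)) for a in a_funcs] for b in b_funcs])
    g = np.array([np.sum(w * b(s) * y(s)) for b in b_funcs])
    c = np.linalg.solve(np.eye(n) - M, g)
    return lambda t: y(t) + sum(ci * a(t) for ci, a in zip(c, a_funcs))

# Rank-one example: k(t, s) = t*s, y(t) = 1.  Then c = (1/2)/(1 - 1/3) = 3/4
# and the exact solution is x(t) = 1 + (3/4) t.
y = lambda t: np.ones_like(t, dtype=float)
x = solve_degenerate([lambda t: t], [lambda s: s], y)
print(float(x(1.0)))   # approximately 1.75
```

The point of the reduction is that only an n × n system must be solved, however fine the underlying quadrature; the iteration method of §6.1 exploits the same structure.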