In this chapter and the next we discuss spline spaces defined on triangulations of the unit sphere S in ℝ³. The spaces are natural analogs of the bivariate spline spaces discussed earlier in this book, and are made up of pieces of trivariate homogeneous polynomials restricted to S. Thus, they are piecewise spherical harmonics. As we shall see, virtually the entire theory of bivariate polynomial splines on planar triangulations carries over, although there are several significant differences. This chapter is devoted to the basic theory of spherical splines. Approximation properties of spherical splines are treated in the following chapter.
Spherical Polynomials
In this section we introduce the key building blocks for spherical splines. Throughout the chapter we write ν for a point on the unit sphere S in ℝ³. When there is no chance of confusion, we will also write v for the corresponding unit vector. Before introducing spherical polynomials, we need to discuss spherical triangles and spherical barycentric coordinates.
Spherical Triangles
Suppose ν₁, ν₂ are two points on the sphere which are not antipodal, i.e., they do not lie on a common line through the origin. Then the points ν₁, ν₂ divide the great circle passing through them into two circular arcs. We write 〈ν₁, ν₂〉 for the shorter of the two arcs. Its length is just the geodesic distance between ν₁ and ν₂.
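The geodesic distance just described is simply the angle between the two unit vectors, i.e., the arccosine of their dot product. The following sketch (an illustration, not part of the text) makes the computation concrete:

```python
import math

def geodesic_distance(v1, v2):
    """Length of the shorter great-circle arc between two unit
    vectors v1, v2 on S (assumed not antipodal)."""
    dot = sum(a * b for a, b in zip(v1, v2))
    # Clamp to guard against floating-point drift outside [-1, 1].
    dot = max(-1.0, min(1.0, dot))
    return math.acos(dot)

# Two points a quarter circle apart on the equator:
print(geodesic_distance((1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))  # pi/2
```

Clamping the dot product is worthwhile in practice: rounding can push it just outside [-1, 1], where acos is undefined.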
Definition 13.1. Suppose ν₁, ν₂, ν₃ are three points on the unit sphere S which lie strictly in one hemisphere.
The theory of univariate splines began its rapid development in the early sixties, resulting in several thousand research papers and a number of books. This development was largely over by 1980, and the bulk of what is known today was treated in the classic monographs of de Boor [Boo78] and Schumaker [Sch81]. Univariate splines have become an essential tool in a wide variety of application areas, and are by now a standard topic in numerical analysis books.
If 1960–1980 was the age of univariate splines, then the next twenty years can be regarded as the age of multivariate splines. Prior to 1980 there were some results for tensor-product splines, and engineers were using piecewise polynomials in two and three variables in the finite element method, but multivariate splines had attracted relatively little attention. Now we have an estimated 1500 papers on the subject.
The purpose of this book is to provide a comprehensive treatment of the theory of bivariate and trivariate polynomial splines defined on triangulations and tetrahedral partitions. We have been working on this book for more than ten years, and initially planned to include details on some of the most important applications, including for example CAGD, data fitting, surface compression, and numerical solution of partial differential equations. But to keep the size of the book manageable, we have reluctantly decided to leave applications for another monograph.
For us, a multivariate spline is a function which is made up of pieces of polynomials defined on some partition Δ of a set Ω, and joined together to ensure some degree of global smoothness.
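This definition is easiest to see in a minimal example. The following sketch (a univariate illustration of the same idea, with names chosen for this example only) joins the pieces p₁ = 0 and p₂(x) = x² at the knot x = 0 of a partition of Ω = [-1, 1]; the pieces agree in value and first derivative there, so the resulting function is globally C¹ but not C²:

```python
def s(x):
    """A piecewise polynomial on the partition {[-1, 0], [0, 1]} of
    Omega = [-1, 1]: the pieces 0 and x**2 meet at the knot x = 0
    with matching value and slope, giving global C^1 smoothness."""
    return x * x if x >= 0 else 0.0

# One-sided slopes at the knot agree (both are ~0), but the second
# derivative jumps from 0 to 2 there, so s is C^1 but not C^2.
h = 1e-6
left_slope = (s(0.0) - s(-h)) / h
right_slope = (s(h) - s(0.0)) / h
print(left_slope, right_slope)
```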
The discretization of boundary-value problems leads to very large systems of equations which often involve several thousand unknowns. The systems are particularly large for three-dimensional problems and for problems of higher order. Often the bandwidth of the matrices is so large that the classical Gauss elimination algorithm and its modern variants are not efficient methods. This suggests that even for linear problems, we should use iterative methods.
Iterative methods first became popular at the end of the fifties, primarily as a means for solving large problems using computers with a small memory. The methods developed then are no longer competitive, but they still provide useful ingredients for modern iterative methods, and so we review them in §1. The bulk of this chapter is devoted to the conjugate gradient method which is particularly useful for the solution of variational problems and saddle point problems. Since the CG methods discussed here can be applied to a broad spectrum of problems, they are competitive with the still faster multigrid methods to be discussed later (whose implementation is generally more complicated and requires more individual programming).
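The conjugate gradient iteration referred to above can be sketched in a few lines. This is a minimal dense-matrix version for illustration only; production codes exploit sparsity and add preconditioning:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Plain conjugate gradient for a symmetric positive definite
    system A x = b, with A given as a dense list of rows."""
    n = len(b)
    if max_iter is None:
        max_iter = n
    x = [0.0] * n
    r = list(b)              # residual r = b - A x for x = 0
    p = list(r)              # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# Small SPD example; in exact arithmetic CG finishes in at most n steps.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
```

In exact arithmetic the iteration terminates in at most n steps, but its practical value for large sparse systems is that far fewer steps already give an acceptable approximation.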
We begin by classifying problems according to the number n of unknowns:
Small problems: For linear problems we can use a direct method. For nonlinear problems (e.g., using the Newton method), all elements of the Jacobian matrices should be computed (at least approximately).
Midsized problems: If the matrices are sparse, we should make use of this fact. For nonlinear problems (e.g., for quasi-Newton methods), the Jacobian matrices should be approximated. Iterative methods can still be used even when the number of steps in the iteration exceeds n.
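The remark about computing Jacobian entries "at least approximately" can be illustrated by Newton's method with a forward-difference Jacobian. The following is a sketch for a system of two equations; the function names and the 2x2 solve by Cramer's rule are illustrative choices, not a prescription from the text:

```python
def newton_fd(F, x, steps=20, h=1e-7, tol=1e-12):
    """Newton's method for F(x) = 0 in two unknowns, with the
    Jacobian approximated entrywise by forward differences."""
    for _ in range(steps):
        f0 = F(x)
        if abs(f0[0]) + abs(f0[1]) < tol:
            break
        # Forward-difference Jacobian, one column per unknown.
        J = [[0.0, 0.0], [0.0, 0.0]]
        for j in range(2):
            xp = list(x)
            xp[j] += h
            fp = F(xp)
            for i in range(2):
                J[i][j] = (fp[i] - f0[i]) / h
        # Solve J d = -f0 by Cramer's rule (fine for a 2x2 sketch).
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        d0 = (-f0[0] * J[1][1] + f0[1] * J[0][1]) / det
        d1 = (f0[0] * J[1][0] - f0[1] * J[0][0]) / det
        x = [x[0] + d0, x[1] + d1]
    return x

# Intersect the circle x^2 + y^2 = 2 with the line x = y:
F = lambda x: [x[0] ** 2 + x[1] ** 2 - 2.0, x[0] - x[1]]
root = newton_fd(F, [2.0, 0.5])
```

For larger systems one would avoid recomputing a full difference Jacobian at every step, which is exactly the motivation for the quasi-Newton methods mentioned above.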