Let us suppose that (n+1) pairs of values (x_i, y_i), i = 0(1)n, are given. These data may be the outcome of an experiment in which, for different values of a variable x, the values of y are observed; thus no relation between the variables x and y is known. Alternatively, the values of a known function y = f(x) may be given for specific values of x. The abscissas x_i, i = 0(1)n, are called tabular points or nodal/pivotal points. Without loss of generality, we can assume x_0 < x_1 < x_2 < … < x_n, i.e., the values of y are prescribed at (n+1) points in the interval [x_0, x_n]. Interpolation means finding the value of y for some intermediate value of x in (x_0, x_n). If x lies outside the interval (x_0, x_n), the process is called 'extrapolation'.
The methods for interpolation may be put into two categories according to whether the tabular points are equidistant (equally spaced, equi-spaced, or evenly/uniformly spaced) or not necessarily at equal intervals; in other words, whether the interval x_i − x_(i−1), i = 1(1)n, is the same throughout or not. In either case, it will be assumed that the behaviour of y w.r.t. x is smooth, i.e., there are no sudden variations in the value of y.
The basis of an interpolation method is to approximate the data by some function y = F(x), say, which may satisfy all the data points or only some of them. This function is called the interpolating function, and the points on which F(x) is based are called interpolating points. Although there may be several functions interpolating the same data, we shall confine ourselves to polynomial approximation in one form or another. Before describing the various interpolation methods, let us give some essential preliminaries which will be required in the development of the methods.
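To make the idea concrete, the following minimal sketch (not taken from the text; the function name and sample data are illustrative assumptions) evaluates the polynomial passing through all the tabular points, written here in the Lagrange form, at an intermediate value of x.

```python
# A minimal sketch of polynomial interpolation through (n+1) tabular points,
# using the Lagrange form; names and data are illustrative assumptions.

def lagrange_interpolate(x_nodes, y_nodes, x):
    """Evaluate at x the polynomial passing through all (x_i, y_i)."""
    total = 0.0
    n = len(x_nodes)
    for i in range(n):
        # Build the i-th Lagrange basis polynomial, which equals 1 at
        # x_nodes[i] and 0 at every other tabular point.
        term = y_nodes[i]
        for j in range(n):
            if j != i:
                term *= (x - x_nodes[j]) / (x_nodes[i] - x_nodes[j])
        total += term
    return total

# Interpolation: x = 2.5 lies inside [x_0, x_n]; using x outside that
# interval would be extrapolation.
x_nodes = [1.0, 2.0, 3.0, 4.0]
y_nodes = [1.0, 8.0, 27.0, 64.0]     # values of y = x**3 at the tabular points
print(lagrange_interpolate(x_nodes, y_nodes, 2.5))   # 15.625 = 2.5**3
```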
We discussed boundary value problems (bvp's) in one dimension in Chapter 7 and in two dimensions (elliptic equations) in Chapter 11. In the case of an ordinary differential equation, the domain of the problem was a finite interval along, say, the x-axis. Some nodal points or nodes were selected over the interval by subdividing it into uniform (evenly spaced) subintervals/subdomains. Similarly, an elliptic equation was defined over a domain in the x-y plane, which was subdivided into rectangular/square meshes. In both cases the subdomains were of uniform size, except perhaps near the boundary. The solution was obtained by approximating the derivatives by finite differences (forward, backward or central). These methods are called Finite Difference Methods (FDMs).
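As a reminder of that strategy, here is a minimal sketch (an assumption for illustration, not the book's program) of the finite difference solution of a simple two-point bvp on a uniform grid, with the second derivative replaced by the central difference quotient.

```python
# Finite difference sketch for -u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0;
# u'' is approximated by (u[i-1] - 2*u[i] + u[i+1]) / h**2 at each interior node.

import numpy as np

def fdm_bvp(f, n):
    """Solve -u'' = f on [0, 1] with u(0) = u(1) = 0 using n interior nodes."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)            # interior nodal points
    # Tridiagonal coefficient matrix arising from the central difference formula.
    A = (np.diag(2.0 * np.ones(n)) -
         np.diag(np.ones(n - 1), 1) -
         np.diag(np.ones(n - 1), -1)) / h**2
    u = np.linalg.solve(A, f(x))
    return x, u

# Example: f(x) = pi**2 * sin(pi*x) has exact solution u(x) = sin(pi*x).
x, u = fdm_bvp(lambda x: np.pi**2 * np.sin(np.pi * x), 50)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # small discretization error
```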
In this chapter, the Finite Element Method (FEM) will be discussed mainly in respect of boundary value problems in one and two dimensions. It can also be used for solving transient (time-dependent) problems; in that case, as one option, the time derivative is approximated by finite difference techniques, and these are also discussed briefly. The FEM approach for solving a bvp differs from the FDM mainly in two respects. First, in the FEM the domain may be subdivided in an arbitrary manner. For example, in two dimensions it is usually subdivided into triangles which are not necessarily of equal size; indeed they are often of unequal sizes. This gives us the freedom to choose larger triangles in the parts of the domain where the variation in the solution is expected to be small, and smaller triangles where the solution may vary sharply or be particularly sensitive. These triangles (or polygons in general) are called 'elements' and the vertices of the triangles (or polygons) are called 'nodes'. We are required to find the solution at these nodes; a one-dimensional sketch of this element-by-element viewpoint is given below. The second difference is the mathematical strategy adopted in the FEM, which is entirely different from that of the FDM.
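The following one-dimensional sketch illustrates the element/node viewpoint under stated assumptions (linear "hat" shape functions, a non-uniform mesh, a simple quadrature rule, and hypothetical names); it is not the book's program, only an indication of how unequal elements might be assembled.

```python
# 1-D illustration of FEM assembly for -u'' = f on [0, 1], u(0) = u(1) = 0,
# with linear shape functions on elements of unequal length.

import numpy as np

def fem_1d(f, nodes):
    """Linear finite elements on the (possibly non-uniform) mesh 'nodes'."""
    n = len(nodes)
    K = np.zeros((n, n))          # global stiffness matrix
    b = np.zeros(n)               # global load vector
    for e in range(n - 1):        # loop over elements [x_e, x_{e+1}]
        h = nodes[e + 1] - nodes[e]
        # element stiffness matrix for linear shape functions
        ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
        K[e:e + 2, e:e + 2] += ke
        # element load vector by the trapezoidal rule
        b[e] += f(nodes[e]) * h / 2.0
        b[e + 1] += f(nodes[e + 1]) * h / 2.0
    # impose u = 0 at the two boundary nodes, then solve the reduced system
    u = np.zeros(n)
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], b[1:-1])
    return u

# Smaller elements are placed near x = 0.5, where (for illustration) the
# solution is assumed to vary more rapidly.
nodes = np.array([0.0, 0.2, 0.4, 0.45, 0.5, 0.55, 0.6, 0.8, 1.0])
print(fem_1d(lambda x: np.pi**2 * np.sin(np.pi * x), nodes))
```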
We have studied in the preceding chapters various methods for fitting a polynomial to a given set of points, viz., finite difference methods, Lagrange's method, the divided difference method and cubic splines. In all these methods (except Bézier/B-splines) the polynomial passes through the specified points. We say that the polynomial interpolates the given function (known or unknown) at the tabular points. In the method of Least Squares we fit a polynomial or some other function which may or may not pass through any of the data points.
Let us suppose we are given n pairs of values (x_i, y_i), where for each value x_i of the variable x the value of the variable y is given as y_i, i = 1(1)n. In the Least Squares method, sequencing/ordering of the x_i's is not essential. Further, a pair of values may repeat, or there may be two or more values of y corresponding to the same value of x. If the data are derived from a single-valued function, then of course each pair will be unique. However, if an experiment is conducted several times, then values may repeat. In the context of a statistical survey, for the same value of the variate x there may be different outcomes of y; e.g., in a height versus weight study, there may be many individuals with different weights having the same height, and vice versa.
Least Squares Method
Without loss of generality we shall assume in the following discussion that the values of x are not repeated and that they are exact, while only the corresponding values of y are subject to error. In the Least Squares method, we approximate the given function (known or unknown) by a polynomial (or some other standard function). If n data points (x_i, y_i), i = 1(1)n, are given, then by the least squares method we can fit a polynomial of degree m, given by y = a_0 + a_1x + a_2x^2 + … + a_mx^m, m ≤ n − 1. When m = n − 1, the polynomial coincides with Lagrange's polynomial.
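A minimal sketch of this fit (illustrative names and data, not from the text) sets up the matrix of powers of x and solves the normal equations for the coefficients a_0, …, a_m.

```python
# Least squares fit of y = a_0 + a_1*x + ... + a_m*x^m via the normal
# equations A^T A a = A^T y, where A holds the powers of the x-values.

import numpy as np

def least_squares_poly(x, y, m):
    """Return coefficients a_0, ..., a_m of the degree-m least squares polynomial."""
    A = np.vander(x, m + 1, increasing=True)   # columns 1, x, x^2, ..., x^m
    a = np.linalg.solve(A.T @ A, A.T @ y)      # normal equations
    return a

# Repeated x-values are allowed in general; here two different y's were
# observed at x = 2.0 (illustrative data).
x = np.array([0.0, 1.0, 2.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 2.8, 4.1, 4.9])
print(least_squares_poly(x, y, 1))             # straight-line (m = 1) fit
```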