
Chapter 9: Method of Least Squares and Chebyshev Approximation

pp. 354-383

Authors

Indian Institute of Technology, Roorkee

Summary

Introduction

We have studied in the preceding chapters various methods for fitting a polynomial to a given set of points, viz., Finite Difference methods, Lagrange's method, the Divided Difference method and cubic splines. In all these methods (except Bezier/B-splines) the polynomial passes through the specified points; we say that the polynomial interpolates the given function (known or unknown) at the tabular points. In the method of Least Squares, by contrast, we fit a polynomial or some other function which may or may not pass through any of the data points.

Let us suppose we are given n pairs of values (xi, yi), where for some value xi of the variable x, the value of the variable y is given as yi, i = 1(1)n. In the Least Squares method, sequencing/ordering of the xi's is not essential. Further, a pair of values may repeat, or there may be two or more values of y corresponding to the same value of x. If the data is derived from a single-valued function, then of course each pair will be unique. However, if an experiment is conducted several times, the values may repeat. In the context of a statistical survey, for the same value of the variate x, there may be different outcomes of y; e.g., in a height versus weight study, there may be many individuals with different weights having the same height, and vice versa.

Least Squares Method

Without loss of generality we shall assume in the following discussion that the values of x are not repeated and that they are exact, while only the corresponding values of y are subject to error. In the Least Squares method, we approximate the given function (known or unknown) by a polynomial (or some other standard function). If n data points (xi, yi), i = 1(1)n are given, then by the least squares method we can fit a polynomial of degree m, given by y = a0 + a1x + a2x^2 + … + am x^m, where m ≤ n − 1. When m = n − 1, the polynomial coincides with Lagrange's polynomial.
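The fit described above can be sketched numerically. The following is a minimal illustration (with made-up data values and a hypothetical helper name, not code from the book): the coefficients a0, …, am are found by solving the normal equations (V^T V)a = V^T y, where V is the Vandermonde matrix of the xi's.

```python
import numpy as np

# Illustrative data: n = 5 points (values are assumptions for the example).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 9.2, 19.1, 33.0])

def least_squares_fit(x, y, m):
    """Fit y ≈ a0 + a1*x + ... + am*x^m by solving the normal equations."""
    # Vandermonde matrix: column j holds x**j, so V @ a evaluates the polynomial.
    V = np.vander(x, m + 1, increasing=True)
    # Normal equations: (V^T V) a = V^T y.
    return np.linalg.solve(V.T @ V, V.T @ y)

coeffs = least_squares_fit(x, y, 2)            # degree m = 2 fit
interp = least_squares_fit(x, y, len(x) - 1)   # m = n - 1: interpolates the data
```

As the summary notes, when m = n − 1 the fitted polynomial passes through every data point and coincides with Lagrange's interpolating polynomial, which the `interp` fit above illustrates.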
