Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Notation
- Abbreviations
- Algorithms
- PART I MATRIX ALGEBRA
- PART II MATRIX ANALYSIS
- 4 Gradient Analysis and Optimization
- 5 Singular Value Analysis
- 6 Solving Matrix Equations
- 7 Eigenanalysis
- 8 Subspace Analysis and Tracking
- 9 Projection Analysis
- PART III HIGHER-ORDER MATRIX ANALYSIS
- References
- Index
5 - Singular Value Analysis
from PART II - MATRIX ANALYSIS
Published online by Cambridge University Press: 25 October 2017
Summary
Beltrami (1835–1900) and Jordan (1838–1922) are recognized as the founders of the singular value decomposition (SVD): Beltrami published the first paper on the SVD in 1873 [34], and one year later Jordan published his own independent derivation [236]. Today the SVD and its generalizations are among the most useful and efficient tools in numerical linear algebra, with wide applications in statistical analysis, physics, and applied sciences such as signal and image processing, system theory, control, communications, and computer vision.
This chapter first presents the concepts of the numerical stability of computations and the condition number of a matrix, in order to motivate the SVD; it then discusses the SVD and the generalized SVD, together with their numerical computation and applications.
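As a quick illustration of the decomposition discussed in this chapter, the following sketch computes the SVD of a small matrix with NumPy and reconstructs the matrix from its factors (the matrix is chosen purely for illustration):

```python
import numpy as np

# Illustrative 3 x 2 matrix (not from the text).
A = np.array([[3.0, 0.0],
              [0.0, -2.0],
              [0.0, 0.0]])

# Full SVD: A = U @ Sigma @ Vt, singular values returned in descending order.
U, s, Vt = np.linalg.svd(A, full_matrices=True)

# Embed the singular values into a 3 x 2 rectangular Sigma and reconstruct A.
Sigma = np.zeros_like(A)
Sigma[:2, :2] = np.diag(s)
A_rec = U @ Sigma @ Vt
```

Here the singular values are 3 and 2: they are the square roots of the eigenvalues of AᵀA = diag(9, 4), and the sign of the entry −2 is absorbed into the factors U and Vᵀ rather than into Σ.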
Numerical Stability and Condition Number
In many applications such as information science and engineering, it is often necessary to consider an important problem: in the actual observation data there exist some uncertainties or errors, and, furthermore, numerical calculation of the data is always accompanied by error. What is the impact of these errors? Is a particular algorithm numerically stable for data processing? In order to answer these questions, the following two concepts are extremely important:
(1) the numerical stability of the various kinds of algorithms;
(2) the condition number, or perturbation analysis, of the problem of interest.
Let f be some application problem, let d* ∈ D be the data without noise or disturbance, where D denotes a data group, and let f(d*) ∈ F represent the solution of f, where F is a solution set. Given observed data d ∈ D, we want to evaluate f(d). Owing to background noise and/or observation error, f(d) is usually different from f(d*). If f(d) is "close" to f(d*) whenever d is close to d*, then the problem f is said to be "well-conditioned". On the contrary, if f(d) is obviously different from f(d*) even when d is very close to d*, then the problem f is said to be "ill-conditioned". Without further information about the problem f, terms such as "close" and "obviously different" remain qualitative; the condition number makes this sensitivity precise.
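The effect of conditioning can be seen concretely by solving a linear system whose matrix is nearly singular. The sketch below (matrices and perturbation chosen purely for illustration) compares condition numbers and shows how a tiny change in the data d produces a large change in the solution f(d) for the ill-conditioned case:

```python
import numpy as np

# A well-conditioned and a nearly singular (ill-conditioned) 2 x 2 matrix.
A_good = np.array([[2.0, 0.0],
                   [0.0, 1.0]])
A_bad  = np.array([[1.0, 1.0],
                   [1.0, 1.0001]])

# Condition number kappa(A) = sigma_max / sigma_min (2-norm).
kappa_good = np.linalg.cond(A_good)   # = 2
kappa_bad  = np.linalg.cond(A_bad)    # ~ 4 * 10^4

# Perturb the right-hand side by a tiny amount and solve both systems.
b  = np.array([2.0, 2.0001])          # exact solution of A_bad x = b is (1, 1)
db = np.array([0.0, 0.0001])          # small data perturbation
x  = np.linalg.solve(A_bad, b)
x2 = np.linalg.solve(A_bad, b + db)   # solution jumps to (0, 2)
```

A relative perturbation of about 5 × 10⁻⁵ in b moves the solution from (1, 1) to (0, 2), a relative change of order 1, which is exactly the amplification by the condition number κ(A_bad) ≈ 4 × 10⁴ that marks the problem as ill-conditioned.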
- Matrix Analysis and Applications, pp. 271–314. Publisher: Cambridge University Press. Print publication year: 2017.