Linear algebra is part of the foundation of mathematics and is used widely in engineering. In this chapter, we specialize the linear analysis of Chapter 6 to finite-dimensional vector spaces in which the linear operator is a constant matrix. Many of the topics will be familiar, and some will likely be new. Considerable effort is spent defining terms and finding the best solution to systems of linear algebraic equations. Because nearly all computational methods for solving equations that model physical systems rely on linear algebra, this expansive treatment is justified. Throughout the chapter, geometric interpretations are applied where appropriate. Some topics introduced in previous chapters are explored more fully, including matrices that effect rotation and reflection, projection matrices, eigenvalues and eigenvectors, and quadratic forms. New topics include a variety of matrix decompositions that are widely used in computational linear algebra. Of these, the most important is the so-called singular value decomposition (SVD). We also give a matrix interpretation of two methods in wide use in engineering: (1) the least squares method and (2) the discrete Fourier transform. We close with a general strategy, based on the SVD, for finding the best solution to systems of linear algebraic equations. In contrast to Chapter 6, we return in this chapter to Gibbs notation for vectors and matrices. Thus, matrices will be represented by uppercase bold-faced letters, such as A, and vectors by lowercase bold-faced letters, such as x.
One of the most important problems in linear algebra is the solution of the equation
A · x = b, (7.1)
where A is a known constant matrix, b is a known column vector, and x is an unknown column vector. We note the analogy to linear differential equations of the general form of Eq. (4.1), Ly = f(x). Here the matrix A plays the role of the differential operator L, the vector x plays the role of the function y, and the vector b plays the role of the forcing function f(x).
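As a concrete illustration of Eq. (7.1), the following sketch solves a small system A · x = b numerically. The particular 2 × 2 matrix and right-side vector here are hypothetical examples chosen for this sketch, not taken from the text; NumPy's `linalg.solve` is used as a stand-in for the solution strategies the chapter develops.

```python
import numpy as np

# Hypothetical example system A . x = b:
#   2 x1 +   x2 =  5
#     x1 + 3 x2 = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # known constant matrix A
b = np.array([5.0, 10.0])    # known column vector b

# Solve for the unknown column vector x (via LU factorization
# internally; later sections develop such decompositions).
x = np.linalg.solve(A, b)
print(x)  # -> [1. 3.]

# Verify the solution by substituting back into A . x = b.
print(np.allclose(A @ x, b))  # -> True
```

When A is square and nonsingular, as here, the solution is unique; much of the remainder of the chapter concerns what "best solution" means when A is singular or not square.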