Chapter 8: Many problems in applied mathematics involve finding a minimum-norm solution or a best approximation, subject to certain constraints. Orthogonal subspaces arise frequently in solving such problems. Among the topics we discuss in this chapter are the minimum-norm solution to a consistent linear system, a least-squares solution to an inconsistent linear system, and orthogonal projections.
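As a concrete illustration (a minimal NumPy sketch, not drawn from the book, with made-up matrices), the least-squares solution of an inconsistent system, the orthogonal projection onto a column space, and the minimum-norm solution of an underdetermined consistent system can each be computed in a few lines:

```python
import numpy as np

# Hypothetical 3x2 system; A and b are illustrative data only.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 0.0])   # inconsistent: no exact solution exists

# Least-squares solution: x minimizes ||Ax - b||_2.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# Orthogonal projection of b onto col(A): P = A (A^T A)^{-1} A^T
# (A has full column rank here, so A^T A is invertible).
P = A @ np.linalg.inv(A.T @ A) @ A.T
print(np.allclose(P @ b, A @ x_ls))   # True: the projection equals A x_ls

# Minimum-norm solution of a consistent underdetermined system,
# obtained via the pseudoinverse.
C = np.array([[1.0, 1.0, 1.0]])       # one equation, three unknowns
d = np.array([3.0])
x_min = np.linalg.pinv(C) @ d          # smallest 2-norm x with Cx = d
print(x_min)                           # [1. 1. 1.]
```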
Chapter 4: In this chapter, we collect some important facts about matrices: the rank-nullity theorem; the intersection and sum of column spaces; rank inequalities for sums and products of matrices; the LU factorization and solutions of linear systems; row equivalence, the pivot column decomposition, and the reduced row echelon form. In a final capstone section, we use linear dependence, the trace, block matrices, induction, and similarity to characterize matrices that are commutators. Throughout the chapter, we emphasize block-matrix methods.
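A minimal SciPy sketch (illustrative data only, not an example from the chapter) of the LU factorization and its use in solving a linear system:

```python
import numpy as np
from scipy.linalg import lu, lu_factor, lu_solve

# Made-up 3x3 system for illustration.
A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
b = np.array([1.0, 2.0, 3.0])

# A = P L U with a permutation P, unit lower triangular L,
# upper triangular U (SciPy's convention).
P, L, U = lu(A)
print(np.allclose(P @ L @ U, A))   # True

# Solving Ax = b via the factorization:
# forward substitution with L, then back substitution with U.
lu_piv = lu_factor(A)
x = lu_solve(lu_piv, b)
print(np.allclose(A @ x, b))       # True

# Rank-nullity: rank(A) + dim(null A) equals the number of columns.
print(np.linalg.matrix_rank(A))    # 3, so the null space is trivial
```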
Chapter 7: Unitary matrices play important roles in theory and computation. The adjoint of a unitary matrix is its inverse, so unitary matrices are easy to invert. They preserve lengths and angles, and have remarkable stability properties in many numerical algorithms. In this chapter, we explore the properties of unitary matrices and present several special cases. We derive an explicit formula for a unitary matrix whose first column is given. We give a constructive proof of the QR factorization and show that every square complex matrix is unitarily similar to an upper Hessenberg matrix.
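A minimal NumPy/SciPy sketch of these facts on arbitrary random complex data (not an example from the chapter):

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# QR factorization: Q unitary, R upper triangular.
Q, R = np.linalg.qr(A)
print(np.allclose(Q.conj().T @ Q, np.eye(4)))   # Q* Q = I, so Q is unitary
print(np.allclose(Q @ R, A))

# Unitary matrices preserve lengths: ||Qx|| = ||x||.
x = rng.standard_normal(4)
print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))

# Unitary similarity to an upper Hessenberg matrix: A = U H U*.
H, U = hessenberg(A, calc_q=True)
print(np.allclose(U @ H @ U.conj().T, A))
```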
The main focus of this chapter is the estimation of the distribution function and probability (density) function of duration and loss variables. The methods used depend on whether the data are for individual or grouped observations, and whether the observations are complete or incomplete.
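For complete individual data, the simplest nonparametric estimate is the empirical distribution function; here is a minimal NumPy sketch with made-up loss data (incomplete observations, i.e. censored or truncated data, require estimators such as Kaplan-Meier and are not shown):

```python
import numpy as np

# Made-up complete individual loss data; values are illustrative only.
losses = np.array([3.0, 5.0, 5.0, 8.0, 12.0, 12.0, 12.0, 20.0])

def ecdf(x, data):
    """Empirical distribution function: fraction of observations <= x."""
    return np.mean(data[:, None] <= np.atleast_1d(x), axis=0)

print(ecdf([5.0, 12.0], losses))   # [0.375 0.875]
```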
Chapter 16: This chapter is about the singular value decomposition, a matrix factorization that shares some features with the spectral decomposition of a normal matrix. However, every real or complex matrix, normal or not, square or not, has a singular value decomposition. It was discovered in the nineteenth century and was applied in the 1930s to problems in psychometrics. Widespread use of the singular value decomposition in applications began only in the 1960s, when accurate and efficient algorithms were developed for its numerical computation. Today, singular value subroutines are embedded in thousands of computer algorithms to solve problems in data analysis, image compression, chemical physics, least-squares problems, low-dimensional approximation of high-dimensional data, genomics, robotics, and many other fields.
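A minimal NumPy sketch (random illustrative data, not an example from the chapter) of the factorization and its use for low-rank approximation:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))

# Every matrix has an SVD: A = U diag(s) V*, with singular values s >= 0.
U, s, Vh = np.linalg.svd(A, full_matrices=False)
print(np.allclose(U @ np.diag(s) @ Vh, A))

# Best rank-k approximation (Eckart-Young): keep the k largest
# singular values; the spectral-norm error is the (k+1)st singular value.
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vh[:k, :]
print(np.isclose(np.linalg.norm(A - A_k, 2), s[k]))   # True
```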
Chapter 20: This chapter is about some remarkable properties of positive matrices, by which we mean square matrices with real positive entries. Positive matrices are found in economic models, genetics, biology, team rankings, network analysis, Google's PageRank, and city planning. The spectral radius of any matrix is the absolute value of an eigenvalue, but for a positive matrix the spectral radius itself is an eigenvalue, and it is positive and dominant. It is associated with a positive eigenvector, whose ordered entries have been used for ranking sports teams, priority setting, and resource allocation in multicriteria decision-making. Since the spectral radius is a dominant eigenvalue, an associated positive eigenvector can be computed by the power method. Some properties of positive matrices are shared by nonnegative matrices that satisfy certain auxiliary conditions. One condition that we investigate in this chapter is that some positive power has no zero entries.
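A minimal NumPy sketch of the power method on a made-up positive matrix (illustrative values only):

```python
import numpy as np

# Made-up positive matrix for illustration.
A = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

# Power method: iterate x <- Ax / ||Ax||; since the spectral radius is a
# dominant eigenvalue, x converges to the positive Perron eigenvector.
x = np.ones(3)
for _ in range(100):
    x = A @ x
    x /= np.linalg.norm(x)

rho = x @ A @ x / (x @ x)   # Rayleigh quotient estimates the spectral radius
print(np.all(x > 0))        # True: the Perron eigenvector is positive
print(np.isclose(rho, max(abs(np.linalg.eigvals(A)))))   # True
```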
A useful way to solve a complex problem – whether in physics, mathematics, or life in general – is to break it down into smaller pieces that can be handled more easily. This is especially true of the Ising model. In this chapter, we investigate various partial-summation techniques in which a subset of Ising spins is summed over to produce new, effective couplings among the remaining spins. These methods are useful in their own right and are even more important when used as a part of position-space renormalization-group techniques.
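A minimal sketch of the simplest such decimation: summing out every other spin of a one-dimensional Ising chain. The closed-form effective coupling K' = (1/2) ln cosh(2K) is the standard result; the Python below (assumed here purely for illustration) verifies it numerically:

```python
import numpy as np

# Summing an intermediate spin s2 out of exp(K s1 s2 + K s2 s3) yields
# C * exp(K' s1 s3) with K' = 0.5 * log(cosh(2K)), equivalently
# tanh(K') = tanh(K)**2.
K = 0.7
K_eff = 0.5 * np.log(np.cosh(2 * K))
C = 2.0 * np.sqrt(np.cosh(2 * K))

# Check all four orientations of the remaining spins s1, s3.
for s1 in (+1, -1):
    for s3 in (+1, -1):
        lhs = sum(np.exp(K * s1 * s2 + K * s2 * s3) for s2 in (+1, -1))
        rhs = C * np.exp(K_eff * s1 * s3)
        print(np.isclose(lhs, rhs))   # True in all four cases
```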
Chapter 9: In the next four chapters, we develop tools to show that each square complex matrix is similar to an essentially unique direct sum of special bidiagonal matrices (the Jordan canonical form). The first step is to show that each square complex matrix has a one-dimensional invariant subspace and explore some consequences of that fact.
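As a small numerical aside (not from the book, matrices made up for illustration): the span of an eigenvector is a one-dimensional invariant subspace, and SymPy can display the Jordan form that the coming chapters construct:

```python
import numpy as np
from sympy import Matrix

# If Av = lambda v, then A maps span{v} into itself.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, V = np.linalg.eig(A)
v = V[:, 0]
print(np.allclose(A @ v, w[0] * v))   # True: span{v} is invariant

# Jordan canonical form of a defective matrix: A = P J P^{-1}.
P, J = Matrix([[5, 4], [-1, 1]]).jordan_form()
print(J)   # Matrix([[3, 1], [0, 3]]), a single 2x2 Jordan block
```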
Chapter 15: Many interesting mathematical ideas evolve from analogies. If we think of matrices as analogs of complex numbers, then the representation z = a + bi suggests the Cartesian decomposition A = H + iK of a square complex matrix, in which Hermitian matrices play the role of real numbers. Hermitian matrices with nonnegative eigenvalues are natural analogs of nonnegative real numbers. They arise in statistics (correlation matrices and the normal equations for least-squares problems), Lagrangian mechanics (the kinetic energy functional), and quantum mechanics (density matrices). They are the subject of this chapter.
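A minimal NumPy sketch of the Cartesian decomposition on a random complex matrix (illustrative data, not from the chapter):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# Cartesian decomposition A = H + iK with H, K Hermitian,
# analogous to z = a + bi for a complex number.
H = (A + A.conj().T) / 2
K = (A - A.conj().T) / (2j)
print(np.allclose(H, H.conj().T), np.allclose(K, K.conj().T))  # both Hermitian
print(np.allclose(H + 1j * K, A))                              # True

# Analog of a nonnegative real number: B* B is Hermitian with
# nonnegative eigenvalues (positive semidefinite).
B = rng.standard_normal((3, 3))
print(np.all(np.linalg.eigvalsh(B.T @ B) >= -1e-12))           # True
```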