We continue our study of exponent semigroups of rational matrices. Our main result is that the matricial dimension of a numerical semigroup is at most its multiplicity (the least generator), greatly improving upon the previous upper bound (the conductor). For many numerical semigroups, including all symmetric numerical semigroups, our upper bound is tight. Our construction uses combinatorially structured matrices and is parametrised by Kunz coordinates, which are central to enumerative problems in the study of numerical semigroups.
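As a concrete illustration of Kunz coordinates (not taken from the paper), the following Python sketch computes them from a generating set using the standard definition: for a numerical semigroup $S$ with multiplicity $m$, the Kunz coordinates are $x_i = (w_i - i)/m$ for $1 \leq i \leq m-1$, where $w_i$ is the least element of $S$ congruent to $i$ modulo $m$ (the Apéry set of $m$). The function name `kunz_coordinates` is our own illustrative helper.

```python
import heapq
from math import gcd
from functools import reduce

def kunz_coordinates(gens):
    """Illustrative helper (not from the paper): Kunz coordinates
    (x_1, ..., x_{m-1}) of the numerical semigroup S = <gens>, where
    m = min(gens) is the multiplicity and x_i = (w_i - i)/m, with w_i
    the least element of S congruent to i mod m."""
    assert reduce(gcd, gens) == 1, "generators must have gcd 1"
    m = min(gens)
    # Dijkstra on residues mod m: w[r] = least element of S that is
    # congruent to r mod m (the classic coin-problem shortest path).
    w = [float("inf")] * m
    w[0] = 0
    heap = [(0, 0)]
    while heap:
        val, r = heapq.heappop(heap)
        if val > w[r]:
            continue
        for g in gens:
            nval, nr = val + g, (r + g) % m
            if nval < w[nr]:
                w[nr] = nval
                heapq.heappush(heap, (nval, nr))
    return [(w[i] - i) // m for i in range(1, m)]

# Example: S = <3, 5> has multiplicity 3, Apery set {0, 10, 5},
# and Kunz coordinates (3, 1).
print(kunz_coordinates([3, 5]))  # [3, 1]
```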
We study symmetric and antisymmetric tensor products of Hilbert-space operators, focusing on norms and spectra for some well-known classes favored by function-theoretic operator theorists. We pose many open questions that should interest the field.
We improve and expand in two directions the theory of norms on complex matrices induced by random vectors. We first provide a simple proof of the classification of weakly unitarily invariant norms on the Hermitian matrices. We use this to extend the main theorem in Chávez, Garcia, and Hurley (2023, Canadian Mathematical Bulletin 66, 808–826) from exponent $d\geq 2$ to $d \geq 1$. Our proofs are much simpler than the originals: they do not require Lewis’ framework for group invariance in convex matrix analysis. This clarification puts the entire theory on simpler foundations while extending its range of applicability.
Using a modern matrix-based approach, this rigorous second course in linear algebra helps upper-level undergraduates in mathematics, data science, and the physical sciences transition from basic theory to advanced topics and applications. Its clarity of exposition, together with many illustrations, 900+ exercises, and 350 conceptual and numerical examples, aids the student's understanding. Concise chapters promote a focused progression through essential ideas. Topics are derived and discussed in detail, including the singular value decomposition, Jordan canonical form, spectral theorem, QR factorization, normal matrices, Hermitian matrices, and positive definite matrices. Each chapter ends with a bullet list summarizing important concepts. New to this edition are chapters on matrix norms and positive matrices, many new sections on topics including interpolation and LU factorization, 300+ additional problems, many new examples, and color-enhanced figures. Prerequisites include a first course in linear algebra and a basic calculus sequence. Instructor's resources are available.
Chapter 2: Linearly independent lists of vectors that span a vector space are of special importance. They provide a bridge between the abstract world of vector spaces and the concrete world of matrices. They permit us to define the dimension of a vector space and motivate the concept of matrix similarity.
Chapter 1: In this chapter, we provide formal definitions of real and complex vector spaces, and many examples. Among the important concepts introduced are linear combinations, span, linear independence, and linear dependence.
Chapter 6: In this chapter, we explore the role of orthonormal (orthogonal and normalized) vectors in an inner-product space. Matrix representations of linear transformations with respect to orthonormal bases are of particular importance. They are associated with the notion of an adjoint transformation. We give a brief introduction to Fourier series that highlights the orthogonality properties of sine and cosine functions. In the final section of the chapter, we discuss orthogonal polynomials and the remarkable numerical integration rules associated with them.
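As a brief illustration of the numerical integration rules mentioned above (a sketch of ours, not from the book): an $n$-point Gauss–Legendre rule integrates polynomials of degree up to $2n-1$ exactly on $[-1,1]$, and NumPy provides the nodes and weights directly.

```python
import numpy as np

# 3-point Gauss-Legendre rule on [-1, 1]: exact for degree <= 5.
nodes, weights = np.polynomial.legendre.leggauss(3)

# Integrate x^4 on [-1, 1]; the exact value is 2/5.
approx = np.sum(weights * nodes**4)
print(approx, 2 / 5)  # 0.4 (up to floating-point rounding)
```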
Chapter 13: In this chapter, we discuss several problems in which a similarity transformation to Jordan canonical form facilitates a solution. For example, we find that $A$ is similar to $A^T$; $\lim_{p\to\infty} A^p = 0$ if and only if every eigenvalue of $A$ has modulus less than 1; and the invertible Jordan blocks of $AB$ and $BA$ are the same. We begin by considering coupled systems of ordinary differential equations.
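A small numerical check of the power criterion above (an illustrative sketch, not from the book): powers of $A$ tend to the zero matrix exactly when the spectral radius of $A$ is below 1.

```python
import numpy as np

A = np.array([[0.5, 1.0],
              [0.0, 0.4]])  # eigenvalues 0.5 and 0.4

print(max(abs(np.linalg.eigvals(A))))  # spectral radius = 0.5 < 1
print(np.linalg.matrix_power(A, 50))   # all entries near zero
```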
Chapter 3: A matrix is not just an array of scalars. It can be thought of as an array of submatrices in many different ways. We begin by regarding a matrix as an array of columns and we explore some implications of this viewpoint for matrix products and Cramer's rule. We turn to arrays of rows, which lead to additional insights for matrix products. We discuss determinants of block matrices, block versions of elementary matrices, and Cauchy's formula for the determinant of a bordered matrix. Finally, we introduce the Kronecker product, which provides a way to construct block matrices with a special structure.
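The Kronecker product's block structure is easy to see numerically (our sketch, not the book's): `np.kron(A, B)` replaces each entry $a_{ij}$ of $A$ with the block $a_{ij}B$.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.eye(2)

print(np.kron(A, B))
# [[1. 0. 2. 0.]
#  [0. 1. 0. 2.]
#  [3. 0. 4. 0.]
#  [0. 3. 0. 4.]]
```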
Chapter 12: In the preceding chapter, we found that each square complex matrix A is similar to a direct sum of upper triangular unispectral matrices. We now show that A is similar to a direct sum of Jordan blocks (unispectral upper bidiagonal matrices with 1s in the superdiagonal) that is unique up to permutation of its direct summands.
Chapter 19: In this chapter, we introduce new examples of norms, with special attention to submultiplicative norms on matrices. These norms are well-adapted to applications involving power series of matrices and iterative numerical algorithms. We use them to prove a formula for the spectral radius that is the key to a fundamental theorem on positive matrices in the next chapter.
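The spectral radius formula alluded to is presumably Gelfand's formula, $\rho(A) = \lim_{k\to\infty} \|A^k\|^{1/k}$ for any submultiplicative matrix norm; the following sketch (ours, not the book's) checks it numerically with the spectral norm.

```python
import numpy as np

A = np.array([[0.0, 2.0],
              [0.5, 0.0]])

rho = max(abs(np.linalg.eigvals(A)))  # spectral radius = 1.0
for k in (1, 10, 100):
    # ||A^k||^(1/k) in the spectral norm approaches rho as k grows.
    approx = np.linalg.norm(np.linalg.matrix_power(A, k), 2) ** (1 / k)
    print(k, approx, rho)
```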
Chapter 17: In this chapter, we investigate applications and consequences of the singular value decomposition. For example, it provides a systematic way to approximate a matrix by a matrix of lower rank. It also permits us to define a generalized inverse for matrices that are not invertible (and need not even be square). The singular value decomposition has a pleasant special form for complex symmetric matrices. The largest singular value is especially important; it turns out to be a norm (the spectral norm) on matrices. We use the spectral norm to study how the solution of a linear system changes if the system is perturbed, and how the eigenvalues of a matrix can change if it is perturbed.
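The low-rank approximation mentioned above can be demonstrated in a few lines (an illustrative sketch, not the book's code): truncating the SVD to the $r$ largest singular values gives the best rank-$r$ approximation in the spectral norm, with error equal to the first discarded singular value (the Eckart–Young theorem).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = 2
A_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]  # best rank-2 approximation

# Spectral-norm error equals the (r+1)-st singular value.
print(np.linalg.norm(A - A_r, 2), s[r])
```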
Chapter 5: Many abstract concepts that make linear algebra a powerful mathematical tool have their roots in plane geometry, so we begin the study of inner product spaces with a review of basic properties of lengths and angles in the real two-dimensional plane. Guided by these geometrical properties, we formulate axioms for inner products and norms, which provide generalized notions of length (norm) and perpendicularity (orthogonality) in abstract vector spaces.
Chapter 8: Many problems in applied mathematics involve finding a minimum-norm solution or a best approximation, subject to certain constraints. Orthogonal subspaces arise frequently in solving such problems. Among the topics we discuss in this chapter are the minimum-norm solution to a consistent linear system, a least-squares solution to an inconsistent linear system, and orthogonal projections.
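Both problems mentioned above have one-line numerical solutions (our sketch, not the book's): `np.linalg.lstsq` returns a least-squares solution of an inconsistent system, and for a consistent underdetermined system the pseudoinverse yields the minimum-norm solution.

```python
import numpy as np

# Inconsistent overdetermined system: least-squares solution.
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 2.0, 2.0])
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# Consistent underdetermined system: minimum-norm solution.
C = np.array([[1.0, 1.0, 1.0]])
d = np.array([3.0])
x_min = np.linalg.pinv(C) @ d  # [1, 1, 1], the shortest solution

print(x_ls, x_min)
```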