This chapter reviews vectors and matrices, and basic properties like shape, orthogonality, determinant, eigenvalues, and trace. It also reviews operations like multiplication and transpose. These operations are used throughout the book and are pervasive in the literature. In short, arranging data into vectors and matrices allows one to apply powerful data analysis techniques over a wide spectrum of applications. Throughout, this chapter (and book) illustrates how the ideas are implemented in practice in Julia.
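As a minimal illustration of the kinds of operations this chapter reviews, the following Julia sketch exercises shape, transpose, multiplication, determinant, trace, eigenvalues, and orthogonality; the matrix values are arbitrary examples chosen here, not taken from the book.

```julia
using LinearAlgebra

A = [1.0 2.0; 3.0 4.0]   # a 2×2 matrix
x = [1.0, -1.0]          # a vector

size(A)        # shape: (2, 2)
A'             # transpose (adjoint) of A
A * x          # matrix-vector multiplication
det(A)         # determinant
tr(A)          # trace
eigvals(A)     # eigenvalues
Q = qr(A).Q    # Q has orthonormal columns, so Q'Q ≈ I (orthogonality)
```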
Many applications require solving a system of linear equations 𝑨𝒙 = 𝒚 for 𝒙 given 𝑨 and 𝒚. In practice, often there is no exact solution for 𝒙, so one seeks an approximate solution. This chapter focuses on least-squares formulations of this type of problem. It briefly reviews the 𝑨𝒙 = 𝒚 case and then motivates the more general 𝑨𝒙 ≈ 𝒚 cases. It then focuses on the over-determined case where 𝑨 is tall, emphasizing the insights offered by the SVD of 𝑨. It introduces the pseudoinverse, which is especially important for the under-determined case where 𝑨 is wide. It describes alternative approaches for the under-determined case such as Tikhonov regularization. It introduces frames, a generalization of unitary matrices. It uses the SVD analysis of this chapter to describe projection onto a subspace, completing the subspace-based classification ideas introduced in the previous chapter, and also introduces a least-squares approach to binary classifier design. It introduces recursive least-squares methods that are important for streaming data.
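To make the over-determined setting concrete, here is a short Julia sketch of a tall least-squares problem solved several equivalent ways (backslash, pseudoinverse, and SVD), plus Tikhonov regularization; the sizes, data, and regularization parameter are arbitrary placeholders, not values from the book.

```julia
using LinearAlgebra

# Over-determined (tall) least-squares example with random data
A = randn(10, 3)            # more equations than unknowns
y = randn(10)

x_bs = A \ y                # least-squares solution via backslash
x_pinv = pinv(A) * y        # same solution via the Moore–Penrose pseudoinverse

# The same solution expressed through the SVD of A
F = svd(A)
x_svd = F.V * (Diagonal(1 ./ F.S) * (F.U' * y))

# Tikhonov (ridge) regularization with an arbitrary parameter λ
λ = 0.1
x_tik = (A' * A + λ * I) \ (A' * y)
```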
There are many applications of the low-rank signal-plus-noise model 𝒀 = 𝑿 + 𝒁 where 𝑿 is a low-rank matrix and 𝒁 is noise, such as denoising and dimensionality reduction. We are interested in the properties of the latent matrix 𝑿, such as its singular value decomposition (SVD), but all we are given is the noisy matrix 𝒀. It is important to understand how the SVD components of 𝒀 relate to those of 𝑿 in the presence of a random noise matrix 𝒁. The field of random matrix theory (RMT) provides insights into those relationships, and this chapter summarizes some key results from RMT that help explain how the noise in 𝒁 perturbs the SVD components, by analyzing limits as matrix dimensions increase. The perturbations considered include roundoff error, additive Gaussian noise, outliers, and missing data. This is the only chapter that requires familiarity with the distributions of continuous random variables, and it provides many pointers to the literature on this modern topic, along with several demos that illustrate remarkable agreement between the asymptotic predictions and the empirical performance even for modest matrix sizes.
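A small Julia sketch of the signal-plus-noise model described above, comparing the leading singular values of the latent and observed matrices; the dimensions, rank, and noise level are arbitrary illustrative choices, and the comment about the bulk is the qualitative behavior RMT predicts, not a computed prediction.

```julia
using LinearAlgebra

# Low-rank signal-plus-noise model Y = X + Z
m, n, r = 200, 100, 3          # matrix dimensions and latent rank
X = randn(m, r) * randn(r, n)  # rank-r latent matrix
Z = 0.5 * randn(m, n)          # additive Gaussian noise
Y = X + Z                      # observed noisy matrix

svdvals(X)[1:5]   # r nonzero singular values, then zeros
svdvals(Y)[1:5]   # the leading "signal" singular values are perturbed by the noise;
                  # the remaining ones form a bulk whose shape RMT characterizes
```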
This chapter focuses on artificial neural network models and methods. Although these methods have been studied for over 50 years, they have skyrocketed in popularity in recent years due to accelerated training methods, wider availability of large training sets, and the use of deeper networks that have significantly improved performance for many classification and regression problems. Previous chapters emphasized subspace models. Subspaces are very useful for many applications, but they cannot model all types of signals. For example, images of a single person’s face (in a given pose) under different lighting conditions lie in a subspace. However, a linear combination of face images from two different people will not look like a plausible face. Thus, all possible face images do not lie in a subspace. A manifold model is more plausible for images of faces (and handwritten digits) and other applications, and such models require more complicated algorithms. Entire books are devoted to neural network methods. This chapter introduces the key methods, focusing on the role of matrices and nonlinear operations. It illustrates the benefits of nonlinearity, and describes the classic perceptron model for neurons and the multilayer perceptron. It describes the basics of neural network training and reviews convolutional neural network models; such models are used widely in applications.
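The following Julia sketch shows a tiny multilayer perceptron forward pass, i.e., a composition of affine maps (matrices) and an elementwise nonlinearity; the layer sizes and random weights are arbitrary placeholders, not a trained model or the book's example.

```julia
# Minimal multilayer perceptron forward pass (plain Julia, no packages)
relu(x) = max.(x, 0)                    # elementwise nonlinearity

W1, b1 = randn(16, 4), zeros(16)        # first layer: 4 inputs → 16 hidden units
W2, b2 = randn(1, 16), zeros(1)         # second layer: 16 hidden units → 1 output

mlp(x) = W2 * relu(W1 * x .+ b1) .+ b2  # affine map, nonlinearity, affine map

y = mlp(randn(4))                       # forward pass on a random input vector
```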
This chapter contains introductory material, including visual examples that motivate the rest of the book. It explains the book formatting, previews the notation, provides pointers for getting started with Julia, and briefly reviews fields and vector spaces.
This chapter discusses the important problem of matrix completion, where we know some, but not all, elements of a matrix and want to “complete” the matrix by filling in the missing entries. This problem is ill posed in general because one could assign arbitrary values to the missing entries, unless one assumes some model for the matrix elements. The most common model is that the matrix is low rank, an assumption that is reasonable in many applications. The chapter defines the problem and describes an alternating projection approach for noiseless data. It discusses algorithms for the practical case of missing and noisy data. It extends the methods to consider the effects of outliers with the robust principal component method, and applies this to video foreground/background separation. It describes nonnegative matrix factorization, including the case of missing data. A particularly famous application of low-rank matrix completion is the “Netflix problem”; this topic is also relevant to dynamic magnetic resonance image reconstruction, and numerous other applications with missing data (incomplete observations).
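As a sketch of the alternating projection idea mentioned above (not necessarily the book's exact algorithm), the Julia code below alternates between projecting onto rank-r matrices via a truncated SVD and re-enforcing the known entries; the sizes, rank, and sampling pattern are arbitrary illustrative choices.

```julia
using LinearAlgebra

m, n, r = 50, 40, 2
X = randn(m, r) * randn(r, n)          # ground-truth low-rank matrix
Ω = rand(m, n) .< 0.5                  # mask of observed entries
Y = Ω .* X                             # observed data (zeros at missing entries)

function lowrank_complete(Y, Ω, r; iters=200)
    Xhat = copy(Y)
    for _ in 1:iters
        F = svd(Xhat)                                      # project onto rank-r matrices
        Xhat = F.U[:, 1:r] * Diagonal(F.S[1:r]) * F.Vt[1:r, :]
        Xhat[Ω] .= Y[Ω]                                    # re-enforce the known entries
    end
    return Xhat
end

Xhat = lowrank_complete(Y, Ω, r)
norm(Xhat - X) / norm(X)               # relative error (small when enough entries are observed)
```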
Previous chapters considered the Euclidean norm, the spectral norm, and the Frobenius norm. These three norms are particularly important, but there are many other important norms for applications. This chapter discusses vector norms, matrix norms, and operator norms, and uses these norms to analyze the convergence of sequences. It revisits the Moore–Penrose pseudoinverse from a norm-minimizing perspective. It applies norms to the orthogonal Procrustes problem and its extensions.
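A brief Julia sketch of the norms discussed here, together with the SVD-based solution of the orthogonal Procrustes problem (find an orthogonal Q minimizing ‖A − BQ‖_F); the data are arbitrary random examples.

```julia
using LinearAlgebra

# Vector and matrix norms
x = [3.0, -4.0]
norm(x)          # Euclidean norm = 5.0
norm(x, 1)       # ℓ1 norm = 7.0
A = randn(5, 3)
opnorm(A)        # spectral (operator) norm: largest singular value
norm(A)          # Frobenius norm

# Orthogonal Procrustes: minimize ‖A - B*Q‖_F over orthogonal Q via the SVD of B'A
B = randn(5, 3)
F = svd(B' * A)
Q = F.U * F.Vt   # orthogonal minimizer
```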