This chapter focuses on artificial neural network models and methods. Although these methods have been studied for over 50 years, they have skyrocketed in popularity in recent years due to accelerated training methods, wider availability of large training sets, and the use of deeper networks that have significantly improved performance for many classification and regression problems. Previous chapters emphasized subspace models. Subspaces are very useful for many applications, but they cannot model all types of signals. For example, images of a single person’s face (in a given pose) under different lighting conditions lie in a subspace. However, a linear combination of face images from two different people will not look like a plausible face. Thus, all possible face images do not lie in a subspace. A manifold model is more plausible for images of faces (and handwritten digits) and other applications, and such models require more complicated algorithms. Entire books are devoted to neural network methods. This chapter introduces the key methods, focusing on the role of matrices and nonlinear operations. It illustrates the benefits of nonlinearity, and describes the classic perceptron model for neurons and the multilayer perceptron. It describes the basics of neural network training and reviews convolutional neural network models; such models are used widely in applications.
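To make the classic perceptron concrete, here is a minimal sketch of the perceptron training rule using NumPy. The function name, toy data, and epoch count are illustrative assumptions, not from the chapter itself; the update rule (add a misclassified example, scaled by its label, to the weight vector) is the standard one.

```python
import numpy as np

def perceptron_train(X, y, epochs=20):
    """Classic perceptron: learn w so that sign([x, 1] @ w) matches labels y in {-1, +1}."""
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])  # append a constant column for the bias term
    w = np.zeros(d + 1)
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (xi @ w) <= 0:  # misclassified (or on the boundary): update
                w += yi * xi
    return w

# illustrative linearly separable toy data
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
w = perceptron_train(X, y)
preds = np.sign(np.hstack([X, np.ones((len(X), 1))]) @ w)
```

For linearly separable data like this, the perceptron convergence theorem guarantees the loop stops updating after finitely many mistakes; the multilayer perceptron discussed in the chapter composes many such units with nonlinearities to model data, such as face images, that no single hyperplane (or subspace) can capture.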
This chapter contains introductory material, including visual examples that motivate the rest of the book. It explains the book formatting, previews the notation, provides pointers for getting started with Julia, and briefly reviews fields and vector spaces.
This chapter discusses the important problem of matrix completion, where we know some, but not all, elements of a matrix and want to “complete” the matrix by filling in the missing entries. This problem is ill-posed in general because one could assign arbitrary values to the missing entries, unless one assumes some model for the matrix elements. The most common model is that the matrix is low rank, an assumption that is reasonable in many applications. The chapter defines the problem and describes an alternating projection approach for noiseless data. It discusses algorithms for the practical case of missing and noisy data. It extends the methods to consider the effects of outliers via robust principal component analysis (robust PCA), and applies this to video foreground/background separation. It describes nonnegative matrix factorization, including the case of missing data. A particularly famous application of low-rank matrix completion is the “Netflix problem”; this topic is also relevant to dynamic magnetic resonance image reconstruction and numerous other applications with missing data (incomplete observations).
Previous chapters considered the Euclidean norm, the spectral norm, and the Frobenius norm. These three norms are particularly important, but there are many other important norms for applications. This chapter discusses vector norms, matrix norms, and operator norms, and uses these norms to analyze the convergence of sequences. It revisits the Moore–Penrose pseudoinverse from a norm-minimizing perspective. It applies norms to the orthogonal Procrustes problem and its extensions.
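As a concrete illustration of the norm-minimization theme, here is the classic SVD solution to the orthogonal Procrustes problem: find an orthogonal Q minimizing the Frobenius norm ||QA - B||_F. The helper name and test matrices are illustrative assumptions; the SVD-based solution Q = UVᵀ, where UΣVᵀ is the SVD of BAᵀ, is the standard one.

```python
import numpy as np

def procrustes(A, B):
    """Orthogonal Procrustes: return orthogonal Q minimizing ||Q A - B||_F."""
    U, _, Vt = np.linalg.svd(B @ A.T)
    return U @ Vt

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 10))
Q_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # a known orthogonal matrix
B = Q_true @ A  # rotate A so the best-fitting Q is known
Q = procrustes(A, B)
err = np.linalg.norm(Q @ A - B)  # Frobenius-norm fit error
```

When B is an exact orthogonal transformation of A (and AAᵀ has full rank), the recovered Q matches the true rotation up to numerical precision; the chapter's extensions handle scaling and related variants.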