Connecting theory with practice, this systematic and rigorous introduction covers the fundamental principles, algorithms and applications of key mathematical models for high-dimensional data analysis. Comprehensive in its approach, it provides unified coverage of many different low-dimensional models and analytical techniques, including sparse and low-rank models, and both convex and non-convex formulations. Readers will learn how to develop efficient and scalable algorithms for solving real-world problems, and how to apply the computational tools they learn in several application contexts, supported by numerous examples and exercises throughout. Applications presented include scientific imaging, communication, face recognition, 3D vision, and deep networks for classification. With code available online, this is an ideal textbook for senior and graduate students in computer science, data science, and electrical engineering, as well as for those taking courses on sparsity, low-dimensional structures, and high-dimensional data. Foreword by Emmanuel Candès.
In the past decade or so, (deep) neural networks have captured people’s imagination through their empirical success in learning problems involving real-world high-dimensional data such as images, speech, and text [LBH15]. Nevertheless, there is quite a bit of mystery as to how deep networks achieve such striking results. Modern deep networks are typically designed through trial and error.
In Part I of the book, which developed the theory, we showed that under fairly broad conditions on the number of measurements, many important classes of structured signals can be recovered via computationally tractable optimization problems, such as ℓ1 minimization for recovering sparse signals and nuclear norm minimization for recovering low-rank matrices.
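To make the first of these concrete, here is a minimal sketch (not from the book; all names are illustrative) of sparse recovery via ℓ1-regularized least squares, solved with ISTA, a standard proximal-gradient method that alternates a gradient step on the quadratic term with entrywise soft-thresholding:

```python
# Minimal sketch of sparse recovery via l1 minimization, in the LASSO form
#   min_x  0.5 * ||A x - y||_2^2 + lam * ||x||_1,
# solved with ISTA (iterative soft-thresholding). Illustrative only.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.05, n_iters=500):
    """Recover a sparse x from y = A x (with fewer measurements than unknowns)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)           # gradient of the smooth quadratic term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy demo: a 10-sparse signal in R^400, observed via 100 random measurements.
rng = np.random.default_rng(0)
m, n, k = 100, 400, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

With noiseless measurements and a small regularization weight, the recovered x_hat closely approximates the true sparse vector; in practice one would turn to a dedicated solver, such as those in the book's accompanying online code.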
As engineering and the sciences become increasingly data- and computation-driven, the role of optimization has expanded to touch almost every stage of the data analysis pipeline, from signal and data acquisition to modeling, analysis, and prediction.