Connecting theory with practice, this systematic and rigorous introduction covers the fundamental principles, algorithms and applications of key mathematical models for high-dimensional data analysis. Comprehensive in its approach, it provides unified coverage of many different low-dimensional models and analytical techniques, including sparse and low-rank models, and both convex and non-convex formulations. Readers will learn how to develop efficient and scalable algorithms for solving real-world problems, supported by numerous examples and exercises throughout, and how to use the computational tools learnt in several application contexts. Applications presented include scientific imaging, communication, face recognition, 3D vision, and deep networks for classification. With code available online, this is an ideal textbook for senior and graduate students in computer science, data science, and electrical engineering, as well as for those taking courses on sparsity, low-dimensional structures, and high-dimensional data. Foreword by Emmanuel Candès.
This chapter introduces the background, development history, and typical applications of edge learning. It also identifies the main challenges facing edge learning in terms of data, communication, and computation.
In this chapter, we first present convergence results for Stochastic Gradient Descent (SGD), the method usually adopted to solve machine learning problems. We then introduce advanced training algorithms, including momentum SGD, hyper-parameter-based algorithms, and optimization algorithms for deep learning models. Finally, we give theoretical frameworks for handling the stale gradients incurred by asynchronous parallel (ASP) or stale synchronous parallel (SSP) training.
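To make the momentum SGD update rule concrete, here is a minimal sketch in Python/NumPy. The quadratic objective and the hyper-parameter names (`lr`, `momentum`) are illustrative assumptions of ours, not notation taken from the chapter:

```python
import numpy as np

def grad(theta):
    # Gradient of an illustrative quadratic objective f(theta) = ||theta||^2 / 2.
    return theta

def momentum_sgd(theta, lr=0.1, momentum=0.9, steps=100):
    """Heavy-ball momentum SGD: v <- mu*v - lr*g, theta <- theta + v."""
    v = np.zeros_like(theta)
    for _ in range(steps):
        g = grad(theta)            # in practice, a stochastic mini-batch gradient
        v = momentum * v - lr * g  # velocity accumulates past gradients
        theta = theta + v
    return theta

theta = momentum_sgd(np.array([5.0, -3.0]))
print(theta)  # converges toward the minimizer at the origin
```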
This chapter first focuses on model compression and hardware acceleration for edge learning. It covers many aspects, including learning algorithms, learning-oriented communication, distributed machine learning with hardware adaptation, privacy protection based on trusted execution environments (TEEs), and algorithm-hardware joint optimization. The essential objective is to implement an integrated algorithm-hardware platform that optimizes the implementation of emerging machine learning algorithms, fully exploits the potential of modern computation hardware, and promotes novel intelligent applications for sophisticated services. We then introduce straggler-tolerance schemes that prevent faulty nodes from seriously degrading overall training performance while adequately utilizing the computation power of slow nodes. Finally, we introduce computation acceleration technologies for inference at the edge.
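The abstract does not pin down a specific straggler-tolerance scheme, so the sketch below illustrates only one common idea: waiting for the gradients of the fastest k of n workers and ignoring the rest. The function name, the timing model, and all parameters are our own assumptions, not the chapter's method:

```python
import numpy as np

def aggregate_fastest(worker_grads, finish_times, k):
    """Average the gradients of the k fastest workers, ignoring stragglers.

    worker_grads: list of gradient arrays, one per worker
    finish_times: (simulated) completion time of each worker
    k:            number of gradients to wait for before updating
    """
    order = np.argsort(finish_times)            # fastest workers first
    chosen = [worker_grads[i] for i in order[:k]]
    return np.mean(chosen, axis=0)

rng = np.random.default_rng(0)
n_workers, dim = 8, 4
grads = [rng.normal(size=dim) for _ in range(n_workers)]
times = rng.exponential(scale=1.0, size=n_workers)  # heavy tail models a straggler
update = aggregate_fastest(grads, times, k=6)
print(update)
```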