This chapter studies the problem of (unsupervised) community detection on large random graphs, with a focus on the dense-graph setting, for both the stochastic block model and its degree-corrected variant. Sparse graphs are also discussed, via a statistical-physics-inspired heuristic approach.
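To give a flavor of the dense regime, here is a minimal sketch (in Python with numpy; not the book's accompanying code, and the graph size and edge probabilities are arbitrary illustrative choices) of spectral community detection on a two-community stochastic block model, where the second dominant eigenvector of the adjacency matrix carries the community structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-community symmetric SBM: intra-edge prob p_in, inter-edge prob p_out.
n, p_in, p_out = 500, 0.12, 0.04
labels = np.repeat([0, 1], n // 2)
P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
A = np.triu(rng.random((n, n)) < P, 1).astype(float)
A = A + A.T                                  # symmetric adjacency, no self-loops

# Dense-regime spectral method: cluster on the second dominant eigenvector.
eigvals, eigvecs = np.linalg.eigh(A)         # eigenvalues in ascending order
v2 = eigvecs[:, -2]                          # eigenvector of 2nd-largest eigenvalue
pred = (v2 > 0).astype(int)

acc = max(np.mean(pred == labels), np.mean(pred != labels))  # up to label swap
print(f"community recovery accuracy: {acc:.3f}")
```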
This chapter covers large neural networks with random weights, in both feed-forward and recurrent settings. While rather different from modern deep neural networks, these preliminary results shed new light on the interplay between data, network structure, and nonlinear neurons, leading to the somewhat surprising double-descent phenomenon. The impact of gradient-based optimization methods on the resulting network, as well as more advanced considerations for deep networks, are also discussed.
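The double-descent phenomenon can be reproduced in a few lines with a random-feature regression model. The following is a hedged sketch (assuming numpy; the dimensions, ReLU features, linear teacher, and min-norm least-squares fit are illustrative choices, not the chapter's exact setting); the test error typically peaks near the interpolation point N ≈ n and then descends again:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test = 20, 100, 1000
beta = rng.standard_normal(d) / np.sqrt(d)   # linear "teacher" direction

def data(n):
    X = rng.standard_normal((n, d))
    y = X @ beta + 0.1 * rng.standard_normal(n)
    return X, y

X_tr, y_tr = data(n_train)
X_te, y_te = data(n_test)

# Random-feature model: phi(X) = ReLU(X W), min-norm least-squares fit.
for N in [10, 50, 90, 100, 110, 200, 1000]:
    W = rng.standard_normal((d, N)) / np.sqrt(d)
    F_tr, F_te = np.maximum(X_tr @ W, 0), np.maximum(X_te @ W, 0)
    a = np.linalg.pinv(F_tr) @ y_tr          # min-norm least-squares solution
    mse = np.mean((F_te @ a - y_te) ** 2)
    print(f"N={N:5d}  test MSE={mse:.3f}")   # error typically peaks near N = n
```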
In this chapter, concrete examples of machine learning and statistical inference/estimation problems involving covariance-matrix-based estimators are discussed. These include the generalized likelihood ratio test, the popular linear and quadratic discriminant analyses, subspace methods such as MUSIC for direction-of-arrival estimation, covariance distance estimation, and robust covariance estimation.
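As one concrete example from this list, below is a minimal MUSIC direction-of-arrival sketch (assuming numpy; the array size, snapshot count, noise level, and source angles are arbitrary illustrative values, not the chapter's setup). MUSIC projects steering vectors onto the noise subspace of the sample covariance matrix and looks for peaks of the resulting pseudospectrum:

```python
import numpy as np

rng = np.random.default_rng(0)
m, T = 10, 200                                   # sensors, snapshots
angles = np.deg2rad(np.array([-20.0, 35.0]))     # true source directions
k = len(angles)

def steer(theta):
    # Steering vectors of a half-wavelength-spaced uniform linear array.
    return np.exp(-1j * np.pi * np.arange(m)[:, None] * np.sin(theta))

S = (rng.standard_normal((k, T)) + 1j * rng.standard_normal((k, T))) / np.sqrt(2)
N = (rng.standard_normal((m, T)) + 1j * rng.standard_normal((m, T))) * 0.1
X = steer(angles) @ S + N                        # noisy array snapshots

R = X @ X.conj().T / T                           # sample covariance matrix
_, U = np.linalg.eigh(R)                         # ascending eigenvalues
En = U[:, :m - k]                                # noise subspace

grid = np.deg2rad(np.linspace(-90, 90, 1441))
spec = 1.0 / np.sum(np.abs(En.conj().T @ steer(grid)) ** 2, axis=0)
loc = np.where((spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:]))[0] + 1
top = loc[np.argsort(spec[loc])[-k:]]            # k highest local maxima
print("estimated DoAs (deg):", np.sort(np.rad2deg(grid[top])))
```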
This chapter discusses generalized linear classifiers that result from a convex optimization problem and in general take a nonexplicit form. Random matrix theory is combined with leave-one-out arguments to handle the technical difficulty due to this implicitness. Again, counterintuitive phenomena arise in popular machine learning methods such as logistic regression and SVM in the large-dimensional setting: a well-defined solution may not even exist, and when it does, it behaves dramatically differently from its small-dimensional counterpart.
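The non-existence issue can be probed numerically. By Cover's classical counting argument, n points in general position in p dimensions with random labels become linearly separable with high probability once p/n exceeds 1/2, and on separable data the unregularized logistic maximum-likelihood estimate diverges. A minimal separability check via a feasibility linear program (a sketch assuming numpy and scipy; sizes arbitrary):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

def separable(n, p):
    # Pure-noise two-class data: any separating direction is spurious.
    X = rng.standard_normal((n, p))
    y = rng.choice([-1.0, 1.0], size=n)
    # Feasibility LP: does some w satisfy y_i * (x_i @ w) >= 1 for all i?
    res = linprog(c=np.zeros(p), A_ub=-y[:, None] * X, b_ub=-np.ones(n),
                  bounds=[(None, None)] * p, method="highs")
    return res.status == 0                   # status 0 = feasible = separable

n = 400
for ratio in [0.1, 0.3, 0.5, 0.7, 1.0]:
    p = int(ratio * n)
    print(f"p/n={ratio:.1f}: separable={separable(n, p)}")
```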
This chapter covers the basics of random matrix theory, within the unified framework of the resolvent and deterministic-equivalent approach. Historical and foundational random matrix results are presented within the proposed framework, together with heuristic derivations as well as detailed proofs. Topics such as statistical inference and spiked models are covered. The concentration-of-measure framework, a recently developed yet very flexible and powerful technical approach, is discussed at the end of the chapter.
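The flavor of these foundational results is easy to reproduce: the eigenvalues of the sample covariance matrix of white Gaussian data follow the Marchenko–Pastur law. A minimal numerical check (a sketch assuming numpy; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 500, 2000
c = p / n                                        # dimension-to-sample ratio

X = rng.standard_normal((p, n))
evals = np.linalg.eigvalsh(X @ X.T / n)          # sample-covariance eigenvalues

# Marchenko-Pastur density for identity population covariance,
# supported on [(1 - sqrt(c))^2, (1 + sqrt(c))^2].
lm, lp = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
mp = lambda x: np.sqrt(np.maximum((lp - x) * (x - lm), 0)) / (2 * np.pi * c * x)

hist, edges = np.histogram(evals, bins=25, density=True)
centers = (edges[:-1] + edges[1:]) / 2
for ctr, h in zip(centers[::5], hist[::5]):      # crude text-mode comparison
    print(f"x={ctr:4.2f}  empirical={h:4.2f}  theory={mp(ctr):4.2f}")
```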
This chapter discusses fundamental kernel methods, with applications to supervised (kernel ridge regression and LS-SVM), semi-supervised (graph-Laplacian-based learning), and unsupervised (kernel spectral clustering) schemes. Focusing on the typical examples of distance and inner-product kernels, we show how the large-dimensional kernel approach differs from our small-dimensional intuition and, perhaps more importantly, how random matrix theory plays a central role in understanding and tuning various kernel-based methods.
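For the unsupervised case, here is a minimal kernel spectral clustering sketch on a two-class Gaussian mixture (assuming numpy; the dimensions, mean separation, median-distance bandwidth heuristic, and sign test on the second eigenvector are illustrative choices, not the chapter's tuned procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 100, 400
labels = np.repeat([0, 1], n // 2)
mu = np.zeros(p); mu[0] = 2.0                       # class means at -mu and +mu
X = rng.standard_normal((n, p)) + np.where(labels[:, None] == 0, -1, 1) * mu

# Gaussian (heat) kernel with the common median-distance bandwidth heuristic.
sq = (X ** 2).sum(1)
D2 = sq[:, None] + sq[None, :] - 2 * X @ X.T        # squared pairwise distances
K = np.exp(-D2 / (2 * np.median(D2)))

# Spectral clustering on the normalized kernel matrix.
d = K.sum(1)
L = K / np.sqrt(d[:, None] * d[None, :])            # normalized affinity
eigvals, eigvecs = np.linalg.eigh(L)
pred = (eigvecs[:, -2] > 0).astype(int)             # second top eigenvector

acc = max(np.mean(pred == labels), np.mean(pred != labels))  # up to label swap
print(f"clustering accuracy: {acc:.3f}")
```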
This chapter contrasts the fundamentally different mental images of large-dimensional machine learning with those of its small-dimensional counterpart, through the examples of sample covariance matrices and kernel matrices, on both synthetic and real data. Random matrix theory is presented as a flexible and powerful tool to assess, understand, and improve classical machine learning methods in this modern large-dimensional setting.
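One such mental image, that in large dimensions all pairwise distances between independent data points concentrate around a common value, can be observed directly (a quick numerical check assuming numpy; the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
for p in [2, 10, 100, 1000, 10000]:
    X = rng.standard_normal((200, p))
    sq = (X ** 2).sum(1)
    D2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    off = D2[np.triu_indices(200, 1)] / p           # normalized squared distances
    print(f"p={p:6d}  mean={off.mean():.3f}  "
          f"relative spread={off.std() / off.mean():.3f}")  # spread -> 0 as p grows
```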
This chapter exploits the concentration-of-measure approach for real-data modeling, via recent advances in deep generative adversarial networks (GANs). This assessment theoretically supports the surprisingly good match between theory and practice observed on real-world data in the previous chapters. A conclusion on the universality of large-dimensional machine learning is drawn at the end of the chapter.
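The key property at play is that Lipschitz maps of Gaussian vectors, which (spectrally normalized) GAN generators are, remain concentrated random vectors. A toy check with a random "GAN-like" generator (a hedged sketch assuming numpy; the architecture and the observable are illustrative only, not an actual GAN): the normalized norm of the output is a (1/√d)-Lipschitz observable, so its fluctuations vanish as the dimension grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, weights):
    # Toy "GAN-like" generator: a Lipschitz map of Gaussian noise
    # (random layers with 1-Lipschitz ReLU activations).
    h = z
    for W in weights:
        h = np.maximum(W @ h, 0)
    return h

for d in [50, 200, 800]:
    # Spectrally normalized random weights keep each layer 1-Lipschitz.
    weights = []
    for _ in range(3):
        W = rng.standard_normal((d, d)) / np.sqrt(d)
        W /= np.linalg.norm(W, 2)                  # spectral norm set to 1
        weights.append(W)
    # (1/sqrt(d))-Lipschitz observable of the output: its normalized norm.
    obs = np.array([np.linalg.norm(generator(rng.standard_normal(d), weights))
                    / np.sqrt(d) for _ in range(300)])
    print(f"d={d:4d}  mean={obs.mean():.3f}  std={obs.std():.4f}")  # std shrinks
```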
This book presents a unified theory of random matrices for applications in machine learning, offering a large-dimensional data vision that exploits concentration and universality phenomena. This enables a precise understanding, and possible improvements, of the core mechanisms at play in real-world machine learning algorithms. The book opens with a thorough introduction to the theoretical basics of random matrices, which supports a wide range of applications, from SVMs, through semi-supervised learning, unsupervised spectral clustering, and graph methods, to neural networks and deep learning. For each application, the authors discuss small- versus large-dimensional intuitions of the problem, followed by a systematic random matrix analysis of the resulting performance and possible improvements. All concepts, applications, and variations are illustrated numerically on synthetic as well as real-world data, with MATLAB and Python code provided on the accompanying website.