This chapter discusses the fundamentally different mental images of large-dimensional machine learning (versus its small-dimensional counterpart), through the examples of sample covariance matrices and kernel matrices, on both synthetic and real data. Random matrix theory is presented as a flexible and powerful tool to assess, understand, and improve classical machine learning methods in this modern large-dimensional setting.
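To make the large- versus small-dimensional contrast concrete, here is a minimal Python sketch (our own illustration, not the book's companion code; the values of n and p are arbitrary): for Gaussian data with identity population covariance, the eigenvalues of the sample covariance matrix do not concentrate at 1 when the dimension p is commensurable with the sample size n, but instead spread over the interval predicted by the Marchenko–Pastur law.

```python
import numpy as np

# Sample covariance of n i.i.d. p-dimensional standard Gaussian vectors.
# The population covariance is the identity, so classical (fixed-p, large-n)
# intuition says all eigenvalues should be close to 1.
n, p = 800, 400            # large-dimensional regime: p/n = 1/2
rng = np.random.default_rng(0)
X = rng.standard_normal((p, n))
sample_cov = X @ X.T / n
eigs = np.linalg.eigvalsh(sample_cov)

# Marchenko-Pastur prediction: the spectrum fills
# [(1 - sqrt(c))^2, (1 + sqrt(c))^2] with c = p/n.
c = p / n
mp_edges = ((1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2)
print("empirical eigenvalue range:", eigs.min(), eigs.max())
print("Marchenko-Pastur support  :", mp_edges)
```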
This chapter exploits a concentration-of-measure approach to real data modeling, via the recent advances in deep generative adversarial networks (GANs). This assessment theoretically supports the surprisingly good match between theory and practice observed on real-world data in previous chapters. A conclusion on the universality of large-dimensional machine learning is drawn at the end of the chapter.
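The key property behind this modeling is that a GAN generator is a Lipschitz map of Gaussian noise, so its outputs are concentrated random vectors. A minimal sketch of this phenomenon (our own toy generator with assumed dimensions, not the book's code): the Euclidean norm of such vectors has fluctuations that stay of order one while the norm itself grows with the dimension.

```python
import numpy as np

rng = np.random.default_rng(1)

def lipschitz_generator(z, W):
    # A toy one-layer "generator": ReLU of a linear map. With spectral
    # norm ||W|| <= 1 it is 1-Lipschitz, the key property that makes its
    # Gaussian-input outputs concentrated random vectors.
    return np.maximum(W @ z, 0.0)

for p in (100, 400, 1600):
    W = rng.standard_normal((p, p)) / (2 * np.sqrt(p))  # spectral norm ~ 1
    z = rng.standard_normal((p, 500))                   # 500 Gaussian inputs
    x = lipschitz_generator(z, W)
    norms = np.linalg.norm(x, axis=0)
    # Concentration of measure: the std of the norm stays O(1) as p grows,
    # so the relative fluctuation norms.std()/norms.mean() vanishes.
    print(p, round(norms.mean(), 2), round(norms.std(), 3))
```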
This book presents a unified theory of random matrices for applications in machine learning, offering a large-dimensional data vision that exploits concentration and universality phenomena. This enables a precise understanding, and possible improvements, of the core mechanisms at play in real-world machine learning algorithms. The book opens with a thorough introduction to the theoretical basics of random matrices, which serves as the foundation for a wide range of applications, from support vector machines (SVMs) through semi-supervised learning, unsupervised spectral clustering, and graph methods, to neural networks and deep learning. For each application, the authors discuss the small- versus large-dimensional intuitions of the problem, followed by a systematic random matrix analysis of the resulting performance and possible improvements. All concepts, applications, and variations are illustrated numerically on both synthetic and real-world data, with MATLAB and Python code provided on the accompanying website.
Chapter 10, in contrast to all the previous chapters, which focused on the performance of the downlink, analyzes the performance of the uplink of an ultra-dense network. Importantly, this chapter shows that the phenomena presented in – and the conclusions derived from – all the previous chapters also apply to the uplink, despite its distinctive features, e.g., uplink transmit power control and a different distribution of inter-cell interference sources. System-level simulations are used in this chapter to conduct the study.
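For readers unfamiliar with the power-control feature mentioned above, a common LTE-style rule is fractional uplink power control (a standard textbook formulation, not an equation quoted from the chapter):

```latex
% Fractional uplink power control (schematic, LTE-style): P_0 is a target
% received power level, L the pathloss to the serving base station, and
% alpha in [0, 1] the pathloss compensation factor.
P_{\mathrm{tx}} = \min\bigl\{ P_{\max},\; P_0 \, L^{\alpha} \bigr\}
```

Because each user equipment adapts its power to its own pathloss, the uplink interference seen at a base station comes from user equipments scattered across neighboring cells, which is why the interference source distribution differs from the downlink case.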
Chapter 9, using the new capacity scaling law presented in the previous chapter, explores three relevant network optimization problems: i) the small cell base station deployment/activation problem, ii) the network-wide user equipment admission/scheduling problem, and iii) the spatial spectrum reuse problem. Each problem is formally stated, and exemplary solutions are provided, together with a discussion of the intuition behind them.
Chapter 11 shows the benefits of dynamic time division duplexing with respect to a more static time division duplexing assignment of time resources in an ultra-dense network. As studied in previous chapters, the number of user equipments per small cell decreases significantly in a denser network. As a result, dynamically assigning time resources to the downlink and the uplink according to the load in each small cell avoids wasting resources and significantly enhances network capacity. The dynamic time division duplexing protocol is also modelled and analyzed through system-level simulations in this chapter, and its performance is carefully examined.
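A minimal sketch of the dynamic idea (our own illustration with hypothetical traffic figures, not the protocol specified in the chapter): each small cell splits its subframes between downlink and uplink in proportion to the buffered traffic in each direction, whereas static time division duplexing applies one fixed network-wide split.

```python
def tdd_split(dl_load, ul_load, subframes=10):
    """Allocate TDD subframes to DL/UL in proportion to per-cell load."""
    total = dl_load + ul_load
    if total == 0:
        return subframes // 2, subframes - subframes // 2
    dl = round(subframes * dl_load / total)
    dl = min(max(dl, 1), subframes - 1)  # keep at least one subframe each way
    return dl, subframes - dl

# Hypothetical per-cell buffered traffic (DL bits, UL bits). With few UEs
# per small cell, loads vary strongly across cells, so a fixed 6:4 split
# wastes subframes that dynamic TDD reassigns to the busy direction.
cells = [(9e6, 1e6), (2e6, 8e6), (5e6, 5e6)]
for dl_load, ul_load in cells:
    print(tdd_split(dl_load, ul_load))
```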
We prove several results on the asymptotics of the distributions of the non-normalized compound renewal processes (CRPs) Z(t) and Y(t). These results, known as integro-local theorems, are sharper than the central limit theorem: they concern the probabilities of Z(t) and Y(t) hitting intervals of small length in the normal deviation zone.
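Schematically (our paraphrase of the generic statement, with a_Z and σ_Z denoting the appropriate drift and scaling constants rather than notation quoted from the text), an integro-local theorem in the normal deviation zone takes the form

```latex
% Integro-local theorem (schematic): for a fixed small Delta > 0 and x in
% the normal deviation zone, with phi the standard normal density,
\mathbf{P}\bigl( Z(t) \in [x, x + \Delta) \bigr)
  = \frac{\Delta}{\sigma_Z \sqrt{t}}
    \,\phi\!\left( \frac{x - a_Z t}{\sigma_Z \sqrt{t}} \right)
    \bigl( 1 + o(1) \bigr), \qquad t \to \infty .
```

The central limit theorem only yields the probability of hitting intervals of length of order √t; the statement above resolves intervals of fixed small length Δ, which is what makes it sharper.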
Chapter 3 summarizes the modelling, derivations, and main findings of probably one of the most important works on the theoretical performance analysis of small cells, which concluded that fears of an inter-cell interference overload in small cell networks were not well grounded, and that the network capacity – or, in more technical terms, the area spectral efficiency – grows linearly with the number of deployed small cells. This work was the cornerstone of much of the subsequent research on small cell performance analysis.
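In symbols (a standard formulation from the stochastic geometry literature, not an equation quoted from the chapter): if the SINR distribution is invariant to the small cell density λ, the per-cell spectral efficiency is a density-independent constant and the area spectral efficiency inherits a linear scaling,

```latex
% Area spectral efficiency (schematic): lambda is the small cell density;
% invariance of the SINR distribution to lambda makes the expectation a
% constant, so ASE grows linearly in the number of deployed cells.
\mathrm{ASE}(\lambda)
  = \lambda \, \mathbb{E}\bigl[ \log_2(1 + \mathrm{SINR}) \bigr]
  \;\propto\; \lambda .
```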
We continue the study of integro-local probabilities initiated in Chapter 2 for the normal deviation zone. Now, assuming that the vector (τ, ζ) satisfies Cramér's moment condition, we study the integro-local probabilities in a wider zone, which, by analogy with random walks, can be called the Cramér deviation zone. This zone includes the zones of normal, moderately large, and "usual" large deviations.
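For concreteness (a schematic statement in the common notation where τ is an inter-renewal time and ζ the corresponding jump; the symbols replace characters garbled in the original abstract), Cramér's moment condition requires the joint exponential moment to be finite near the origin:

```latex
% Cramer's moment condition (schematic): finiteness of the joint
% exponential moment of the inter-renewal time tau and the jump zeta
% for exponents in a neighbourhood of the origin.
\mathbf{E}\, e^{\mu \tau + \lambda \zeta} < \infty
\quad \text{for all } (\mu, \lambda) \text{ in a neighbourhood of } (0, 0).
```

This condition gives exponential control of the tails, which is what allows the integro-local analysis to extend beyond the normal deviation zone into the Cramér deviation zone.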
Chapter 1 introduces the capacity challenge faced by modern wireless communication systems and presents ultra-dense wireless networks as an appealing solution to address it. Moreover, it provides background on the small cell concept – the fundamental building block of an ultra-dense wireless network – describing its main characteristics, benefits and drawbacks. This chapter also presents the structure of the book and the fundamental concepts required for its systematic understanding.