In this chapter, the problem of sparsity-aware distributed learning is studied. In particular, we consider the setup of an ad hoc network whose nodes are tasked with estimating, in a collaborative way, a sparse parameter vector of interest. Both batch and online algorithms will be discussed. In the batch learning context, the distributed LASSO algorithm and a distributed greedy technique will be presented. In the online learning context, an LMS-based sparsity-promoting algorithm built around the ℓ1 norm, as well as a greedy distributed LMS, will be discussed. Moreover, a set-theoretic sparsity-promoting distributed technique will be examined. Finally, the performance of the presented algorithms will be validated in several scenarios.
Introduction
The volume of data captured worldwide is growing at an exponential rate, posing significant challenges for their processing and analysis. Data mining, regression, and prediction/forecasting have played a leading role in extracting insights and useful information from raw data. Such techniques cover a wide range of applications in areas such as biomedicine, econometrics, sales forecasting, and content preference modeling. The massive amount of data produced, together with their increased complexity (new types of data emerge) and their involvement in the Internet of Things [1] paradigm, calls for further advances in already established machine learning techniques in order to cope with the new challenges.
Even though data tend to live in high-dimensional spaces, they often exhibit a high degree of redundancy; that is, their useful information can be represented using far fewer attributes than their original dimensionality. Often, this redundancy can be exploited effectively by treating the data in a transformed domain, in which they can be represented by sparse models; that is, models comprising only a few nonzero parameters. Moreover, sparsity is an attribute met in a plethora of models of natural signals, since nature tends to be parsimonious. Such sparse structures can be exploited in big data applications in order to reduce processing demands. The advent of compressed sensing has led to novel theoretical as well as algorithmic tools, which can be efficiently employed for sparsity-aware learning, e.g., [2–7].
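To make the idea of sparse modeling concrete, the following minimal sketch (in Python/NumPy, not taken from the chapter) recovers a sparse parameter vector from a small number of noisy linear measurements by solving the LASSO problem with iterative soft-thresholding (ISTA); the dimensions, regularization weight, and toy data are purely illustrative assumptions.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of the l1 norm (element-wise soft thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(A, y, lam, n_iter=500):
        # Iterative soft-thresholding (ISTA) for the LASSO problem
        #   min_w 0.5 * ||y - A w||^2 + lam * ||w||_1
        step = 1.0 / (np.linalg.norm(A, 2) ** 2)   # 1 / Lipschitz constant of the gradient
        w = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ w - y)               # gradient of the least-squares term
            w = soft_threshold(w - step * grad, step * lam)
        return w

    # Toy setup: a 400-dimensional vector with only 10 nonzero entries,
    # observed through 100 noisy random linear measurements.
    rng = np.random.default_rng(0)
    w_true = np.zeros(400)
    w_true[rng.choice(400, size=10, replace=False)] = rng.standard_normal(10)
    A = rng.standard_normal((100, 400)) / np.sqrt(100)
    y = A @ w_true + 0.01 * rng.standard_normal(100)
    w_hat = ista(A, y, lam=0.01)
    print("coefficients above threshold:", np.count_nonzero(np.abs(w_hat) > 1e-3))

In a distributed setting, each node would hold only part of the measurements and the nodes would cooperate, for example via consensus-type iterations, to reach a common sparse estimate; the algorithms discussed in this chapter address exactly that setting.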
In many cases, processing large amounts of data is not only cumbersome but may prove infeasible due to a lack of processing power and/or storage capabilities.
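One remedy, reflected in the online algorithms mentioned above, is to process the data sample by sample rather than as a single batch. The following minimal sketch (Python/NumPy, not taken from the chapter) shows a single-node, zero-attracting variant of a sparsity-promoting LMS: the usual LMS correction followed by a small ℓ1-subgradient step. The step size mu, the sparsity weight rho, and the toy data stream are illustrative assumptions only.

    import numpy as np

    def sparse_lms(X, y, mu=0.01, rho=1e-4):
        # Zero-attracting LMS: the standard LMS update followed by an
        # l1-subgradient step that shrinks small coefficients toward zero.
        w = np.zeros(X.shape[1])
        for x_t, y_t in zip(X, y):
            e_t = y_t - x_t @ w          # a priori estimation error
            w = w + mu * e_t * x_t       # standard LMS correction
            w = w - rho * np.sign(w)     # sparsity-promoting (zero-attracting) term
        return w

    # Toy stream: a 50-dimensional system with only 3 active coefficients.
    rng = np.random.default_rng(1)
    w_true = np.zeros(50)
    w_true[[3, 17, 42]] = [1.0, -0.5, 2.0]
    X = rng.standard_normal((5000, 50))
    y = X @ w_true + 0.01 * rng.standard_normal(5000)
    w_hat = sparse_lms(X, y)
    print(np.round(w_hat[[3, 17, 42]], 2))

Its distributed counterparts, in which neighboring nodes exchange and fuse their local estimates, are among the algorithms examined later in this chapter.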