The chapter discusses how to process data defined on irregular discrete domains, an emerging area called graph signal processing (GSP). The type of graph considered consists of a network with distributed vertices and weighted edges defining the neighborhoods and connections among the nodes. The graph signal is collected in a vector whose entries represent the values of the signal at the nodes at a given time. A common issue in GSP, given the irregular structure of the data, is the sampling problem, where some form of interpolation is possible whenever the graph signals are bandlimited or nearly bandlimited. These interpolations can be performed by extending conventional adaptive filtering to signals distributed on graphs, where there is no traditional data structure. The chapter presents the LMS, NLMS, and RLS algorithms for GSP, along with their analyses and their application to estimating bandlimited signals defined on graphs. In addition, the chapter presents a general framework for data-selective adaptive algorithms for GSP.
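As a rough illustration of the idea, the sketch below runs an LMS-style recursion of the general form x[k+1] = x[k] + mu·B·D·(y[k] − x[k]), where B projects onto the bandlimited subspace spanned by the lowest graph frequencies and D masks the sampled nodes. The path graph, bandwidth, sampled-node set, and step size are all illustrative assumptions, not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative path graph with 8 nodes: Laplacian and its GFT basis.
n = 8
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
lam, U = np.linalg.eigh(L)          # GFT basis (Laplacian eigenvectors)

# Bandlimited ground-truth signal: only the 3 lowest graph frequencies.
F = 3
x_true = U[:, :F] @ rng.standard_normal(F)
B = U[:, :F] @ U[:, :F].T           # projector onto the bandlimited subspace

# Sample 5 of the 8 nodes (assumed sampling set).
sampled = np.array([1, 1, 0, 1, 1, 0, 1, 0], dtype=float)
D = np.diag(sampled)

# LMS-style iterations on noisy streaming snapshots of the graph signal.
mu, x_hat = 0.5, np.zeros(n)
for _ in range(2000):
    y = x_true + 0.01 * rng.standard_normal(n)   # noisy observation
    x_hat = x_hat + mu * B @ D @ (y - x_hat)

err = np.linalg.norm(x_hat - x_true)
print(err)  # small: the unsampled nodes are interpolated via bandlimitedness
```

The point of the sketch is that the signal is recovered at all nodes, including the unsampled ones, because the bandlimited projector ties the nodes together.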
The chapter briefly introduces the main concepts of array signal processing, emphasizing those related to adaptive beamforming, and discusses how to impose linear constraints on adaptive filtering algorithms to achieve the beamforming effect. Adaptive beamforming, which emphasizes the incoming signal impinging from a known direction by means of an adaptive filter, is the primary objective of the array signal processing addressed in this chapter. We start this study with the narrowband beamformer. The constrained LMS, RLS, conjugate gradient, and SM-AP algorithms are introduced, along with the generalized sidelobe canceller and the Householder constrained structures; sparsity-promoting adaptive beamforming algorithms are also addressed in this chapter. The chapter then introduces the concepts of frequency-domain and time-domain broadband adaptive beamforming and shows their equivalence. It wraps up with brief discussions and reference suggestions on essential topics related to adaptive beamforming, including the numerical robustness of adaptive beamforming algorithms.
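A minimal sketch of the constrained LMS idea (in the spirit of Frost's algorithm) for a narrowband uniform linear array is given below: the weight vector is driven to minimize output power while a projection keeps unit gain toward the look direction. The array size, signal directions, powers, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

m = 8                               # sensors, half-wavelength spacing
def steer(theta):                   # ULA steering vector for angle theta (rad)
    return np.exp(-1j * np.pi * np.arange(m) * np.sin(theta))

c = steer(0.0)[:, None]             # constraint: unit gain at broadside
g = np.array([1.0 + 0j])
f = c @ np.linalg.solve(c.conj().T @ c, g)                       # feasible point
P = np.eye(m) - c @ np.linalg.solve(c.conj().T @ c, c.conj().T)  # projector

a_int = steer(np.deg2rad(40))       # strong interferer direction (assumed)
mu, w = 2e-4, f.copy().ravel()
for _ in range(20000):
    s = rng.standard_normal() + 1j * rng.standard_normal()       # desired symbol
    i = 3.0 * (rng.standard_normal() + 1j * rng.standard_normal())
    x = steer(0.0) * s + a_int * i + 0.1 * (
        rng.standard_normal(m) + 1j * rng.standard_normal(m))
    y = w.conj() @ x                # beamformer output
    # Gradient step on output power, then project back onto w^H c = g.
    w = P @ (w - mu * x * np.conj(y)) + f.ravel()

gain_look = abs(w.conj() @ steer(0.0))   # stays at 1 by construction
gain_int = abs(w.conj() @ a_int)         # driven down by adaptation
print(gain_look, gain_int)
```

The projection-plus-restore structure is what keeps the linear constraint satisfied at every iteration while the unconstrained part of the weights adapts to null the interference.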
This chapter explains the basic concepts of kernel-based methods, a widely used tool in machine learning. The idea is to present online parameter estimation of nonlinear models using kernel-based tools. The chapter's aim is to introduce the kernel versions of classical algorithms such as least mean square (LMS), recursive least squares (RLS), affine projection (AP), and set-membership affine projection (SM-AP). In particular, we discuss how to keep the kernel dictionary finite through a series of model-reduction strategies. This way, all the kernel algorithms discussed are tailored for online implementation.
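The sketch below illustrates the flavor of a kernel LMS recursion with a Gaussian kernel, using a simple coherence-style rule to keep the dictionary finite (close in spirit to quantized KLMS). The target function, kernel width, step size, and coherence threshold are assumptions chosen for the demo, not the chapter's specific design.

```python
import numpy as np

rng = np.random.default_rng(2)

def kernel(x, c, width=0.4):
    return np.exp(-((x - c) ** 2) / (2 * width ** 2))

mu, coherence_max = 0.1, 0.9
centers, weights = [], np.array([])     # the (finite) kernel dictionary

f = lambda x: np.sin(3 * x)             # unknown nonlinear mapping to learn
for _ in range(5000):
    x = rng.uniform(-2, 2)
    d = f(x) + 0.01 * rng.standard_normal()
    k = np.array([kernel(x, c) for c in centers])
    y = float(k @ weights) if centers else 0.0
    e = d - y
    if not centers or k.max() < coherence_max:
        # New input is not well represented: grow the dictionary.
        centers.append(x)
        weights = np.append(weights, mu * e)
    else:
        # Otherwise, update the weight of the closest existing center.
        weights[int(np.argmax(k))] += mu * e

# Evaluate the learned model at a probe point.
xp = 0.7
yp = sum(w * kernel(xp, c) for w, c in zip(weights, centers))
print(len(centers), abs(yp - f(xp)))
```

Without the dictionary rule the expansion would gain one kernel per sample (5000 terms); the coherence test caps it at a few dozen centers, which is what makes the online implementation practical.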
This chapter provides a brief description of the classical adaptive filtering algorithms, starting by defining the actual objective function each algorithm minimizes. It also includes a summary of their expected performance according to results available in the literature.
The chapter shows how the classical adaptive filtering algorithms can be adapted to distributed learning. In distributed learning, there is a set of adaptive filters placed at nodes, each utilizing local input and desired signals. These distributed networks of sensor nodes are located at distinct positions, which might improve the reliability and robustness of the parameter estimation in comparison to stand-alone adaptive filters. In distributed adaptive networks, parameter estimation might be obtained in a centralized or a decentralized form. The centralized case processes the signals from all nodes of the network in a single fusion center, whereas in the decentralized case, processing is performed locally, followed by a proper combination of partial estimates to produce a consensus parameter estimate. The main drawbacks of the centralized configuration are its data communication and computational costs, particularly in networks with a large number of nodes. On the other hand, the decentralized estimators require less data to feed the estimators and improve robustness. The chapter provides a discussion on equilibrium and consensus using arguments drawn from the pari-mutuel betting system. The expert opinion pool is the concept used to induce improved estimation and data modeling, utilizing DeGroot's algorithm and Markov chains as tools to establish equilibrium at consensus. The chapter also introduces the distributed versions of the LMS and RLS adaptive filtering algorithms, with emphasis on the decentralized parameter estimation case. It also addresses how data broadcasting can be confined to a subset of nodes so that the overall network reduces power consumption and bandwidth usage. Then, the chapter discusses a strategy to incorporate data selection based on SM adaptive filtering.
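A minimal sketch of the decentralized idea is a diffusion (adapt-then-combine) LMS network: each node runs a local LMS step on its own data, then averages its estimate with its neighbors', and all nodes converge to a common estimate. The four-node topology, uniform combination weights, and step size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

w_true = np.array([0.5, -1.0, 2.0])     # common unknown parameter vector
n_nodes, order, mu = 4, 3, 0.05

# Assumed ring-like network; each neighborhood includes the node itself.
neighbors = {0: [0, 1, 3], 1: [0, 1, 2], 2: [1, 2, 3], 3: [0, 2, 3]}
W = [np.zeros(order) for _ in range(n_nodes)]

for _ in range(3000):
    psi = []
    for k in range(n_nodes):
        x = rng.standard_normal(order)                 # local regressor
        d = x @ w_true + 0.05 * rng.standard_normal()  # local desired signal
        e = d - x @ W[k]
        psi.append(W[k] + mu * e * x)                  # adapt: local LMS step
    for k in range(n_nodes):
        # Combine: average intermediate estimates over the neighborhood.
        W[k] = np.mean([psi[j] for j in neighbors[k]], axis=0)

worst = max(np.linalg.norm(W[k] - w_true) for k in range(n_nodes))
print(worst)   # every node agrees with the true parameters
```

Note that no fusion center appears anywhere: each node only exchanges estimates with its immediate neighbors, which is what keeps the communication cost local.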
Chapter 2 presents several strategies to exploit sparsity in the parameters being estimated in order to obtain better estimates and accelerate convergence, two advantages of paramount importance when dealing with real problems requiring the estimation of many parameters. In these cases, the classical adaptive filtering algorithms exhibit a slow and often unacceptable convergence rate. In this chapter, many algorithms capable of exploiting sparse models are presented. Also, the two most widely used approaches to exploiting sparsity are presented, and their pros and cons are discussed. The first approach explicitly models sparsity by relying on sparsity-promoting regularization functions. The second approach utilizes updates proportional to the magnitude of the coefficient being updated, thus accelerating the convergence of large-magnitude coefficients. After reading this chapter, the reader will not only obtain a deeper understanding of the subject but also be able to adapt or develop algorithms based on their own needs.
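The second approach can be sketched with a proportionate NLMS-style update (in the spirit of the PNLMS family): each coefficient receives a step size scaled by its current magnitude, so the few active coefficients of a sparse system converge quickly. The filter length, sparse system, and regularization constants below are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 32
w_true = np.zeros(n)
w_true[[3, 17, 25]] = [1.0, -0.7, 0.4]   # sparse unknown system

mu, eps, delta = 0.5, 1e-3, 1e-2
w = np.zeros(n)
for _ in range(4000):
    x = rng.standard_normal(n)
    d = x @ w_true + 0.001 * rng.standard_normal()
    e = d - x @ w
    # Proportionate gains: step size proportional to |w_i|, regularized by
    # delta so that small/inactive coefficients can still move.
    g = np.abs(w) + delta
    g = g / g.sum()
    w = w + mu * e * g * x / (x @ (g * x) + eps)

final_err = np.linalg.norm(w - w_true)
print(final_err)
```

With uniform gains (g constant) this reduces to plain NLMS; the magnitude-proportional g is what redirects adaptation energy toward the three active taps.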
Learn to solve the unprecedented challenges facing online learning and adaptive signal processing in this concise, intuitive text. The ever-increasing amount of data generated every day requires new strategies to tackle issues such as combining data from a large number of sensors; improving spectral usage by utilizing multiple antennas with adaptive capabilities; and learning from signals placed on graphs, which generate unstructured data. Solutions to all of these and more are described in a condensed and unified way, enabling you to extract valuable information from data and signals in a fast and economical way. The up-to-date techniques explained here can be implemented in simple electronic hardware, or as part of multi-purpose systems. Also featuring alternative explanations for online learning, including newly developed methods and data selection, and several easily implemented algorithms, this one-of-a-kind book is an ideal resource for graduate students, researchers, and professionals in online learning and adaptive filtering.