2 - Adaptive Filtering for Sparse Models
Published online by Cambridge University Press: 24 November 2022
Summary
Chapter 2 presents several strategies that exploit sparsity in the parameters being estimated in order to obtain better estimates and accelerate convergence, two advantages of paramount importance in real problems requiring the estimation of many parameters. In such cases, classical adaptive filtering algorithms exhibit a slow and often unacceptable convergence rate. This chapter presents many algorithms capable of exploiting sparse models and discusses the pros and cons of the two most widely used approaches to exploiting sparsity. The first approach models sparsity explicitly by relying on sparsity-promoting regularization functions. The second utilizes updates proportional to the magnitude of the coefficient being updated, thus accelerating the convergence of large-magnitude coefficients. After reading this chapter, the reader will not only gain a deeper understanding of the subject but also be able to adapt or develop algorithms based on their own needs.
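To make the second approach concrete, the proportionate-update idea can be sketched as a PNLMS-style filter: each coefficient's step size is weighted by its magnitude, so large (active) taps adapt faster than near-zero ones. This is a minimal illustrative sketch, not the chapter's own code; the function name, step-size and floor parameters, and the synthetic sparse system are assumptions chosen for the example.

```python
import numpy as np

def pnlms_identify(x, d, n_taps, mu=0.5, rho=0.01, delta=1e-2, eps=1e-6):
    """Identify a sparse FIR system with a proportionate NLMS (PNLMS)-style update.

    Each tap receives a gain proportional to its magnitude, so large
    coefficients converge faster; rho and delta floor the gains so that
    inactive taps can still adapt.
    """
    w = np.zeros(n_taps)
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]   # regressor, most recent sample first
        e = d[k] - w @ u                    # a priori estimation error
        # Proportionate gains, floored by rho * max(delta, max|w_i|)
        gamma = np.maximum(rho * max(delta, np.max(np.abs(w))), np.abs(w))
        g = gamma / gamma.sum()
        # Normalized, gain-weighted coefficient update
        w += mu * e * g * u / (u @ (g * u) + eps)
    return w

# Illustrative usage: identify a sparse 32-tap system from noisy observations
rng = np.random.default_rng(0)
h = np.zeros(32)
h[3], h[17] = 1.0, -0.5                      # sparse "true" impulse response
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 1e-3 * rng.standard_normal(len(x))
w_hat = pnlms_identify(x, d, n_taps=32)
```

With the gains `g` replaced by a uniform `1/n_taps`, the update reduces to ordinary NLMS, which highlights the single change the proportionate approach makes.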
- Type: Chapter
- Information: Online Learning and Adaptive Filters, pp. 16-59
- Publisher: Cambridge University Press
- Print publication year: 2022