
2 - Adaptive Filtering for Sparse Models

Published online by Cambridge University Press:  24 November 2022

Paulo S. R. Diniz (Universidade Federal do Rio de Janeiro)
Marcello L. R. de Campos (Universidade Federal do Rio de Janeiro)
Wallace A. Martins (University of Luxembourg)
Markus V. S. Lima (Universidade Federal do Rio de Janeiro)
Jose A. Apolinário, Jr (Military Institute of Engineering)

Summary

Chapter 2 presents several strategies for exploiting sparsity in the parameters being estimated in order to obtain better estimates and accelerate convergence, two advantages of paramount importance in real problems requiring the estimation of many parameters. In such cases, classical adaptive filtering algorithms exhibit a slow, often unacceptable, convergence rate. The chapter presents many algorithms capable of exploiting sparse models, together with the two most widely used approaches to exploiting sparsity and a discussion of their pros and cons. The first approach models sparsity explicitly through sparsity-promoting regularization functions. The second employs updates proportional to the magnitude of the coefficient being updated, thus accelerating the convergence of large-magnitude coefficients. After reading this chapter, the reader will not only gain a deeper understanding of the subject but also be able to adapt or develop algorithms suited to their own needs.
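To make the two approaches concrete, the sketch below contrasts a sparsity-promoting update (a zero-attracting LMS, which adds an l1-penalty sign term to the LMS recursion) with a proportionate update (PNLMS, whose per-tap gains scale with coefficient magnitude) on a toy sparse system-identification task. This is a minimal illustration under stated assumptions, not the chapter's exact algorithms: the 64-tap system, step sizes, and regularization constants are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse "unknown system": 64 taps, only 4 of them nonzero.
N = 64
w_true = np.zeros(N)
w_true[[3, 17, 40, 55]] = [1.0, -0.7, 0.5, 0.3]

def za_lms(w, x, e, mu, rho=1e-4):
    # Zero-attracting LMS: the plain LMS step plus an l1-regularization
    # (sign) term that shrinks inactive coefficients toward zero.
    return w + mu * e * x - rho * np.sign(w)

def pnlms(w, x, e, mu, rho=0.01, delta=0.01, eps=1e-6):
    # Proportionate NLMS: per-tap step sizes proportional to the current
    # coefficient magnitudes, so large coefficients converge faster.
    gamma = np.maximum(rho * max(delta, np.max(np.abs(w))), np.abs(w))
    g = gamma / gamma.mean()                       # proportionate gains
    return w + mu * e * (g * x) / (x @ (g * x) + eps)

def identify(update, mu, n_iter=3000, noise_std=1e-3):
    # Standard system-identification loop: white-noise input, noisy
    # desired signal, coefficient update supplied via `update`.
    w = np.zeros(N)
    x = np.zeros(N)                                # tapped-delay line
    for _ in range(n_iter):
        x = np.roll(x, 1)
        x[0] = rng.standard_normal()
        d = w_true @ x + noise_std * rng.standard_normal()
        e = d - w @ x                              # a priori error
        w = update(w, x, e, mu)
    return w

for name, update, mu in [("ZA-LMS", za_lms, 0.005), ("PNLMS", pnlms, 0.5)]:
    w_hat = identify(update, mu)
    print(f"{name} misalignment: {np.linalg.norm(w_hat - w_true):.4f}")
```

On sparse systems, both variants typically reach a given misalignment in far fewer iterations than plain (N)LMS; the l1 term, however, introduces a small bias on the active coefficients, which is one of the trade-offs the chapter discusses.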

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2022

