Published online by Cambridge University Press: 05 October 2015
In this book, we have presented in detail how sparsity can be used to regularize a variety of inverse problems in signal, image, and more generally data processing. We have shown that sparse recovery involves three main ingredients that are the keys to its success. These ingredients are:
▪ The dictionary: fixed dictionaries with fast analysis and synthesis operators, such as X-lets (including the wavelets and curvelets described in Chapters 3, 5, 11, and 12), allow us to build very efficient algorithms, both in computation time and in quality of the recovered solutions. We have seen in Chapter 8 that these fixed dictionaries can be gathered together to build a larger dictionary while still retaining fast operators. To go beyond fixed dictionaries, dictionary learning, presented in Chapter 10, appears to be a promising paradigm for learning dictionaries adapted to the analyzed data. We demonstrated its efficiency in astronomical image denoising, where it exceeds the performance of state-of-the-art denoising algorithms that use nonadaptive dictionaries.
▪ The noise modeling: having the right dictionary is only part of the story and is not enough. In most inverse problems, one needs to disentangle the useful signal/information from the noise. This requires properly taking into account the noise behavior in the observed data. In a denoising setting, we have seen in Chapter 6 how this can be achieved for different kinds of noise models, such as Gaussian, Poisson, mixed Gaussian-Poisson, and correlated noise. The noise model can then be used to derive the distribution of the coefficients, and therefore to detect, at a given confidence level, the significant coefficients that should be used to reconstruct the signal of interest.
▪ The variational problem and minimization algorithm: piecing together the first two ingredients (i.e., regularization and data fidelity), solving the inverse problem at hand amounts to solving a variational (optimization) problem. We have shown throughout the book how this can be achieved for several inverse problems (e.g., Chapters 7, 8, and 9). To solve the resulting optimization problem, we described in Chapter 7 a very general class of optimization schemes, namely proximal splitting algorithms, that are fast, are easy to implement, and enjoy provable convergence.
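The second ingredient above can be illustrated with a minimal sketch for the Gaussian case: estimate the noise level robustly from the coefficients themselves, then keep only coefficients whose amplitude exceeds k times the noise standard deviation (k = 3 corresponds roughly to a 99.7% confidence level). The MAD-based estimator and the spike signal below are illustrative choices, not taken from the book.

```python
import numpy as np

def mad_sigma(coeffs):
    """Robust noise estimate from the median absolute deviation.
    For zero-mean Gaussian noise, sigma ~ MAD / 0.6745."""
    return np.median(np.abs(coeffs - np.median(coeffs))) / 0.6745

def ksigma_threshold(coeffs, k=3.0):
    """Hard-threshold: keep only coefficients significant at the
    k*sigma level; everything below is treated as noise."""
    sigma = mad_sigma(coeffs)
    out = coeffs.copy()
    out[np.abs(out) < k * sigma] = 0.0
    return out

# Toy example: a few strong spikes buried in unit-variance Gaussian noise.
rng = np.random.default_rng(0)
clean = np.zeros(1024)
clean[::128] = 10.0                      # 8 sparse spikes of amplitude 10
noisy = clean + rng.normal(0.0, 1.0, 1024)
denoised = ksigma_threshold(noisy, k=3.0)
```

In practice the same test is applied in a transform domain (wavelet, curvelet, ...) rather than directly on the samples, since it is there that the signal of interest is sparse.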
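As a concrete instance of the third ingredient, the following sketch implements the simplest proximal splitting scheme, forward-backward splitting (ISTA), for the synthesis-prior problem min_x 0.5*||y - A x||^2 + lam*||x||_1: a gradient step on the data fidelity followed by the proximity operator of the l1 penalty (soft thresholding). The random sensing matrix, step size, and lam value are illustrative assumptions, not specific recommendations from the book.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t*||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Forward-backward splitting for min_x 0.5*||y - A x||^2 + lam*||x||_1.
    Step size 1/L with L the Lipschitz constant of the gradient."""
    L = np.linalg.norm(A, 2) ** 2        # squared spectral norm of A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)         # gradient of the data-fidelity term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy sparse recovery: 3-sparse signal, 64 measurements of a length-128 signal.
rng = np.random.default_rng(1)
A = rng.normal(size=(64, 128)) / np.sqrt(64)
x_true = np.zeros(128)
x_true[[5, 40, 90]] = [3.0, -2.0, 4.0]
y = A @ x_true
x_hat = ista(A, y, lam=0.05)
```

Accelerated variants (FISTA) and more general splittings handle sums of several nonsmooth terms with the same building blocks, as detailed in Chapter 7.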