
4 - Sparse Optimization

from Part II - Methodology and Mathematical Background

Published online by Cambridge University Press: 18 May 2017

Zhu Han, University of Houston
Mingyi Hong, Iowa State University
Dan Wang, Hong Kong Polytechnic University

Summary

This chapter reviews a collection of sparse optimization models and algorithms. The review focuses on introducing these algorithms along with their motivations and basic properties. The mathematical tools from the previous chapter will be relied upon heavily.

This chapter is organized as follows. Section 4.1 reviews a list of (convex) sparse optimization models, which deal with different types of signals and different kinds of noise; they can also include additional features as objectives and constraints that arise in practical problems. Section 4.2 demonstrates that convex sparse optimization problems can be transformed into equivalent cone programs and solved by off-the-shelf algorithms; yet it argues that these algorithms are usually inefficient for large-scale problems.

Sections 4.3–4.12 cover a large variety of algorithms for sparse optimization. These algorithms have different strengths and fit different applications. Their efficiency, for the most part, comes from the use of often closed-form shrinkage-like operations, which are introduced in Section 4.4. Section 4.5 then presents a prox-linear framework and gives several algorithms under this framework that are based on gradient descent and take advantage of shrinkage-like operations. These algorithms can often be linked with the BSUM framework discussed in Chapter 3. Duality is a very powerful tool in modern convex optimization, and sparse optimization is no exception. Section 4.6 derives a few dual models and algorithms for sparse optimization and discusses their properties. One class of dual algorithms (i.e., the ADMM algorithm introduced in Chapter 3) is very efficient and extremely versatile, applicable to nearly all convex sparse optimization problems; the framework and applications of these algorithms are given in Section 4.8.

Unlike the other algorithms, the homotopy algorithms in Section 4.9 produce not just one solution but a path of solutions for the LASSO model (given in (4.2b) below); not only are they efficient at producing multiple solutions corresponding to different parameter values, they are especially fast when the solution is extremely sparse. All of the above algorithms can be (often significantly) accelerated by appropriately setting the stepsize parameters that determine the amount of progress at each iteration; such techniques are reviewed in Section 4.10. Finally, unlike all the previous algorithms, greedy algorithms do not necessarily correspond to an optimization model; instead of systematically searching for solutions that minimize objective functions, they build sparse solutions by constructing their supports step by step in a greedy manner.
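To make the shrinkage-like operations and the prox-linear iteration concrete, the following is a minimal Python/NumPy sketch, assuming the common LASSO form min_x (1/2)||Ax − b||_2^2 + λ||x||_1; the model (4.2b) and the algorithm variants in the chapter may differ in details such as stepsize rules.

    import numpy as np

    def soft_threshold(v, tau):
        # Closed-form shrinkage operator: the proximal map of tau * ||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def ista(A, b, lam, num_iters=500):
        # Prox-linear (ISTA-style) iteration for 0.5*||Ax - b||^2 + lam*||x||_1:
        # a gradient step on the smooth part followed by shrinkage.
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(num_iters):
            grad = A.T @ (A @ x - b)           # gradient of the least-squares term
            x = soft_threshold(x - grad / L, lam / L)
        return x

    # Small synthetic demo: recover a 3-sparse vector from 40 noisy measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100)
    x_true[[3, 27, 64]] = [1.5, -2.0, 1.0]
    b = A @ x_true + 0.01 * rng.standard_normal(40)
    x_hat = ista(A, b, lam=0.1)
    print("recovered support:", np.nonzero(np.abs(x_hat) > 0.1)[0])

The shrinkage step is what keeps each iteration cheap: it acts componentwise and in closed form, which is the efficiency argument made above.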

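The greedy strategy described at the end of the summary can be sketched in the same spirit. Below is orthogonal matching pursuit (OMP), one standard greedy method; it is offered only as an illustration of step-by-step support construction, not as the specific greedy algorithms the chapter covers.

    import numpy as np

    def omp(A, b, k):
        # Orthogonal Matching Pursuit: grow the support one index at a time,
        # then refit the coefficients on the current support by least squares.
        m, n = A.shape
        residual = b.copy()
        support = []
        for _ in range(k):
            # Greedy step: pick the column most correlated with the residual.
            j = int(np.argmax(np.abs(A.T @ residual)))
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
            residual = b - A[:, support] @ coef
        x = np.zeros(n)
        x[support] = coef
        return x

Unlike the iterative solvers above, OMP never minimizes an explicit objective; the target sparsity k plays the role that the regularization parameter plays in the LASSO model.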
Publisher: Cambridge University Press
Print publication year: 2017
Chapter DOI: https://doi.org/10.1017/9781316408032.004
