In the first five chapters of this book, we introduced two main families of low-dimensional models for high-dimensional data: sparse models and low-rank models. In Chapter 5, we saw how we could combine these basic models to accommodate data matrices that are superpositions of sparse and low-rank matrices. This generalization allowed us to model richer classes of data, including data containing erroneous observations. In this chapter, we further generalize these basic models to a situation in which the object of interest consists of a superposition of a few elements from some set of “atoms” (Section 6.1). This construction is general enough to include all of the models discussed so far, as well as several other models of practical importance. With this general idea in mind, we then discuss unified approaches to studying the power of low-dimensional signal models for estimation, measured in terms of the number of measurements needed for exact recovery or recovery with sparse errors (Section 6.2). These analyses generalize and unify the ideas developed over the earlier chapters, and offer definitive results on the power of convex relaxation. Finally, in Section 6.3, we discuss limitations of convex relaxation, which in some situations will force us to consider nonconvex alternatives, to be studied in later chapters.
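To make the notion of an atomic decomposition concrete before Section 6.1, the following minimal sketch (not from the text) illustrates the standard atomic-norm construction for one simple atomic set: when the atoms are the signed standard basis vectors {±e_i}, the atomic norm — the smallest total weight needed to write a vector as a nonnegative combination of atoms — reduces to the familiar ℓ1 norm that underlies the sparse models of earlier chapters. The variable names and the use of a linear program are illustrative choices, not part of the book's development.

```python
import numpy as np
from scipy.optimize import linprog

# Atomic set: the 2d signed standard basis vectors {±e_i} in R^d.
d = 3
atoms = np.hstack([np.eye(d), -np.eye(d)])  # each column is one atom

x = np.array([1.5, -2.0, 0.0])

# Atomic norm: minimize the total atom weight sum_a c_a subject to
# reconstructing x exactly as sum_a c_a * a with c_a >= 0.
res = linprog(c=np.ones(2 * d), A_eq=atoms, b_eq=x, bounds=(0, None))

print(res.fun)           # atomic norm of x for this atomic set
print(np.abs(x).sum())   # l1 norm of x -- the two values agree
```

Other atomic sets recover the other models in the book: rank-one matrices of unit norm give the nuclear norm, and unions of atomic sets give norms suited to superpositions such as sparse-plus-low-rank.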