Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Notation
- Part One Machine Learning
- Part Two Optimal Recovery
- Part Three Compressive Sensing
- Executive Summary
- 14 Sparse Recovery from Linear Observations
- 15 The Complexity of Sparse Recovery
- 16 Low-Rank Recovery from Linear Observations
- 17 Sparse Recovery from One-Bit Observations
- 18 Group Testing
- Part Four Optimization
- Part Five Neural Networks
- Appendices
- References
- Index
17 - Sparse Recovery from One-Bit Observations
from Part Three - Compressive Sensing
Published online by Cambridge University Press: 21 April 2022
Summary
This chapter returns to the recovery of sparse vectors, but this time the linear measurements are quantized to retain only their signs. With the help of the restricted isometry property from ℓ2 to ℓ1, it is shown that the direction of sparse vectors can still be approximately recovered via a hard thresholding procedure or via a linear program. Furthermore, it is shown that the magnitude, too, can be recovered if an appropriate modification of the signed observations is allowed.
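As a rough illustration of the hard thresholding route mentioned above, the sketch below estimates the direction of a sparse vector from sign-only observations y = sign(Ax) by keeping the largest entries of A^T y and normalizing. The Gaussian measurement setup, sparsity level, and function name are illustrative assumptions, not the book's notation, and the sketch is not a definitive implementation of the chapter's procedure.

```python
import numpy as np

def one_bit_hard_thresholding(A, y, s):
    """Sketch: estimate the direction of an s-sparse vector from y = sign(A x).

    Keeps the s largest-magnitude entries of A^T y (hard thresholding) and
    normalizes; sign observations carry no magnitude information, so only
    the direction can be recovered this way.
    """
    z = A.T @ y                        # back-projection of the sign observations
    idx = np.argsort(np.abs(z))[-s:]   # indices of the s largest entries
    x_hat = np.zeros_like(z)
    x_hat[idx] = z[idx]                # hard thresholding step
    return x_hat / np.linalg.norm(x_hat)

# Illustrative usage with random Gaussian measurements of a sparse unit vector
rng = np.random.default_rng(0)
n, m, s = 200, 2000, 5
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x /= np.linalg.norm(x)
A = rng.standard_normal((m, n))
y = np.sign(A @ x)
x_hat = one_bit_hard_thresholding(A, y, s)
print(np.linalg.norm(x - x_hat))       # small when m is large relative to s log(n/s)
```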
- Type: Chapter
- Information: Mathematical Pictures at a Data Science Exhibition, pp. 139-148
- Publisher: Cambridge University Press
- Print publication year: 2022