Edited by
Jong Chul Ye, Korea Advanced Institute of Science and Technology (KAIST); Yonina C. Eldar, Weizmann Institute of Science, Israel; Michael Unser, École Polytechnique Fédérale de Lausanne
We provide a short, self-contained introduction to deep neural networks that is aimed at mathematically inclined readers. We promote the use of a vector–matrix formalism that is well suited to the compositional structure of these networks and that facilitates the derivation and description of the backpropagation algorithm. We present a detailed analysis of supervised learning for the two most common scenarios, (i) multivariate regression and (ii) classification, which rely on the minimization of least-squares and cross-entropy criteria, respectively.
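The vector–matrix view of backpropagation mentioned above can be illustrated with a minimal sketch (an illustrative example, not the chapter's own code): a two-layer network trained under the least-squares criterion, with the gradient of one weight verified by a finite difference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer network in vector-matrix form: y = W2 @ sigma(W1 @ x + b1) + b2
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)
sigma = lambda z: np.maximum(z, 0.0)          # ReLU nonlinearity

x = rng.standard_normal(3)                    # input vector
t = rng.standard_normal(2)                    # regression target

# Forward pass through the composition of layers
z1 = W1 @ x + b1
a1 = sigma(z1)
y = W2 @ a1 + b2
loss = 0.5 * np.sum((y - t) ** 2)             # least-squares criterion

# Backpropagation: the chain rule applied layer by layer, in reverse
delta2 = y - t                                # dL/dy
gW2 = np.outer(delta2, a1)                    # dL/dW2
delta1 = (W2.T @ delta2) * (z1 > 0)           # propagate through ReLU
gW1 = np.outer(delta1, x)                     # dL/dW1

# Finite-difference check of one weight gradient
eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
loss_p = 0.5 * np.sum((W2 @ sigma(W1p @ x + b1) + b2 - t) ** 2)
num_grad = (loss_p - loss) / eps
assert abs(num_grad - gW1[0, 0]) < 1e-4
```

The outer products make the vector–matrix bookkeeping explicit: each weight gradient is the outer product of the backpropagated error vector with the layer's input.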
Since the groundbreaking performance improvement by AlexNet at the ImageNet challenge, deep learning has provided significant gains over classical approaches in various fields of data science, including image reconstruction. The availability of large-scale training datasets and advances in neural network research have resulted in the unprecedented success of deep learning in various applications. Nonetheless, the success of deep learning remains mysterious. The basic building blocks of deep neural networks are convolution, pooling, and nonlinearity, which are primitive mathematical tools; yet the cascaded connection of these primitive tools yields performance superior to that of traditional approaches. To understand this mystery, one can return to the basic ideas of the classical approaches and examine their similarities to, and differences from, modern deep-neural-network methods. In this chapter, we explain the limitations of classical machine learning approaches and review the mathematical foundations that explain why deep neural networks have successfully overcome these limitations.
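The cascade of primitive building blocks described above can be sketched in a few lines (a 1-D toy example, assuming the usual correlation form of convolution used in deep learning, not any specific network from the book):

```python
import numpy as np

def conv1d(x, k):
    """Valid 1-D convolution (correlation form, as in deep learning)."""
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

def relu(x):
    """Pointwise nonlinearity."""
    return np.maximum(x, 0.0)

def maxpool(x, w=2):
    """Non-overlapping max pooling with window w."""
    return np.array([x[i:i + w].max() for i in range(0, len(x) - w + 1, w)])

x = np.array([1., -2., 3., 0., 1., -1., 2., 4.])  # toy input signal
k = np.array([1., 0., -1.])                       # edge-detecting filter

# The cascaded connection of the three primitives
out = maxpool(relu(conv1d(x, k)))
```

Each primitive is trivial in isolation; the expressive power arises from composing many such conv–nonlinearity–pooling stages and learning the filter weights from data.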
Inspired by the success of deep learning in computer vision tasks, deep learning approaches for various MRI problems have been extensively studied in recent years. Early deep learning studies for MRI reconstruction and enhancement were mostly based on image-domain learning. However, because the MR signal is acquired in the k-space domain, researchers have demonstrated that deep neural networks can be designed directly in k-space to exploit the physics of MR acquisition. In this chapter, recent trends in k-space deep learning for MRI reconstruction and artifact removal are reviewed. First, scan-specific k-space learning, which is inspired by parallel MRI, is covered. Then we provide an overview of data-driven k-space learning. Subsequently, unsupervised learning for MRI reconstruction and motion artifact removal is discussed.
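The motivation for working directly in k-space can be illustrated with a minimal 1-D sketch (an illustrative toy, not a method from the chapter): because MR data are acquired in the Fourier domain, undersampling k-space and zero-filling produces coherent aliasing in the image domain, which is the artifact that the reviewed reconstruction networks must remove.

```python
import numpy as np

# Simple 1-D "image": a rectangular phantom
img = np.zeros(64)
img[24:40] = 1.0

kspace = np.fft.fft(img)                 # fully sampled k-space data

# Retain every other k-space line (2x uniform undersampling)
mask = np.zeros(64, dtype=bool)
mask[::2] = True
undersampled = np.where(mask, kspace, 0.0)

# Zero-filled reconstruction: aliasing appears as a shifted ghost copy
recon = np.real(np.fft.ifft(undersampled))
err = np.max(np.abs(recon - img))        # magnitude of the aliasing artifact
```

For 2x uniform undersampling, the zero-filled reconstruction is the average of the image and a copy shifted by half the field of view, so the ghost has exactly half the original amplitude.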