
The Principles of Deep Learning Theory
An Effective Theory Approach to Understanding Neural Networks

£59.99
Hardback

Other available formats:
eBook



Description
  • This textbook establishes a theoretical framework for understanding deep learning models of practical relevance. With an approach that borrows from theoretical physics, Roberts and Yaida provide clear and pedagogical explanations of how realistic deep neural networks actually work. To make results from the theoretical forefront accessible, the authors eschew the subject's traditional emphasis on intimidating formality without sacrificing accuracy. Straightforward and approachable, this volume balances detailed first-principle derivations of novel results with insight and intuition for theorists and practitioners alike. This self-contained textbook is ideal for students and researchers interested in artificial intelligence with minimal prerequisites of linear algebra, calculus, and informal probability theory, and it can easily fill a semester-long course on deep learning theory. For the first time, the exciting practical advances in modern artificial intelligence capabilities can be matched with a set of effective principles, providing a timeless blueprint for theoretical research in deep learning.

    • Detailed step-by-step explanations for all equations and clear exposition of both old and new concepts in deep learning theory make the book accessible to readers with minimal prerequisites of linear algebra, calculus, and informal probability theory
    • Many novel results that appear for the first time in the literature, taking readers to the forefront of deep learning theory
    • Provides a unique approach that bridges deep learning and theoretical physics, demonstrating to the ML community how a theoretical physics approach can be useful, while also teaching techniques that are valuable for theoretical physicists

    Reviews & endorsements

    'In the history of science and technology, the engineering artifact often comes first: the telescope, the steam engine, digital communication. The theory that explains its function and its limitations often appears later: the laws of refraction, thermodynamics, and information theory. With the emergence of deep learning, AI-powered engineering wonders have entered our lives — but our theoretical understanding of the power and limits of deep learning is still partial. This is one of the first books devoted to the theory of deep learning, and lays out the methods and results from recent theoretical approaches in a coherent manner.' Yann LeCun, New York University and Chief AI Scientist at Meta

    'For a physicist, it is very interesting to see deep learning approached from the point of view of statistical physics. This book provides a fascinating perspective on a topic of increasing importance in the modern world.' Edward Witten, Institute for Advanced Study

    'This is an important book that contributes big, unexpected new ideas for unraveling the mystery of deep learning's effectiveness, in unusually clear prose. I hope it will be read and debated by experts in all the relevant disciplines.' Scott Aaronson, University of Texas at Austin

    'It is not an exaggeration to say that the world is being revolutionized by deep learning methods for AI. But why do these deep networks work? This book offers an approach to this problem through the sophisticated tools of statistical physics and the renormalization group. The authors provide an elegant guided tour of these methods, interesting for experts and non-experts alike. They write with clarity and even moments of humor. Their results, many presented here for the first time, are the first steps in what promises to be a rich research program, combining theoretical depth with practical consequences.' William Bialek, Princeton University

    'This book's physics-trained authors have made a cool discovery, that feature learning depends critically on the ratio of depth to width in the neural net.' Gilbert Strang, Massachusetts Institute of Technology

    'An excellent resource for graduate students focusing on neural networks and machine learning … Highly recommended.' J. Brzezinski, Choice

    'The book is a joy and a challenge to read at the same time. … The joy is in gaining a much deeper understanding of deep learning (pun intended) and in savoring the authors' subtle humor, with physics undertones. … In a field where research and practice largely overlap, this is an important book for any professional.' Bogdan Hoanca, Optics and Photonics News


    Product details

    • Date Published: May 2022
    • Format: Hardback
    • ISBN: 9781316519332
    • Length: 472 pages
    • Dimensions: 261 x 184 x 26 mm
    • Weight: 1.06 kg
    • Availability: Available
  • Table of Contents

    Preface
    0. Initialization
    1. Pretraining
    2. Neural networks
    3. Effective theory of deep linear networks at initialization
    4. RG flow of preactivations
    5. Effective theory of preactivations at initialization
    6. Bayesian learning
    7. Gradient-based learning
    8. RG flow of the neural tangent kernel
    9. Effective theory of the NTK at initialization
    10. Kernel learning
    11. Representation learning
    ∞. The end of training
    ε. Epilogue
    A. Information in deep learning
    B. Residual learning
    References
    Index.

  • Resources for The Principles of Deep Learning Theory, by Daniel A. Roberts and Sho Yaida

    General Resources


    This title is supported by one or more locked resources. Access to locked resources is granted exclusively by Cambridge University Press to lecturers whose faculty status has been verified. To gain access to locked resources, lecturers should sign in to or register for a Cambridge user account.

    Please use locked resources responsibly and exercise your professional discretion when choosing how you share these materials with your students. Other lecturers may wish to use locked resources for assessment purposes and their usefulness is undermined when the source files (for example, solution manuals or test banks) are shared online or via social networks.

    Supplementary resources are subject to copyright. Lecturers are permitted to view, print or download these resources for use in their teaching, but may not change them or use them for commercial gain.

    If you are having problems accessing these resources please contact lecturers@cambridge.org.

  • Authors

    Daniel A. Roberts, Massachusetts Institute of Technology
    Daniel A. Roberts was cofounder and CTO of Diffeo, an AI company acquired by Salesforce; a research scientist at Facebook AI Research; and a member of the School of Natural Sciences at the Institute for Advanced Study in Princeton, NJ. He was a Hertz Fellow, earning a PhD from MIT in theoretical physics, and was also a Marshall Scholar at Cambridge and Oxford Universities.

    Sho Yaida, Meta AI
    Sho Yaida is a research scientist at Meta AI. Prior to joining Meta AI, he obtained his PhD in physics at Stanford University and held postdoctoral positions at MIT and at Duke University. At Meta AI, he uses tools from theoretical physics to understand neural networks, the topic of this book.

    With contributions by

    Boris Hanin, Princeton University, New Jersey
    Boris Hanin is an Assistant Professor at Princeton University in the Operations Research and Financial Engineering Department. Prior to joining Princeton in 2020, Boris was an Assistant Professor at Texas A&M in the Math Department and an NSF postdoc at MIT. He has taught graduate courses on the theory and practice of deep learning at both Texas A&M and Princeton.

Screenshot of the Meta AI blog post featuring Sho Yaida's article

Media coverage of the book courtesy of SiliconANGLE
