Control Systems and Reinforcement Learning
£49.99
- Author: Sean Meyn, University of Florida
- Date Published: June 2022
- Availability: In stock
- Format: Hardback
- ISBN: 9781316511961
Other available formats:
eBook
Looking for an inspection copy?
This title is not currently available on inspection
A high school student can create deep Q-learning code to control her robot, without any understanding of the meaning of 'deep' or 'Q', or why the code sometimes fails. This book is designed to explain the science behind reinforcement learning and optimal control in a way that is accessible to students with a background in calculus and matrix algebra. A unique focus is algorithm design to obtain the fastest possible speed of convergence for learning algorithms, along with insight into why reinforcement learning sometimes fails. Advanced stochastic process theory is avoided at the start by replacing random exploration with more intuitive deterministic probing for learning. Once these ideas are understood, it is not difficult to master techniques rooted in stochastic control. These topics are covered in the second part of the book, starting with Markov chain theory and ending with a fresh look at actor-critic methods for reinforcement learning.
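As a concrete picture of the 'Q' in the opening sentence, here is a minimal tabular Q-learning sketch in Python. The five-state chain, the reward, and the parameter values are hypothetical choices made for brevity; this is not code from the book, only an illustration of the Q-function and the recursive update whose convergence properties the book analyzes.

```python
# Minimal tabular Q-learning sketch (illustrative only; not code from the book).
# The five-state chain, rewards, and parameter values are assumptions chosen for brevity.
import numpy as np

n_states, n_actions = 5, 2        # states 0..4; actions: 0 = left, 1 = right
goal = n_states - 1               # reaching state 4 ends the episode

def step(s, a):
    """Hypothetical dynamics: move left or right, reward 1 on reaching the goal."""
    s_next = max(0, s - 1) if a == 0 else min(goal, s + 1)
    return s_next, float(s_next == goal), s_next == goal

rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))   # Q(s, a): estimated discounted return
alpha, gamma, eps = 0.1, 0.95, 0.1    # step size, discount factor, exploration rate

for _ in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection, breaking ties at random
        if rng.random() < eps:
            a = int(rng.integers(n_actions))
        else:
            a = int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))
        s_next, r, done = step(s, a)
        # Q-learning update: move Q(s, a) toward the one-step Bellman target
        target = r + (0.0 if done else gamma * Q[s_next].max())
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next

print("greedy policy (0 = left, 1 = right):", np.argmax(Q[:goal], axis=1))
```

Replacing the table Q with a neural network is what makes the algorithm 'deep'; the questions of step-size choice and convergence raised in the description apply to this recursion and to its function-approximation variants.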
- Presents optimal control as an accessible path to understanding the goals and behavior of current reinforcement learning techniques
- Focuses on the ODE method to provide a large toolbox for algorithm design, methods to estimate the speed of learning, and insight into why algorithms sometimes fail (see the sketch after this list)
- Contains summaries of most reinforcement learning algorithms, and worked examples to guide the choice of 'meta parameters' that appear in each of these recursive algorithms
- Over 100 exercises, theoretical and computational, illustrate key concepts and applications
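The ODE method highlighted above associates a recursive learning algorithm with a mean-flow differential equation. The display below is a sketch of the standard formulation, in generic notation (parameter estimate $\theta_n$, step size $\alpha_{n+1}$, observation $\Phi_{n+1}$), not a quotation of the book's development.

```latex
% Stochastic approximation recursion and its associated mean-flow ODE
\[
  \theta_{n+1} = \theta_n + \alpha_{n+1}\, f(\theta_n, \Phi_{n+1}),
  \qquad
  \frac{d}{dt}\vartheta_t = \bar f(\vartheta_t),
  \qquad
  \bar f(\theta) = \mathrm{E}\bigl[ f(\theta, \Phi) \bigr].
\]
```

Convergence of the recursion is tied to stability of the ODE, and estimates of the speed of learning are obtained from the linearization of $\bar f$ at its equilibrium; this is the sense in which the ODE method yields both design tools and convergence-rate estimates.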
Reviews & endorsements
'Control Systems and Reinforcement Learning is a densely packed book with a vivid, conversational style. It speaks both to computer scientists interested in learning about the tools and techniques of control engineers and to control engineers who want to learn about the unique challenges posed by reinforcement learning and how to address these challenges. The author, a world-class researcher in control and probability theory, is not afraid of strong and perhaps controversial opinions, making the book entertaining and attractive for open-minded readers. Everyone interested in the "why" and "how" of RL will use this gem of a book for many years to come.' Csaba Szepesvári, Canada CIFAR AI Chair, University of Alberta, and Head of the Foundations Team at DeepMind
'This book is a wild ride, from the elements of control through to bleeding-edge topics in reinforcement learning. Aimed at graduate students and very good undergraduates who are willing to invest some effort, the book is a lively read and an important contribution.' Shane G. Henderson, Charles W. Lake, Jr. Chair in Productivity, Cornell University
'Reinforcement learning, now the de facto workhorse powering most AI-based algorithms, has deep connections with optimal control and dynamic programming. Meyn explores these connections in a marvelous manner and uses them to develop fast, reliable iterative algorithms for solving RL problems. This excellent, timely book from a leading expert on stochastic optimal control and approximation theory is a must-read for all practitioners in this active research area.' Panagiotis Tsiotras, David and Andrew Lewis Chair and Professor, Guggenheim School of Aerospace Engineering, Georgia Institute of Technology
Customer reviews
Not yet reviewed
Product details
- Date Published: June 2022
- Format: Hardback
- ISBN: 9781316511961
- Length: 450 pages
- Dimensions: 260 x 180 x 26 mm
- Weight: 1.04 kg
- Availability: In stock
Table of Contents
1. Introduction
Part I. Fundamentals Without Noise:
2. Control crash course
3. Optimal control
4. ODE methods for algorithm design
5. Value function approximations
Part II. Reinforcement Learning and Stochastic Control:
6. Markov chains
7. Stochastic control
8. Stochastic approximation
9. Temporal difference methods
10. Setting the stage, return of the actors
A. Mathematical background
B. Markov decision processes
C. Partial observations and belief states
References
Glossary of Symbols and Acronyms
Index
General Resources
Find resources associated with this title
This title is supported by one or more locked resources. Access to locked resources is granted exclusively by Cambridge University Press to lecturers whose faculty status has been verified. To gain access to locked resources, lecturers should sign in to or register for a Cambridge user account.
Please use locked resources responsibly and exercise your professional discretion when choosing how you share these materials with your students. Other lecturers may wish to use locked resources for assessment purposes and their usefulness is undermined when the source files (for example, solution manuals or test banks) are shared online or via social networks.
Supplementary resources are subject to copyright. Lecturers are permitted to view, print or download these resources for use in their teaching, but may not change them or use them for commercial gain.
If you are having problems accessing these resources please contact lecturers@cambridge.org.