Bandit Algorithms

Hardback, c. $49.99
  • Description

    Decision-making in the face of uncertainty is a significant challenge in machine learning, and the multi-armed bandit model is a commonly used framework to address it. This comprehensive and rigorous introduction to the multi-armed bandit problem examines all the major settings, including stochastic, adversarial, and Bayesian frameworks. A focus on both mathematical intuition and carefully worked proofs makes this an excellent reference for established researchers and a helpful resource for graduate students in computer science, engineering, statistics, applied mathematics and economics. Linear bandits receive special attention as one of the most useful models in applications, while other chapters are dedicated to combinatorial bandits, ranking, non-stationary problems, Thompson sampling and pure exploration. The book ends with a peek into the world beyond bandits with an introduction to partial monitoring and learning in Markov decision processes.

    • Over 350 exercises, many with solutions, provide an excellent resource for a graduate course or self-study
    • Combines mathematical rigor with clear explanations of the intuition behind each technique
    • This self-contained text is both broad and complete

  • Product details

    • Publication planned for: September 2020
    • Format: Hardback
    • ISBN: 9781108486828
    • Dimensions: 247 x 174 mm
    • Availability: not yet published; available from September 2020
  • Table of Contents

    1. Introduction
    2. Foundations of Probability
    3. Stochastic Processes and Markov Chains
    4. Finite-Armed Stochastic Bandits
    5. Concentration of Measure
    6. The Explore-then-Commit Algorithm
    7. The Upper Confidence Bound Algorithm
    8. The Upper Confidence Bound Algorithm: Asymptotic Optimality
    9. The Upper Confidence Bound Algorithm: Minimax Optimality
    10. The Upper Confidence Bound Algorithm: Bernoulli Noise
    11. The Exp3 Algorithm
    12. The Exp3-IX Algorithm
    13. Lower Bounds: Basic Ideas
    14. Foundations of Information Theory
    15. Minimax Lower Bounds
    16. Asymptotic and Instance Dependent Lower Bounds
    17. High Probability Lower Bounds
    18. Contextual Bandits
    19. Stochastic Linear Bandits
    20. Confidence Bounds for Least Squares Estimators
    21. Optimal Design for Least Squares Estimators
    22. Stochastic Linear Bandits with Finitely Many Arms
    23. Stochastic Linear Bandits with Sparsity
    24. Minimax Lower Bounds for Stochastic Linear Bandits
    25. Asymptotic Lower Bounds for Stochastic Linear Bandits
    26. Foundations of Convex Analysis
    27. Exp3 for Adversarial Linear Bandits
    28. Follow the Regularized Leader and Mirror Descent
    29. The Relation Between Adversarial and Stochastic Linear Bandits
    30. Combinatorial Bandits
    31. Non-Stationary Bandits
    32. Ranking
    33. Pure Exploration
    34. Foundations of Bayesian Learning
    35. Bayesian Bandits
    36. Thompson Sampling
    37. Partial Monitoring
    38. Markov Decision Processes.
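    As a taste of the material covered in Chapters 7-10, here is a minimal, illustrative sketch of the Upper Confidence Bound (UCB1) policy on a finite-armed stochastic Bernoulli bandit. The function name and parameters are our own choices for illustration and are not taken from the book.

    ```python
    import math
    import random

    def ucb(means, horizon, seed=0):
        """Run an illustrative UCB1 policy on a Bernoulli bandit.

        `means` gives the (hidden) success probability of each arm;
        returns the total reward collected over `horizon` rounds.
        Sketch only -- not code from the book.
        """
        rng = random.Random(seed)
        k = len(means)
        counts = [0] * k      # number of plays per arm
        totals = [0.0] * k    # cumulative reward per arm
        reward = 0.0
        for t in range(1, horizon + 1):
            if t <= k:
                arm = t - 1   # play each arm once to initialise the estimates
            else:
                # empirical mean plus an exploration bonus that shrinks
                # as an arm accumulates plays
                arm = max(range(k),
                          key=lambda i: totals[i] / counts[i]
                          + math.sqrt(2 * math.log(t) / counts[i]))
            r = 1.0 if rng.random() < means[arm] else 0.0
            counts[arm] += 1
            totals[arm] += r
            reward += r
        return reward
    ```

    On a two-armed instance with a large gap, e.g. `ucb([0.9, 0.1], 1000)`, the policy quickly concentrates its plays on the better arm, so the total reward approaches what playing the best arm throughout would give; the book's early chapters make this intuition precise with regret bounds.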

  • Authors

    Tor Lattimore, DeepMind
    Tor Lattimore is a research scientist at DeepMind. His research is focused on decision making in the face of uncertainty, including bandit algorithms and reinforcement learning. Before joining DeepMind he was an assistant professor at Indiana University and a postdoctoral fellow at the University of Alberta.

    Csaba Szepesvári, University of Alberta
    Csaba Szepesvári is a Professor at the Department of Computing Science of the University of Alberta and a Principal Investigator of the Alberta Machine Intelligence Institute. He also leads the “Foundations” team at DeepMind. He has co-authored a book on nonlinear approximate adaptive controllers and authored a book on reinforcement learning, in addition to publishing over 200 journal and conference papers. He is an action editor of the Journal of Machine Learning Research.
