

  • James Edwards, Paul Fearnhead and Kevin Glazebrook

The knowledge gradient (KG) policy was originally proposed for online ranking and selection problems but has recently been adapted for use in online decision-making in general and multi-armed bandit problems (MABs) in particular. We study its use in a class of exponential family MABs and identify weaknesses, including a propensity to take actions which are dominated with respect to both exploitation and exploration. We propose variants of KG which avoid such errors. These new policies include an index heuristic, which deploys a KG approach to develop an approximation to the Gittins index. A numerical study shows this policy to perform well over a range of MABs, including those for which index policies are not optimal. While KG does not take dominated actions when bandits are Gaussian, it fails to be index consistent and, when arms are correlated, appears not to enjoy a performance advantage over competitor policies sufficient to compensate for its greater computational demands.
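To make the policy under discussion concrete, the following is a minimal sketch of the online KG decision rule for independent Gaussian arms, in the style of Ryzhov, Powell and Frazier: each arm is scored by its posterior mean plus the remaining horizon times the one-step value of information. The function name, signature, and parameterization here are illustrative assumptions, not the authors' implementation, and this Gaussian case is only one of the settings the paper analyses.

```python
import math


def kg_gaussian(mu, sigma, noise_sd, horizon_remaining):
    """One step of the online KG rule for independent Gaussian arms (sketch).

    mu, sigma       : posterior means and posterior std devs, one per arm
    noise_sd        : known observation-noise std dev
    horizon_remaining : number of pulls left after this one
    Returns the index of the arm to pull.
    """
    def f(z):
        # f(z) = z*Phi(z) + phi(z), the standard KG "expected improvement" term
        Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
        phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
        return z * Phi + phi

    scores = []
    for x in range(len(mu)):
        # sigma_tilde: predictive std dev of the change in the posterior mean
        # after one more observation of arm x
        sigma_tilde = sigma[x] ** 2 / math.sqrt(sigma[x] ** 2 + noise_sd ** 2)
        best_other = max(mu[i] for i in range(len(mu)) if i != x)
        # one-step knowledge gradient (value of information) for arm x
        nu = sigma_tilde * f(-abs(mu[x] - best_other) / sigma_tilde)
        # online KG score: immediate reward + horizon-weighted learning value
        scores.append(mu[x] + horizon_remaining * nu)
    return max(range(len(mu)), key=lambda x: scores[x])
```

With a long horizon the rule favours a highly uncertain arm even when its mean is lower (exploration); with no horizon left it reduces to pure exploitation of the highest posterior mean.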

Probability in the Engineering and Informational Sciences
  • ISSN: 0269-9648
  • EISSN: 1469-8951
  • URL: /core/journals/probability-in-the-engineering-and-informational-sciences