On the Identification and Mitigation of Weaknesses in the Knowledge Gradient Policy for Multi-Armed Bandits

James Edwards, Paul Fearnhead and Kevin Glazebrook
Abstract

The knowledge gradient (KG) policy was originally proposed for offline ranking and selection problems but has recently been adapted for use in online decision-making in general and multi-armed bandit problems (MABs) in particular. We study its use in a class of exponential family MABs and identify weaknesses, including a propensity to take actions which are dominated with respect to both exploitation and exploration. We propose variants of KG which avoid such errors. These new policies include an index heuristic, which deploys a KG approach to develop an approximation to the Gittins index. A numerical study shows this policy to perform well over a range of MABs, including those for which index policies are not optimal. While KG does not take dominated actions when bandits are Gaussian, it fails to be index consistent and, when arms are correlated, appears not to enjoy a performance advantage over competitor policies sufficient to compensate for its greater computational demands.
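
To make the online KG decision rule concrete: following the formulation of Ryzhov, Powell and Frazier (2012), at each step one pulls the arm maximizing its current posterior mean plus the remaining horizon times the one-step value of information gained by sampling that arm. The Python sketch below illustrates this for independent Beta-Bernoulli bandits, a member of the exponential family class studied here. It is a minimal illustration under standard modeling assumptions, not the authors' implementation; the function name kg_policy_bernoulli and the horizon handling are our own choices.

import numpy as np

def kg_policy_bernoulli(alpha, beta, steps_left):
    # Posterior means of each arm under independent Beta(alpha, beta) beliefs.
    mu = alpha / (alpha + beta)
    best = mu.max()
    scores = np.empty_like(mu)
    for a in range(len(mu)):
        other = np.delete(mu, a)
        other_best = other.max() if other.size else -np.inf
        # Posterior mean of arm a after observing one success / one failure.
        mu_up = (alpha[a] + 1.0) / (alpha[a] + beta[a] + 1.0)
        mu_down = alpha[a] / (alpha[a] + beta[a] + 1.0)
        # One-step KG factor: expected value of the best posterior mean after
        # one pull of arm a, minus the current best posterior mean.
        exp_new_best = (mu[a] * max(mu_up, other_best)
                        + (1.0 - mu[a]) * max(mu_down, other_best))
        nu_kg = exp_new_best - best
        # Online KG score: immediate expected reward plus a learning bonus
        # scaled by the number of pulls remaining after this one.
        scores[a] = mu[a] + steps_left * nu_kg
    return int(np.argmax(scores))

For example, with three arms under uniform Beta(1, 1) priors and 99 pulls remaining, kg_policy_bernoulli(np.ones(3), np.ones(3), 99) scores all arms equally and returns the first arm by tie-breaking; the weaknesses analysed in the paper arise from how this one-step look-ahead values (or fails to value) information in asymmetric states.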

Probability in the Engineering and Informational Sciences
  • ISSN: 0269-9648
  • EISSN: 1469-8951