
Limits and limitations of no-regret learning in games

Published online by Cambridge University Press:  13 October 2017

Barnabé Monnot
Affiliation: Singapore University of Technology and Design, Engineering Systems & Design Pillar, 8 Somapah Road, Singapore 487372. E-mail: monnot_barnabe@mymail.sutd.edu.sg

Georgios Piliouras
Affiliation: Singapore University of Technology and Design, Engineering Systems & Design Pillar, 8 Somapah Road, Singapore 487372. E-mail: georgios@sutd.edu.sg

Abstract

We study the limit behavior and performance of no-regret dynamics in general game-theoretic settings. We design protocols that achieve both good regret and equilibration guarantees in general games, and we establish a strong equivalence between these dynamics and coarse correlated equilibria (CCE). We then examine structured game settings where stronger properties can be established for no-regret dynamics and CCE. In congestion games with non-atomic agents (each contributing a fraction of the flow), as the individual flow of agents decreases, CCE become closely concentrated around the unique equilibrium flow of the non-atomic game. Moreover, we compare best/worst-case no-regret learning behavior with best/worst-case Nash equilibrium (NE) in small games. We prove analytical bounds on these inefficiency ratios for 2×2 games and show that they are unbounded for larger games. Experimentally, we sample normal-form games and compute their measures of inefficiency. The ratio distribution has sharp decay: most sampled games have small ratios. The two ratios also exhibit strong anti-correlation; that is, games with large improvements from the best NE to the best CCE show only small degradation from the worst NE to the worst CCE.
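To make the connection between no-regret play and CCE concrete, the following is a minimal sketch of multiplicative weights (a standard no-regret algorithm, not necessarily the protocol designed in the paper) in self-play on a 2×2 game. The payoff matrix, step size, and horizon are illustrative assumptions. The time-averaged external regret of each player vanishes as the horizon grows, and the empirical joint distribution of play approximates a coarse correlated equilibrium.

```python
import numpy as np

# Illustrative symmetric 2x2 game: row player's payoffs; the column
# player's payoffs are given by the transpose.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])

def run_mw(A, T=5000, eta=0.05):
    """Multiplicative weights in self-play. Returns the row player's
    time-averaged external regret and the empirical joint distribution
    of play (an approximate coarse correlated equilibrium)."""
    wx = np.ones(2)                  # row player's weights
    wy = np.ones(2)                  # column player's weights
    joint = np.zeros((2, 2))         # accumulated joint play distribution
    cum_payoff = 0.0                 # row player's realized expected payoff
    cum_action_payoff = np.zeros(2)  # hindsight payoff of each fixed row action
    for _ in range(T):
        x = wx / wx.sum()
        y = wy / wy.sum()
        joint += np.outer(x, y)
        u_row = A @ y                # expected payoff of each row action vs y
        u_col = A.T @ x              # column player's payoffs (transposed game)
        cum_payoff += x @ u_row
        cum_action_payoff += u_row
        wx *= np.exp(eta * u_row)    # exponential weight update
        wy *= np.exp(eta * u_col)
    avg_regret = (cum_action_payoff.max() - cum_payoff) / T
    return avg_regret, joint / T

regret, cce = run_mw(A)
# regret shrinks toward 0 as T grows; cce is a distribution over joint
# action profiles lying (approximately) in the set of CCE.
```

For this game, play converges quickly to the (0, 0) profile, so the average regret is already small at this horizon; the same no-regret guarantee holds against arbitrary opponents, which is why the empirical distribution lands near the CCE set regardless of convergence.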

Type
Adaptive and Learning Agents
Copyright
© Cambridge University Press, 2017 

