
Toll-based reinforcement learning for efficient equilibria in route choice

Published online by Cambridge University Press:  05 March 2020

Gabriel de O. Ramos
Affiliation:
Graduate Program in Applied Computing, Universidade do Vale do Rio dos Sinos, São Leopoldo, Brazil, e-mail: gdoramos@unisinos.br Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium, e-mails: roxana@ai.vub.ac.be, ann.nowe@ai.vub.ac.be
Bruno C. Da Silva
Affiliation:
Instituto de Informática, Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil, e-mails: bsilva@inf.ufrgs.br, bazzan@inf.ufrgs.br
Roxana Rădulescu
Affiliation:
Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium, e-mails: roxana@ai.vub.ac.be, ann.nowe@ai.vub.ac.be
Ana L. C. Bazzan
Affiliation:
Instituto de Informática, Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil, e-mails: bsilva@inf.ufrgs.br, bazzan@inf.ufrgs.br
Ann Nowé
Affiliation:
Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium, e-mails: roxana@ai.vub.ac.be, ann.nowe@ai.vub.ac.be

Abstract

Traffic congestion incurs numerous social and economic repercussions and has thus become a central issue in every major city in the world. In this work, we look at the transportation domain from a multiagent-system perspective, where every driver can be seen as an autonomous decision-making agent. We explore how learning approaches can help achieve an efficient outcome, even when agents interact in a competitive environment while sharing common resources. To this end, we consider the route choice problem, where self-interested drivers need to independently learn which routes minimise their expected travel costs. Such selfish behaviour results in the so-called user equilibrium, which is inefficient from the system’s perspective. In order to mitigate the impact of selfishness, we present Toll-based Q-learning (TQ-learning, for short). TQ-learning employs the idea of marginal-cost tolling (MCT), where each driver is charged according to the cost it imposes on others. The use of MCT leads agents to behave in a socially desirable way such that the system optimum is attainable. In contrast to previous works, however, our tolling scheme is distributed (i.e., each agent can compute its own toll), is charged a posteriori (i.e., at the end of each trip), and is fairer (i.e., agents pay exactly their marginal costs). Additionally, we provide a general formulation of the toll values for univariate, homogeneous polynomial cost functions. We present a theoretical analysis of TQ-learning, proving that it converges to a system-efficient equilibrium (i.e., an equilibrium aligned with the system optimum) in the limit. Furthermore, we perform an extensive empirical evaluation on realistic road networks to support our theoretical findings, showing that TQ-learning indeed converges to the optimum, which translates into an average reduction of congestion levels of 9.1%.
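The mechanism described in the abstract can be illustrated with a minimal, hypothetical sketch (not the paper's actual algorithm or networks): for a univariate homogeneous polynomial cost function c(f) = a·f^p, the marginal-cost toll is f·c'(f) = a·p·f^p, i.e. exactly p times the cost the driver itself experiences — which is what makes an a-posteriori, per-driver toll computation possible. The toy scenario below is a Pigou-style two-route network with simple stateless Q-learning; all names and parameter values are illustrative.

```python
import random

def mct_toll(a, p, f):
    # Marginal-cost toll for a univariate homogeneous polynomial cost
    # function c(f) = a * f**p: toll = f * c'(f) = a * p * f**p,
    # i.e. p times the cost the driver itself experiences.
    return a * p * f ** p

def simulate(n=20, episodes=4000, alpha=0.1, use_tolls=True, seed=0):
    """Toy two-route network: route 0 is congestible with cost f/n
    (a = 1/n, p = 1); route 1 has fixed cost 1. Without tolls, selfish
    learners crowd onto route 0 (user equilibrium); with marginal-cost
    tolls, the flow splits roughly evenly (near the system optimum)."""
    rng = random.Random(seed)
    # Stateless Q-learning: one Q-value per route per driver,
    # initialised pessimistically at the worst possible travel cost.
    Q = [[-1.0, -1.0] for _ in range(n)]
    tail = []  # average travel cost per episode over the final 10%
    for ep in range(episodes):
        eps = max(0.02, 1.0 - ep / (0.8 * episodes))  # decaying exploration
        acts = [rng.randrange(2) if rng.random() < eps
                else (0 if Q[i][0] >= Q[i][1] else 1)
                for i in range(n)]
        f = acts.count(0)  # flow on the congestible route
        ep_cost = 0.0
        for i, a in enumerate(acts):
            cost = f / n if a == 0 else 1.0
            ep_cost += cost
            # A-posteriori toll: computed after the trip from observed flow.
            toll = mct_toll(1.0 / n, 1, f) if (use_tolls and a == 0) else 0.0
            Q[i][a] += alpha * (-(cost + toll) - Q[i][a])
        if ep >= 0.9 * episodes:
            tail.append(ep_cost / n)
    return sum(tail) / len(tail)
```

Running `simulate(use_tolls=True)` yields a lower average travel cost (tolls excluded) than `simulate(use_tolls=False)`: each driver computes its own toll from its experienced cost and the p of the cost function, without any central coordinator — the distributed, a-posteriori property the abstract highlights.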

Information

Type
Adaptive and Learning Agents
Copyright
© Cambridge University Press 2020
