
Q-Table compression for reinforcement learning

Published online by Cambridge University Press:  04 December 2018

Leonardo Amado
Affiliation:
Pontifical Catholic University of Rio Grande do Sul, Av. Ipiranga 6681, Porto Alegre, RS, 90619-900, Brazil; e-mail leonardo.amado@acad.pucrs.br, felipe.meneguzzi@pucrs.br
Felipe Meneguzzi
Affiliation:
Pontifical Catholic University of Rio Grande do Sul, Av. Ipiranga 6681, Porto Alegre, RS, 90619-900, Brazil; e-mail leonardo.amado@acad.pucrs.br, felipe.meneguzzi@pucrs.br

Abstract

Reinforcement learning (RL) algorithms are often used to compute agents capable of acting in environments without prior knowledge of the environment dynamics. However, these algorithms struggle to converge in environments with large branching factors and the correspondingly large state spaces they induce. In this work, we develop an approach to compress the number of entries in a Q-value table using a deep auto-encoder, together with a set of techniques to mitigate the large-branching-factor problem. We present the application of these techniques in the scenario of a real-time strategy (RTS) game, where both the state space and the branching factor are problematic. We empirically evaluate an implementation of the technique to control agents in an RTS game scenario where classical RL fails, and we outline a number of possible avenues of further work on this problem.
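The core idea of the abstract can be sketched in a few lines: encode raw states into a small latent code so that many raw states share one compressed Q-table entry. The paper uses a deep auto-encoder; the sketch below substitutes a single tied-weight linear layer trained by gradient descent to stay short. All names, sizes, and the rounding-based bucketing are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not the paper's values).
n_states, state_dim, code_dim = 500, 32, 4

# Raw states: sparse binary feature vectors (e.g. a flattened RTS map).
states = (rng.random((n_states, state_dim)) < 0.2).astype(float)

# Tied-weight linear auto-encoder: encode with W, decode with W.T.
W = rng.normal(scale=0.1, size=(state_dim, code_dim))

def recon_error(W):
    # Mean squared error between states and their reconstructions.
    return np.mean((states @ W @ W.T - states) ** 2)

err_before = recon_error(W)
for _ in range(2000):
    # Plain gradient descent on the reconstruction loss.
    D = states @ W @ W.T - states              # reconstruction residual
    grad = 2.0 * (states.T @ D + D.T @ states) @ W / states.size
    W -= 0.05 * grad
err_after = recon_error(W)

# Bucket the latent codes: each unique rounded code indexes one row of
# the compressed Q-table, shared by every raw state mapping to it.
codes = np.round(states @ W)
compressed_keys = {tuple(c) for c in codes}
print(len(compressed_keys), "table entries for", n_states, "raw states")
```

In the paper's setting the encoder would be a deep network and the Q-values for each latent entry would still be learned by a standard RL update; this sketch only shows how an auto-encoder can shrink the index set of the table.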

Information

Type
Special Issue Contribution
Copyright
© Cambridge University Press, 2018 
