
Reinforcement learning with modified exploration strategy for mobile robot path planning

Published online by Cambridge University Press:  11 May 2023

Nesrine Khlif*
Affiliation:
Laboratory of Robotics, Informatics and Complex Systems (RISC lab - LR16ES07), ENIT, University of Tunis EL Manar, Le BELVEDERE, Tunis, Tunisia
Khraief Nahla
Affiliation:
Laboratory of Robotics, Informatics and Complex Systems (RISC lab - LR16ES07), ENIT, University of Tunis EL Manar, Le BELVEDERE, Tunis, Tunisia
Belghith Safya
Affiliation:
Laboratory of Robotics, Informatics and Complex Systems (RISC lab - LR16ES07), ENIT, University of Tunis EL Manar, Le BELVEDERE, Tunis, Tunisia
Corresponding author: Nesrine Khlif; Email: nesrine.khlif@etudiant-enit.utm.tn

Abstract

Despite the remarkable developments of recent years, path planning remains a difficult part of mobile robot navigation. Applying artificial intelligence to mobile robotics is itself a distinct challenge, and reinforcement learning (RL) is one of the most widely used algorithms in robotics. The exploration-exploitation dilemma is a central challenge for the performance of RL algorithms: too much exploration reduces the cumulative reward, while too much exploitation can trap the agent in a local optimum. This paper proposes a new path planning method for mobile robots based on Q-learning with an improved exploration strategy. In addition, a comparative study of the Boltzmann distribution and $\epsilon$-greedy policies is presented. Simulations confirm the better performance of the proposed method in terms of execution time, path length, and cost function.
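To make the compared exploration strategies concrete, the sketch below shows standard tabular Q-learning with both $\epsilon$-greedy and Boltzmann (softmax) action selection on a toy one-dimensional corridor. This is an illustrative example of the two generic strategies named in the abstract, not the authors' proposed modified strategy; the environment, hyperparameters, and reward shaping are assumptions chosen for the sketch.

```python
import numpy as np

def epsilon_greedy(q_row, epsilon, rng):
    """Pick a random action with probability epsilon, else the greedy one."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_row)))
    return int(np.argmax(q_row))

def boltzmann(q_row, tau, rng):
    """Sample an action from the softmax (Boltzmann) distribution over Q-values."""
    prefs = q_row / tau
    prefs -= prefs.max()                        # numerical stability
    probs = np.exp(prefs) / np.exp(prefs).sum()
    return int(rng.choice(len(q_row), p=probs))

def q_learning(select, n_episodes=300, alpha=0.1, gamma=0.95, seed=0):
    """Tabular Q-learning on a 1-D corridor: states 0..5, goal at state 5."""
    n_states, n_actions, goal = 6, 2, 5          # actions: 0 = left, 1 = right
    rng = np.random.default_rng(seed)
    q = np.zeros((n_states, n_actions))
    for _ in range(n_episodes):
        s = 0
        while s != goal:
            a = select(q[s], rng)
            s2 = max(0, s - 1) if a == 0 else min(goal, s + 1)
            r = 1.0 if s2 == goal else -0.01     # small per-step cost
            q[s, a] += alpha * (r + gamma * q[s2].max() - q[s, a])
            s = s2
    return q

q_eps = q_learning(lambda row, rng: epsilon_greedy(row, 0.1, rng))
q_soft = q_learning(lambda row, rng: boltzmann(row, 0.5, rng))
# Both strategies learn to prefer moving right toward the goal in every state.
```

In $\epsilon$-greedy the exploration rate is fixed regardless of how the Q-values compare, whereas the Boltzmann rule explores in proportion to the relative value estimates (modulated by the temperature $\tau$), which is exactly the trade-off the abstract's comparative study addresses.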

Information

Type
Research Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press
