Expected-mean gamma-incremental reinforcement learning algorithm for robot path planning

Chee Sheng Tan, Rosmiwati Mohd-Mokhtar*, Mohd Rizal Arshad

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)

Abstract

Recently, researchers have been extensively exploring the immense potential of Q-Star, yet available resources still lack comprehensive information on the topic. A Q-table, a simple lookup table of states and actions, is often regarded as a mere bookkeeping structure; this view overlooks the considerable knowledge that can be derived from it through visualization. The Q-learning algorithm uses this table to update values and determine the highest anticipated rewards for actions in each state. However, instead of relying solely on complex reward functions for algorithm development, leveraging the knowledge already present in the Q-table would be highly beneficial: incorporating this information into the algorithmic framework can reduce the need for intricate reward engineering. This paper proposes an expected-mean gamma-incremental Q approach to tackle the slow convergence of uninformed-search reinforcement learning (RL) algorithms and the issue of path optimality in path planning problems. The gamma-incremental RL method adjusts the weight of the future value according to the level of exploration: it enables the robot to receive preference feedback, either near-term or long-term reward, based on how frequently a state has been visited. Meanwhile, the expected-mean technique uses information about the robot's turning actions to update the Q-target. By consistently incorporating these insights from the Q-table, the algorithm gradually improves its use of the available information, resulting in more efficient decision-making. The experimental results indicate that the proposed algorithm accelerates convergence, outperforming baseline Q-learning by up to a factor of two. It addresses the challenge of robot path planning by prioritizing promising solutions, yielding near-optimal paths with higher total rewards and enhanced learning stability.
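The abstract's core idea, a discount factor that grows with how often a state has been visited, can be sketched as a small Q-learning variant. Everything below (the linear visit-count schedule, the `gamma_min`/`gamma_max`/`gamma_step` parameters, the grid size) is an illustrative assumption, not the authors' exact formulation:

```python
import numpy as np

# Illustrative sketch of a visit-dependent ("gamma-incremental") Q-update:
# rarely visited states use a small discount (emphasizing near-term reward),
# while well-explored states use a larger discount (long-term reward).
# The schedule and all parameter values are assumptions for demonstration.

n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))
visits = np.zeros(n_states, dtype=int)

alpha = 0.1                      # learning rate
gamma_min, gamma_max = 0.5, 0.99 # discount bounds (assumed)
gamma_step = 0.05                # increment per visit (assumed schedule)

def update(s, a, r, s_next):
    """One Q-learning step with a visit-count-dependent discount factor."""
    visits[s] += 1
    # Discount grows with exploration of state s, capped at gamma_max.
    gamma = min(gamma_max, gamma_min + gamma_step * visits[s])
    target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

# Example step: from state 0, take action 1, receive reward 1.0, reach state 2.
update(0, 1, 1.0, 2)
```

On the first visit to a state the agent effectively behaves greedily toward immediate reward; as the visit count rises, the target increasingly weighs the successor state's value, which matches the abstract's description of preference feedback shifting from near-term to long-term reward.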

Original language: English
Article number: 123539
Journal: Expert Systems with Applications
Volume: 249
Publication status: Published - 1 Sept 2024
Externally published: Yes

Keywords

  • Multi-objective
  • Path planning
  • Q-Star
  • Reinforcement learning

