TY - JOUR
T1 - Expected-mean gamma-incremental reinforcement learning algorithm for robot path planning
AU - Tan, Chee Sheng
AU - Mohd-Mokhtar, Rosmiwati
AU - Arshad, Mohd Rizal
N1 - Publisher Copyright:
© 2024 Elsevier Ltd
PY - 2024/9/1
Y1 - 2024/9/1
N2 - Recently, researchers have been extensively exploring the immense potential of Q-Star, yet the available resources lack comprehensive information on this topic. The Q-table, a simple lookup table of states and actions, is often regarded as a mere bookkeeping data structure, which overlooks the wealth of knowledge that can be derived from it through visualization. The Q-learning algorithm uses this table to update values and determine the highest expected reward for each action in every state. Rather than relying solely on complex reward functions for algorithm development, however, it would be highly beneficial to leverage the knowledge already present in the Q-table: incorporating this information into the algorithmic framework can reduce the need to design intricate reward functions. This paper proposes an expected-mean gamma-incremental Q approach to tackle the slow convergence of an uninformed-search reinforcement learning (RL) algorithm and the issue of path optimality in path planning problems. The gamma-incremental RL method adjusts the weight of the future value according to the level of exploration, allowing the robot to receive preference feedback, either a near-term or a long-term reward, based on how frequently a state has been visited. Meanwhile, the expected-mean technique uses information about the robot's turning actions to update the Q-target. By consistently incorporating these insights from the Q-table, the algorithm gradually improves its use of the available information, resulting in more efficient decision-making. The experimental results indicate that the proposed algorithm accelerates convergence, outperforming baseline Q-learning by up to a factor of two. It addresses the challenge of robot path planning by prioritizing promising solutions, yielding near-optimal outcomes with higher total rewards and enhanced learning stability.
AB - Recently, researchers have been extensively exploring the immense potential of Q-Star, yet the available resources lack comprehensive information on this topic. The Q-table, a simple lookup table of states and actions, is often regarded as a mere bookkeeping data structure, which overlooks the wealth of knowledge that can be derived from it through visualization. The Q-learning algorithm uses this table to update values and determine the highest expected reward for each action in every state. Rather than relying solely on complex reward functions for algorithm development, however, it would be highly beneficial to leverage the knowledge already present in the Q-table: incorporating this information into the algorithmic framework can reduce the need to design intricate reward functions. This paper proposes an expected-mean gamma-incremental Q approach to tackle the slow convergence of an uninformed-search reinforcement learning (RL) algorithm and the issue of path optimality in path planning problems. The gamma-incremental RL method adjusts the weight of the future value according to the level of exploration, allowing the robot to receive preference feedback, either a near-term or a long-term reward, based on how frequently a state has been visited. Meanwhile, the expected-mean technique uses information about the robot's turning actions to update the Q-target. By consistently incorporating these insights from the Q-table, the algorithm gradually improves its use of the available information, resulting in more efficient decision-making. The experimental results indicate that the proposed algorithm accelerates convergence, outperforming baseline Q-learning by up to a factor of two. It addresses the challenge of robot path planning by prioritizing promising solutions, yielding near-optimal outcomes with higher total rewards and enhanced learning stability.
KW - Multi-objective
KW - Path planning
KW - Q-Star
KW - Reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85185838225&partnerID=8YFLogxK
U2 - 10.1016/j.eswa.2024.123539
DO - 10.1016/j.eswa.2024.123539
M3 - Article
AN - SCOPUS:85185838225
SN - 0957-4174
VL - 249
JO - Expert Systems with Applications
JF - Expert Systems with Applications
M1 - 123539
ER -