TY - JOUR
T1 - MGRL: Graph neural network based inference in a Markov network with reinforcement learning for visual navigation
T2 - Neurocomputing
AU - Lu, Yi
AU - Chen, Yaran
AU - Zhao, Dongbin
AU - Li, Dong
N1 - Publisher Copyright:
© 2020 Elsevier B.V.
PY - 2021/1/15
Y1 - 2021/1/15
N2 - Visual navigation is an essential task for indoor robots and usually relies on a map to provide the agent with global information. Because traditional maps are tied to specific environments, map-based and map-building-based navigation methods are limited in new environments where maps are unavailable. Although deep reinforcement learning navigation, a map-free technique, achieves satisfactory performance, it lacks interpretability and a global view of the environment. Therefore, we propose a novel abstract map for deep reinforcement learning navigation that offers better global relative position information and more reasonable interpretability. The abstract map is modeled as a Markov network that explicitly represents the regularity of object arrangement, which is shaped by human activities across different environments. In addition, a knowledge graph is used to initialize the structure of the Markov network, providing a prior structure for the model and reducing the difficulty of learning. A graph neural network is then adopted for probabilistic inference in the Markov network. Furthermore, the updates of the abstract map, including the knowledge graph structure and the parameters of the graph neural network, are combined into an end-to-end learning process trained by reinforcement learning. Finally, experiments in the AI2THOR framework and a physical environment show that our algorithm greatly improves the navigation success rate in new environments, confirming its good generalization.
AB - Visual navigation is an essential task for indoor robots and usually relies on a map to provide the agent with global information. Because traditional maps are tied to specific environments, map-based and map-building-based navigation methods are limited in new environments where maps are unavailable. Although deep reinforcement learning navigation, a map-free technique, achieves satisfactory performance, it lacks interpretability and a global view of the environment. Therefore, we propose a novel abstract map for deep reinforcement learning navigation that offers better global relative position information and more reasonable interpretability. The abstract map is modeled as a Markov network that explicitly represents the regularity of object arrangement, which is shaped by human activities across different environments. In addition, a knowledge graph is used to initialize the structure of the Markov network, providing a prior structure for the model and reducing the difficulty of learning. A graph neural network is then adopted for probabilistic inference in the Markov network. Furthermore, the updates of the abstract map, including the knowledge graph structure and the parameters of the graph neural network, are combined into an end-to-end learning process trained by reinforcement learning. Finally, experiments in the AI2THOR framework and a physical environment show that our algorithm greatly improves the navigation success rate in new environments, confirming its good generalization.
KW - Graph neural network
KW - Knowledge graph
KW - Markov network
KW - Probabilistic graph model
KW - Reinforcement learning
KW - Visual navigation
UR - http://www.scopus.com/inward/record.url?scp=85095687056&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2020.07.091
DO - 10.1016/j.neucom.2020.07.091
M3 - Article
AN - SCOPUS:85095687056
SN - 0925-2312
VL - 421
SP - 140
EP - 150
JO - Neurocomputing
JF - Neurocomputing
ER -