Lane Change Decision-making through Deep Reinforcement Learning with Rule-based Constraints

Junjie Wang, Qichao Zhang, Dongbin Zhao, Yaran Chen

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding › peer-review

119 Citations (Scopus)

Abstract

Autonomous driving decision-making is a great challenge due to the complexity and uncertainty of the traffic environment. Combined with rule-based constraints, a Deep Q-Network (DQN)-based method is applied to the autonomous driving lane-change decision-making task in this study. Through the combination of high-level lateral decision-making and low-level rule-based trajectory modification, safe and efficient lane-change behavior can be achieved. With our state representation and reward function, the trained agent is able to take appropriate actions in a real-world-like simulator. The generated policy is evaluated on the simulator 10 times, and the results demonstrate that the proposed rule-based DQN method outperforms both the rule-based approach and the plain DQN method.
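To illustrate the idea of pairing a high-level DQN decision with a low-level rule-based check, the sketch below shows one possible structure. It is not the paper's implementation: the network size, state layout, action set, and the minimum-gap safety rule are all illustrative assumptions.

```python
# Hypothetical sketch of a DQN lane-change policy with a rule-based safety filter.
# Network layout, state features, and the gap threshold are assumptions for illustration.
import random
import torch
import torch.nn as nn

ACTIONS = ["keep_lane", "change_left", "change_right"]  # high-level lateral decisions

class QNetwork(nn.Module):
    """Small MLP mapping ego/surrounding-vehicle features to Q-values."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def rule_based_filter(action: str, gaps: dict, min_gap: float = 10.0) -> str:
    """Low-level rule-based constraint: veto a lane change when the free gap
    in the target lane is below a safety threshold (assumed rule)."""
    if action == "change_left" and gaps["left"] < min_gap:
        return "keep_lane"
    if action == "change_right" and gaps["right"] < min_gap:
        return "keep_lane"
    return action

def select_action(q_net: QNetwork, state: torch.Tensor, gaps: dict, eps: float = 0.05) -> str:
    """Epsilon-greedy DQN decision followed by the rule-based safety check."""
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        with torch.no_grad():
            action = ACTIONS[int(q_net(state).argmax().item())]
    return rule_based_filter(action, gaps)

if __name__ == "__main__":
    q_net = QNetwork(state_dim=8, n_actions=len(ACTIONS))
    state = torch.zeros(8)                    # placeholder ego + neighbour features
    gaps = {"left": 4.0, "right": 25.0}       # metres of free space per adjacent lane
    print(select_action(q_net, state, gaps))  # an unsafe left change would be vetoed here
```

In this sketch the rule-based layer only overrides the learned action; training of the Q-network itself would proceed with standard DQN updates on the simulator transitions.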

Original language: English
Title of host publication: 2019 International Joint Conference on Neural Networks, IJCNN 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781728119854
DOIs
Publication status: Published - Jul 2019
Event: 2019 International Joint Conference on Neural Networks, IJCNN 2019 - Budapest, Hungary
Duration: 14 Jul 2019 - 19 Jul 2019

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks
Volume: 2019-July

Conference

Conference: 2019 International Joint Conference on Neural Networks, IJCNN 2019
Country/Territory: Hungary
City: Budapest
Period: 14/07/19 - 19/07/19

Keywords

  • Decision-making
  • Deep Q-Network
  • Deep Reinforcement Learning
  • Lane Change
