Reinforcement Learning and Deep Learning Based Lateral Control for Autonomous Driving [Application Notes]

Dong Li*, Dongbin Zhao, Qichao Zhang, Yaran Chen

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

143 Citations (Scopus)

Abstract

This paper investigates vision-based autonomous driving with deep learning and reinforcement learning methods. Different from the end-to-end learning method, our method breaks the vision-based lateral control system down into a perception module and a control module. The perception module, which is based on a multi-task learning neural network, first takes a driver-view image as its input and predicts the track features. The control module, which is based on reinforcement learning, then makes a control decision based on these features. In order to improve data efficiency, we propose visual TORCS (VTORCS), a deep reinforcement learning environment based on the open racing car simulator (TORCS). By means of the provided functions, one can train an agent with the input of an image or various physical sensor measurements, or evaluate the perception algorithm on this simulator. The trained reinforcement learning controller outperforms the linear quadratic regulator (LQR) controller and the model predictive control (MPC) controller on different tracks. The experiments demonstrate that the perception module shows promising performance and that the controller is capable of keeping the vehicle driving well along the track center with visual input.
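To make the decoupled design concrete, the following is a minimal sketch (not the authors' implementation) of how a perception network feeding a reinforcement learning steering policy could be wired together. The network sizes, the choice of track features (lateral offset and heading error), and the use of PyTorch are all assumptions for illustration only.

```python
import torch
import torch.nn as nn


class PerceptionNet(nn.Module):
    """Multi-task CNN sketch: driver-view image -> track features."""

    def __init__(self, n_features: int = 2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Hypothetical feature head: e.g., lateral offset and heading error.
        self.head = nn.Linear(32, n_features)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(image))


class SteeringPolicy(nn.Module):
    """RL policy sketch: track features -> steering command in [-1, 1]."""

    def __init__(self, n_features: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Tanh(),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)


# One control step: image -> predicted track features -> steering command.
perception, policy = PerceptionNet(), SteeringPolicy()
image = torch.zeros(1, 3, 64, 64)      # placeholder driver-view frame
steering = policy(perception(image))   # would be sent to the simulator
```

The key design point illustrated here is that the policy never sees raw pixels; it only consumes the low-dimensional features produced by the perception module, which is what allows the two modules to be trained and evaluated separately.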

Original language: English
Article number: 8686348
Pages (from-to): 83-98
Number of pages: 16
Journal: IEEE Computational Intelligence Magazine
Volume: 14
Issue number: 2
DOIs
Publication status: Published - May 2019
