TY - GEN
T1 - A Hybrid Model for Object Detection Based on Feature-Level Camera-Radar Fusion in Autonomous Driving
AU - Jin, Yuhao
AU - Zhu, Xiaohui
AU - Yue, Yong
AU - Ma, Jieming
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Preventing car collisions through object detection has always been a major research direction in the field of autonomous driving. In recent years, camera-based object detection technology has achieved great success. However, its performance remains insufficient under poor lighting or adverse weather conditions. Therefore, the fusion of information from multiple sensors has become a new trend in object detection for autonomous driving. This paper proposes a hybrid object detection model that fuses millimeter-wave radar and camera data at the feature level. The model uses a traditional convolutional neural network to extract features from the data collected by the radar and camera, and performs multi-scale deep fusion. Subsequently, a multi-scale deformable attention module processes the fused feature maps for object detection. We tested this model on the nuScenes autonomous driving dataset, which includes night and rainy scenes. The hybrid model achieved a mean average precision (mAP) of 47.8%, which is 1.4% higher than that of the baseline object detection model.
AB - Preventing car collisions through object detection has always been a major research direction in the field of autonomous driving. In recent years, camera-based object detection technology has achieved great success. However, its performance remains insufficient under poor lighting or adverse weather conditions. Therefore, the fusion of information from multiple sensors has become a new trend in object detection for autonomous driving. This paper proposes a hybrid object detection model that fuses millimeter-wave radar and camera data at the feature level. The model uses a traditional convolutional neural network to extract features from the data collected by the radar and camera, and performs multi-scale deep fusion. Subsequently, a multi-scale deformable attention module processes the fused feature maps for object detection. We tested this model on the nuScenes autonomous driving dataset, which includes night and rainy scenes. The hybrid model achieved a mean average precision (mAP) of 47.8%, which is 1.4% higher than that of the baseline object detection model.
KW - Autonomous Driving
KW - Deep Learning
KW - Multi-Sensor Fusion
KW - Object Detection
UR - http://www.scopus.com/inward/record.url?scp=85174519925&partnerID=8YFLogxK
U2 - 10.1109/ICSP58490.2023.10248746
DO - 10.1109/ICSP58490.2023.10248746
M3 - Conference Proceeding
AN - SCOPUS:85174519925
T3 - 2023 8th International Conference on Intelligent Computing and Signal Processing, ICSP 2023
SP - 897
EP - 903
BT - 2023 8th International Conference on Intelligent Computing and Signal Processing, ICSP 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 8th International Conference on Intelligent Computing and Signal Processing, ICSP 2023
Y2 - 21 April 2023 through 23 April 2023
ER -