TY - JOUR
T1 - Radar-Camera Fusion for Object Detection and Semantic Segmentation in Autonomous Driving
T2 - A Comprehensive Review
AU - Yao, Shanliang
AU - Guan, Runwei
AU - Huang, Xiaoyu
AU - Li, Zhuoxiao
AU - Sha, Xiangyu
AU - Yue, Yong
AU - Lim, Eng Gee
AU - Seo, Hyungjoon
AU - Man, Ka Lok
AU - Zhu, Xiaohui
AU - Yue, Yutao
N1 - Publisher Copyright:
IEEE
PY - 2023/8/20
Y1 - 2023/8/20
AB - Driven by deep learning techniques, perception technology in autonomous driving has developed rapidly in recent years, enabling vehicles to accurately detect and interpret the surrounding environment for safe and efficient navigation. To achieve accurate and robust perception capabilities, autonomous vehicles are often equipped with multiple sensors, making sensor fusion a crucial part of the perception system. Among these sensors, radars and cameras enable complementary and cost-effective perception of the surrounding environment regardless of lighting and weather conditions. This review aims to provide a comprehensive guideline for radar-camera fusion, concentrating in particular on the perception tasks of object detection and semantic segmentation. Based on the principles of the radar and camera sensors, we delve into the data processing pipeline and data representations, followed by an in-depth analysis and summary of radar-camera fusion datasets. In reviewing radar-camera fusion methodologies, we address the questions of “why to fuse”, “what to fuse”, “where to fuse”, “when to fuse”, and “how to fuse”, subsequently discussing various challenges and potential research directions within this domain. To ease the retrieval and comparison of datasets and fusion methods, we also provide an interactive website: https://radar-camera-fusion.github.io.
KW - Autonomous driving
KW - Cameras
KW - object detection
KW - Radar
KW - Radar antennas
KW - Radar cross-sections
KW - Radar imaging
KW - radar-camera fusion
KW - semantic segmentation
KW - Sensors
KW - Tensors
UR - http://www.scopus.com/inward/record.url?scp=85168687796&partnerID=8YFLogxK
U2 - 10.1109/TIV.2023.3307157
DO - 10.1109/TIV.2023.3307157
M3 - Article
AN - SCOPUS:85168687796
SN - 2379-8858
VL - 9
SP - 1
EP - 40
JO - IEEE Transactions on Intelligent Vehicles
JF - IEEE Transactions on Intelligent Vehicles
IS - 1
ER -