TY - GEN
T1 - Image Radar Point Cloud Segmentation with Segment Anything Model
AU - Du, Yu
AU - Smith, Jeremy S.
AU - Man, Ka Lok
AU - Lim, Eng Gee
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Radar point clouds are a rich source of information for various applications. However, clustering or classifying radar point clouds is challenging due to their sparsity, noise, and ambiguity. In this paper, we propose a novel approach that leverages the Segment Anything Model (SAM), a segmentation model introduced by Meta AI that produces high-quality segmentation masks from 2D images, to predict 3D masks in image-radar point clouds. We extend SAM to handle 3D points indirectly: by associating point clouds with time-synchronized and calibrated image data, we first obtain masks from 2D images using SAM and then project the masks onto the 3D points. Based on these masks, we can cluster the radar point cloud and predict the remaining object parameters by incorporating other radar point cloud attributes. This also provides a convenient way to label radar point clouds for radar-only neural network development without supervision. Our approach is evaluated on a self-collected dataset, and the results demonstrate reasonable qualitative accuracy without any fine-tuning or training of SAM.
AB - Radar point clouds are a rich source of information for various applications. However, clustering or classifying radar point clouds is challenging due to their sparsity, noise, and ambiguity. In this paper, we propose a novel approach that leverages the Segment Anything Model (SAM), a segmentation model introduced by Meta AI that produces high-quality segmentation masks from 2D images, to predict 3D masks in image-radar point clouds. We extend SAM to handle 3D points indirectly: by associating point clouds with time-synchronized and calibrated image data, we first obtain masks from 2D images using SAM and then project the masks onto the 3D points. Based on these masks, we can cluster the radar point cloud and predict the remaining object parameters by incorporating other radar point cloud attributes. This also provides a convenient way to label radar point clouds for radar-only neural network development without supervision. Our approach is evaluated on a self-collected dataset, and the results demonstrate reasonable qualitative accuracy without any fine-tuning or training of SAM.
KW - Image radar
KW - Segment Anything Model
KW - traffic scenario
UR - http://www.scopus.com/inward/record.url?scp=85184817750&partnerID=8YFLogxK
U2 - 10.1109/ISOCC59558.2023.10396212
DO - 10.1109/ISOCC59558.2023.10396212
M3 - Conference Proceeding
AN - SCOPUS:85184817750
T3 - Proceedings - International SoC Design Conference 2023, ISOCC 2023
SP - 195
EP - 196
BT - Proceedings - International SoC Design Conference 2023, ISOCC 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 20th International SoC Design Conference, ISOCC 2023
Y2 - 25 October 2023 through 28 October 2023
ER -