TY - GEN
T1 - SPU-PMD: Self-Supervised Point Cloud Upsampling via Progressive Mesh Deformation
T2 - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024
AU - Liu, Yanzhe
AU - Chen, Rong
AU - Li, Yushi
AU - Li, Yixi
AU - Tan, Xuehou
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
AB - Despite the success of recent upsampling approaches, generating high-resolution point sets with uniform distribution and meticulous structures is still challenging. Unlike existing methods that only take the spatial information of the raw data into account, we regard point cloud upsampling as generating dense point clouds from deformable topology. Motivated by this, we present SPU-PMD, a self-supervised topological mesh deformation network for 3D densification. As a cascaded framework, our architecture is formulated by a series of coarse mesh interpolators and mesh deformers. At each stage, the mesh interpolator first produces the initial dense point clouds via mesh interpolation, which allows the model to better perceive the primitive topology. Meanwhile, the deformer infers the morphing by estimating the movements of mesh nodes and reconstructs the descriptive topology structure. By associating mesh deformation with feature expansion, this module progressively refines the surface uniformity and structural details of the point clouds. To demonstrate the effectiveness of the proposed method, extensive quantitative and qualitative experiments are conducted on synthetic and real-scanned 3D data. We also compare it with state-of-the-art techniques to further illustrate the superiority of our network. The project page is: https://github.com/lyz21/SPU-PMD.
KW - mesh deformation
KW - mesh interpolation
KW - point cloud upsampling
KW - self-supervised learning
KW - transformer
UR - http://www.scopus.com/inward/record.url?scp=85204185647&partnerID=8YFLogxK
U2 - 10.1109/CVPR52733.2024.00496
DO - 10.1109/CVPR52733.2024.00496
M3 - Conference Proceeding
AN - SCOPUS:85204185647
T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
SP - 5188
EP - 5197
BT - Proceedings - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024
PB - IEEE Computer Society
Y2 - 16 June 2024 through 22 June 2024
ER -