Poster
Zero-shot RGB-D Point Cloud Registration with Pre-trained Large Vision Model
Haobo Jiang · Jin Xie · Jian Yang · Liang Yu · Jianmin Zheng
This paper introduces a novel task, zero-shot RGB-D point cloud registration, which aims at robust 3D matching on in-the-wild data without any task-specific training. The task is both challenging and of high practical value. We present a zero-shot RGB-D matching framework, ZeroMatch, which leverages the pre-trained large-scale vision model Stable Diffusion to address this challenge. Our core idea is to exploit the powerful zero-shot image representation that Stable Diffusion acquires through extensive pre-training on large-scale data to enhance point-cloud geometric descriptors for robust matching. Specifically, we combine the handcrafted geometric descriptor FPFH with Stable Diffusion features to create point descriptors that are both locally and contextually aware, enabling reliable RGB-D registration with zero-shot capability. This design rests on our observation that Stable Diffusion features encode discriminative global contextual cues, naturally alleviating the feature ambiguity that FPFH often suffers in scenes with repetitive patterns or low overlap. To further enhance the cross-view consistency of Stable Diffusion features for matching, we propose a coupled-image input mode that concatenates the source and target images into a single input, replacing the original single-image mode. This yields both inter-image and prompt-to-image consistency attention, facilitating robust cross-view feature interaction and alignment. Finally, we use feature nearest neighbors to construct putative correspondences for hypothesize-and-verify transformation estimation. Extensive experiments on 3DMatch, ScanNet, and ScanLoNet verify the strong zero-shot matching ability of our method.
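As a rough illustration of the descriptor-fusion step, the sketch below computes FPFH with Open3D and concatenates it with a per-point sample of a Stable Diffusion feature map. The feature map `sd_feat_map` and the point-to-pixel lookup `pixel_uv` (derived from the RGB-D camera intrinsics) are assumed inputs for this sketch, not part of the paper's released code; the per-part L2 normalization is likewise an illustrative choice.

```python
import numpy as np
import open3d as o3d

def fpfh_descriptors(pcd, voxel=0.05):
    """Handcrafted local geometric descriptors (33-dim FPFH) via Open3D."""
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
    return np.asarray(fpfh.data).T                               # (N, 33)

def fused_descriptors(pcd, sd_feat_map, pixel_uv):
    """Concatenate per-point FPFH with Stable Diffusion features sampled
    at each point's RGB pixel.  `sd_feat_map` is an (H, W, C) feature map
    from a frozen Stable Diffusion backbone (extraction sketched separately),
    and `pixel_uv` is the (N, 2) integer pixel of each 3D point; both are
    assumptions of this sketch."""
    geo = fpfh_descriptors(pcd)                                  # (N, 33)
    ctx = sd_feat_map[pixel_uv[:, 1], pixel_uv[:, 0]]            # (N, C)
    # L2-normalize each part so neither descriptor dominates matching.
    geo /= np.linalg.norm(geo, axis=1, keepdims=True) + 1e-8
    ctx /= np.linalg.norm(ctx, axis=1, keepdims=True) + 1e-8
    return np.concatenate([geo, ctx], axis=1)                    # (N, 33+C)
```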
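The coupled-image input mode can be sketched, under assumptions, as running feature extraction once on the concatenated image pair so that the model's self-attention spans both views. Here `extract_sd_features` is a hypothetical stand-in for a frozen Stable Diffusion feature extractor (e.g., intermediate UNet activations upsampled to the input resolution), not the authors' implementation.

```python
import numpy as np

def coupled_sd_features(src_img, tgt_img, extract_sd_features):
    """Coupled-image mode: extract features once on the horizontally
    concatenated pair, so attention links the two views, then split the
    feature map back per image.  `extract_sd_features` is a hypothetical
    (H, W, 3) -> (H, W, C) callable; assumed, not the paper's code."""
    coupled = np.concatenate([src_img, tgt_img], axis=1)   # (H, 2W, 3)
    feat = coupled_feat = extract_sd_features(coupled)     # (H, 2W, C)
    w = src_img.shape[1]
    return feat[:, :w], feat[:, w:]                        # per-view maps
```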
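For the final stage, a minimal hypothesize-and-verify sketch using Open3D's feature-matching RANSAC (one standard estimator for this step, not necessarily the authors' exact one): mutual nearest neighbors in descriptor space form putative correspondences, and RANSAC keeps the rigid transform with the largest verified inlier set. The `voxel` scale is an assumed tuning parameter.

```python
import numpy as np
import open3d as o3d

def register(src_pcd, tgt_pcd, src_desc, tgt_desc, voxel=0.05):
    """Putative correspondences from mutual feature nearest neighbors,
    then hypothesize-and-verify (RANSAC) pose estimation with Open3D."""
    def to_feature(desc):                    # wrap an (N, D) array for Open3D
        f = o3d.pipelines.registration.Feature()
        f.data = desc.T.astype(np.float64)   # Open3D stores features as (D, N)
        return f
    reg = o3d.pipelines.registration
    result = reg.registration_ransac_based_on_feature_matching(
        src_pcd, tgt_pcd, to_feature(src_desc), to_feature(tgt_desc),
        mutual_filter=True,                        # keep only mutual NN matches
        max_correspondence_distance=voxel * 1.5,
        estimation_method=reg.TransformationEstimationPointToPoint(False),
        ransac_n=3,
        checkers=[reg.CorrespondenceCheckerBasedOnEdgeLength(0.9),
                  reg.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
        criteria=reg.RANSACConvergenceCriteria(100000, 0.999))
    return result.transformation               # 4x4 rigid transform
```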