CLIPoint3D: Language-Grounded Few-Shot Unsupervised 3D Point Cloud Domain Adaptation
Mainak Singha ⋅ Sarthak Mehrotra ⋅ Paolo Casari ⋅ Subhasis Chaudhuri ⋅ Elisa Ricci ⋅ Biplab Banerjee
Abstract
Recent vision-language models (VLMs) such as CLIP demonstrate impressive cross-modal reasoning that extends beyond images to 3D perception. Yet these models remain fragile under domain shift, especially when adapting from synthetic to real-world point clouds. Conventional 3D domain adaptation approaches rely on heavy trainable encoders, yielding strong accuracy at the cost of efficiency. We introduce $\textbf{CLIPoint3D}$, the first framework for $\textit{few-shot unsupervised 3D point cloud domain adaptation}$ built upon CLIP. Our approach projects 3D samples into multiple depth maps and exploits a frozen CLIP backbone, refined through a knowledge-driven prompt tuning scheme that integrates high-level language priors with geometric cues from a lightweight 3D encoder. To adapt task-specific features effectively, we apply parameter-efficient fine-tuning to CLIP's encoders and design an entropy-guided view sampling strategy that selects confident projections. Furthermore, an optimal transport-based alignment loss and an uncertainty-aware prototype alignment loss jointly bridge source-target distribution gaps while maintaining class separability. Extensive experiments on the PointDA-10 and GraspNetPC-10 benchmarks show that $\textbf{CLIPoint3D}$ achieves consistent accuracy gains of 3–16% over both CLIP-based and conventional encoder-based baselines.
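To make the entropy-guided view sampling mentioned above concrete, the minimal PyTorch sketch below scores each projected depth map by the Shannon entropy of its CLIP class probabilities and keeps the most confident views. The function name, tensor shapes, and the top-$k$ selection rule are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def select_confident_views(view_logits: torch.Tensor, k: int) -> torch.Tensor:
    """Pick the k projected depth-map views whose CLIP predictions
    have the lowest Shannon entropy (i.e., the most confident views).

    view_logits: (V, C) tensor of class logits, one row per view.
    Returns a (k,) tensor of view indices.
    """
    probs = F.softmax(view_logits, dim=-1)                    # (V, C) per-view class probabilities
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=-1)  # (V,) Shannon entropy per view
    return torch.topk(-entropy, k=k).indices                  # negate so topk yields lowest entropy

# Example: keep the 4 most confident of 10 projected views
# (40 classes here is a placeholder, e.g., a ModelNet40-style label set).
logits = torch.randn(10, 40)
kept = select_confident_views(logits, k=4)
```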