PointTPA: Test-Time Parameter Adaptation for 3D Scene Understanding
Abstract
Scene-level point cloud understanding remains challenging due to diverse geometries, imbalanced categories, and highly varied spatial layouts. Existing methods improve object-level performance but rely on static parameters during inference, limiting their adaptability to dynamic scene data. We propose Test-time Parameter Adaptation for Point Cloud Scene Perception (PointTPA), a test-time dynamic adaptation framework that constructs input-aware parameters for scene-level point clouds. PointTPA uses a Serialization-based Neighborhood Grouping (SNG) module to form locally coherent patches and a Dynamic Parameter Projector (DPP) to produce patch-wise adaptive weights, enabling the backbone to adjust its behavior to scene-specific variations while keeping parameter cost low. Integrated into PTv3, PointTPA reduces trainable parameters by over 95% and achieves performance competitive with or superior to full fine-tuning. It achieves 74.9% mIoU on S3DIS and consistently surpasses existing PEFT baselines across multiple benchmarks, highlighting the efficacy of test-time dynamic parameter generation for robust 3D scene understanding. The code will be available soon.
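The abstract does not specify implementations of SNG or DPP, but the general idea can be sketched under stated assumptions: SNG is approximated here by sorting points along a space-filling (Morton) curve and chunking the order into fixed-size patches, and DPP by pooling each patch into a descriptor that modulates a frozen base weight. The function names (`morton_key`, `serialize_and_group`, `adapt_and_apply`) and the per-channel scaling form of the adaptation are hypothetical illustrations, not the paper's method.

```python
import numpy as np

def morton_key(coords, bits=10):
    # Hypothetical serialization: quantize normalized xyz coords to a grid
    # and interleave their bits into a single Morton (Z-order) key.
    q = np.clip((coords * (2**bits - 1)).astype(np.int64), 0, 2**bits - 1)
    key = np.zeros(len(coords), dtype=np.int64)
    for b in range(bits):
        for axis in range(3):
            key |= ((q[:, axis] >> b) & 1) << (3 * b + axis)
    return key

def serialize_and_group(points, patch_size):
    # SNG-style grouping (assumed form): sort points by serialization key,
    # then split the ordering into contiguous patches of `patch_size` points.
    order = np.argsort(morton_key(points))
    n_pad = (-len(points)) % patch_size
    order = np.concatenate([order, order[:n_pad]])  # pad by wrapping indices
    return order.reshape(-1, patch_size)            # (num_patches, patch_size)

def adapt_and_apply(patch_feats, W_base, proj):
    # DPP-style adaptation (assumed form): pool each patch to a descriptor,
    # project it to a bounded per-output-channel scale, and apply that scale
    # to the frozen base weight's output -- a lightweight input-aware layer.
    desc = patch_feats.mean(axis=1)            # (P, C) patch descriptors
    scale = 1.0 + np.tanh(desc @ proj)         # (P, D) patch-wise modulation
    base_out = patch_feats @ W_base            # (P, k, D) frozen projection
    return base_out * scale[:, None, :]        # patch-adaptive output

rng = np.random.default_rng(0)
pts = rng.random((100, 3))                     # points normalized to [0, 1]
groups = serialize_and_group(pts, 16)          # (7, 16) after wrap-padding
feats = rng.standard_normal((groups.shape[0], 16, 8))
W_base = rng.standard_normal((8, 4))           # frozen backbone weight
proj = rng.standard_normal((8, 4)) * 0.1       # small trainable projector
out = adapt_and_apply(feats, W_base, proj)     # (7, 16, 4)
```

Only `proj` would be trainable in this sketch, which is consistent with the abstract's claim of a large reduction in trainable parameters relative to full fine-tuning, though the actual DPP parameterization may differ.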