SRGCD: Stability-Driven Region Growth Framework for 3D Change Detection
Abstract
With the growing accessibility of large-scale 3D point clouds from LiDAR and photogrammetric techniques, 3D change detection (3DCD) has become essential for understanding dynamic scenes. Existing methods typically formulate the task as segmentation, classifying each point independently as changed or unchanged. This point-wise treatment produces isolated misclassified noise points inside regions, while feature similarity near region borders causes boundary ambiguity; the severe class imbalance inherent to change detection further exacerbates both issues. To address these challenges, we propose SRGCD, a Stability-Driven Region Growth Framework that reformulates 3DCD as region growing rather than segmentation. Our key insight is that progressively expanding from highly confident seeds avoids the pitfalls of point-wise classification while naturally alleviating class imbalance. Specifically, we first apply strict constraints through a Mutual Geometric Consistency Prior to identify a minimal set of highly reliable unchanged seeds. From these seeds, Stability-Guided Controlled Attention modules progressively propagate stability from stable regions to neighboring uncertain points, allowing unchanged regions to grow layer by layer from interior cores toward boundaries. This coarse-to-fine growing process naturally forms coherent regions, suppressing isolated noise while yielding compact, well-defined boundaries. Extensive experiments on the synthetic Urb3DCD dataset and the real-world HKCD dataset demonstrate that SRGCD achieves state-of-the-art performance, significantly improving interior completeness and boundary compactness over existing methods.
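The seed-then-grow idea in the abstract can be illustrated with a generic region-growing routine on a k-nearest-neighbour graph. This is a minimal sketch, not the paper's actual method: the function name, the brute-force neighbour search, and the feature-distance threshold `tau` (standing in for the learned stability propagation) are all assumptions for illustration.

```python
import numpy as np
from collections import deque

def region_grow_unchanged(points, features, seed_mask, k=8, tau=0.5):
    """Grow an 'unchanged' region outward from high-confidence seeds.

    points:    (N, 3) point coordinates
    features:  (N, D) per-point descriptors (proxy for change features)
    seed_mask: (N,) bool, True for strictly selected unchanged seeds
    Returns a bool mask of points absorbed into the unchanged region.
    """
    # Brute-force k-nearest neighbours (fine for a small illustrative cloud).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(d2, axis=1)[:, 1:k + 1]  # skip self at column 0

    label = seed_mask.copy()
    frontier = deque(np.flatnonzero(seed_mask))
    while frontier:
        i = frontier.popleft()
        for j in nbrs[i]:
            # Absorb a neighbour only if its features are close to the
            # already-stable point's, so regions expand layer by layer
            # from interior cores toward boundaries.
            if not label[j] and np.linalg.norm(features[i] - features[j]) < tau:
                label[j] = True
                frontier.append(j)
    return label
```

On a toy cloud where the first six points share similar features and the rest differ sharply, growth from a single seed stops exactly at the feature discontinuity, which mirrors the boundary-compactness behaviour the abstract claims for progressive expansion.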