GeCo: Geometry-Consistent Regularization for Domain Generalized Semantic Segmentation
Abstract
Vision Foundation Models (VFMs) provide rich and transferable representations through large-scale pretraining, yet their high-capacity representations remain underutilized when adapted to downstream tasks. In Domain Generalized Semantic Segmentation (DGSS), parameter-efficient fine-tuning (PEFT) often overfits adapters to source-domain statistics and seen-class boundaries, leading to representation degradation that manifests as domain bias and semantic rigidity. Existing regularization strategies alleviate this through random perturbations, but such operations disrupt the pretrained geometric structure, causing semantic drift and unstable generalization. We propose Geometry-Consistent Regularization (GeCo), which extrapolates the pretrained representation space toward the target task under structure-respecting constraints, thereby preserving the inherent generalization ability of VFMs while enhancing their task-specific adaptation. GeCo introduces a curvature-guided perturbation that modulates feature variation according to the local manifold complexity of the pretrained embedding space, enabling structure-aligned representation expansion. Complementarily, a geodesic-based regularization constrains prediction shifts along smooth, manifold-aligned trajectories, ensuring semantic continuity and stable decision behavior. Extensive experiments demonstrate that GeCo achieves superior generalization across both closed-set and open-set DGSS benchmarks.