Hyperbolic Prototype Learning with Uncertainty-Aware Consistency for Continual Test-Time Segmentation
Abstract
Continual Test-Time Adaptation (CTTA) for semantic segmentation is vital for deploying vision models in dynamic environments with persistent domain shifts. Existing methods often degrade over time as self-supervised updates amplify early prediction errors. We attribute this fragility to a geometric limitation: Euclidean feature spaces, with polynomial volume growth, lead to distorted semantic representations and crowded, unstable decision boundaries. We propose HyperProtoSeg, a hyperbolic prototypical segmentation network that learns geometrically optimal class prototypes in the Poincaré ball. Leveraging the exponential expansion of hyperbolic space, it enforces large, uniform inter-class margins with low distortion, yielding well-separated and curvature-stable embeddings. For robust online adaptation, we introduce Hyperbolic Boundary Consistency Adaptation (HBCA), which partitions pixels by cross-view consistency into confident “core” and uncertain “boundary” sets. HBCA applies geodesic distance minimization to confident regions and a novel Hyperbolic Directional Consistency Loss to uncertain ones, preventing error amplification. Experiments on challenging domain-shift benchmarks (Cityscapes-to-ACDC, IDD-to-IDD-AW, SHIFT) show that HyperProtoSeg + HBCA improves over state-of-the-art CTTA methods by an average of 1.94%, 4.02%, and 1.24%, respectively, under severe structural shifts.
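To make the geometric ingredients concrete, the sketch below shows the standard Poincaré-ball geodesic distance and a nearest-prototype pixel assignment, the two operations underlying hyperbolic prototypical classification as described above. This is an illustrative, stdlib-only sketch, not the paper's implementation; the function names `poincare_distance` and `assign_to_prototype` are our own, and the code assumes unit curvature (c = 1).

```python
import math


def poincare_distance(x, y, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincare ball.

    d(x, y) = arccosh(1 + 2 * ||x - y||^2 / ((1 - ||x||^2) * (1 - ||y||^2)))

    Distances blow up as points approach the boundary, which is the
    exponential volume growth the abstract exploits for margin separation.
    """
    sq_diff = sum((a - b) ** 2 for a, b in zip(x, y))
    denom = (1.0 - sum(a * a for a in x)) * (1.0 - sum(b * b for b in y))
    return math.acosh(1.0 + 2.0 * sq_diff / (denom + eps))


def assign_to_prototype(feature, prototypes):
    """Label a pixel embedding by its geodesically nearest class prototype."""
    dists = [poincare_distance(feature, p) for p in prototypes]
    return min(range(len(dists)), key=dists.__getitem__)
```

In a full model the prototypes would be learned (and the "core" pixels adapted by minimizing this geodesic distance to their assigned prototype), but the classification rule itself reduces to the arg-min shown here.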