Degradation-Consistent Test-Time Adaptation for All-in-One Image Restoration
Abstract
All-in-one image restoration (AiOIR) methods have made remarkable progress in handling diverse degradations. However, their performance often deteriorates when the test distribution deviates from the training distribution. Exploring test-time adaptation for AiOIR is therefore crucial. To adapt a pre-trained AiOIR model to unseen degradation distributions without access to source data or retraining, two key challenges must be addressed: designing reliable pseudo-supervision and stabilizing the adaptation process. Observing that multiple degraded versions of the same scene should map to a consistent clean image, we propose Degradation-Consistent Test-Time Adaptation (DCTTA). DCTTA comprises three core components: (1) test-time redegradation generation, which leverages a diffusion-based generator to construct pseudo degraded–clean pairs for distribution alignment; (2) degradation-guided image restoration, which enforces domain adaptation via a self-supervised consistency loss; and (3) test-time important parameter selection, which selectively updates degradation-sensitive parameters to ensure stable adaptation while preserving pre-trained knowledge. Extensive experiments across multiple tasks and challenging domain shifts demonstrate that DCTTA consistently outperforms state-of-the-art AiOIR baselines, achieving up to a 4.57 dB PSNR improvement on the Rain100H dataset.
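The degradation-consistency idea behind DCTTA can be sketched as a simple self-supervised objective: if the same clean scene is degraded in two different ways, the restorations of both versions should agree. The snippet below is a minimal illustration of such a consistency loss, not the paper's implementation; `consistency_loss` and the toy arrays are hypothetical stand-ins for the model's outputs.

```python
import numpy as np

def consistency_loss(restored_a, restored_b):
    """Mean-squared discrepancy between the restorations of two
    degraded versions of the same scene (illustrative sketch only)."""
    return float(np.mean((restored_a - restored_b) ** 2))

# Toy stand-ins for restorations of one scene under two degradations,
# e.g. a rainy version and a hazy version of the same image.
clean = np.ones((4, 4))
restored_rainy = clean + 0.1
restored_hazy = clean - 0.1

loss = consistency_loss(restored_rainy, restored_hazy)
# A loss of zero would indicate perfectly consistent restorations;
# minimizing this at test time pushes the model toward a shared clean image.
```

In the paper's setting this signal supplements the pseudo degraded-clean pairs produced by the diffusion-based generator, so no ground-truth clean images are needed during adaptation.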