Beyond the Static-World: Lifelong Learning for All-in-One Medical Image Restoration
Abstract
All-in-One Medical Image Restoration (MedIR) models offer a promising path toward generalized medical imaging intelligence but face two critical spatiotemporal challenges: 1) Spatial Modality Interference, where conflicting gradients from diverse modalities (e.g., MRI, CT, PET) degrade performance; and 2) a Temporal Static-World Assumption that ignores the continual data streams of real-world clinical settings, leading to catastrophic forgetting. To address this dual challenge, we propose Resilient On-the-fly Medical Enhancement (ROME), a novel lifelong learning framework governed by a "Disentangle-Optimize-Consolidate" paradigm. ROME first resolves the foundational modality conflict through its Modality-Invariant Disentanglement via Adversarial Balancing (MIDAB) module, which strikes a strategic adversarial balance between a "content preservation force" and a "modality erasure force" to learn a disentangled, unified feature manifold. Building on this stable foundation, the Adaptive Feature Consolidation (AFC) module combats forgetting: it dynamically locates an optimal feature consolidation point via a prediction network, enforced by a novel Diversity Loss that ensures robust continual learning. Experiments demonstrate that ROME not only achieves state-of-the-art performance in static settings but also exhibits superior resilience on rigorous domain-incremental benchmarks, reducing average catastrophic performance degradation by over 10%.
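The adversarial balance the abstract describes can be illustrated with a minimal numerical sketch. The snippet below is a hypothetical simplification, not the paper's actual MIDAB objective: it pairs a content-preservation force (a reconstruction error) with a modality-erasure force (a term that rewards pushing a modality classifier's predictions toward the uniform distribution, i.e., toward maximum entropy, so that features carry no modality cues). The function name, the MSE/entropy choices, and the weight `lam` are all illustrative assumptions.

```python
import numpy as np

def adversarial_balance_loss(restored, target, modality_logits, lam=0.5):
    """Hypothetical sketch of two opposing forces, not the paper's exact loss.

    content term : mean squared reconstruction error ("content preservation").
    erasure term : negative entropy of the modality classifier's softmax;
                   minimizing it drives predictions toward uniform,
                   erasing modality-identifying information.
    """
    # Content preservation force: keep the restored image close to the target.
    content_loss = np.mean((restored - target) ** 2)

    # Modality erasure force: numerically stable softmax over modality logits,
    # then reward high entropy (uniform, modality-agnostic predictions).
    shifted = modality_logits - modality_logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=-1).mean()
    erasure_loss = -entropy  # lower when predictions are closer to uniform

    return content_loss + lam * erasure_loss
```

Under this sketch, features whose modality logits are uniform (the classifier cannot tell MRI from CT) score a lower combined loss than features the classifier confidently identifies, which is the direction of the "modality erasure" force.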