DK-DDIL: Adaptive Knowledge Retention for Dynamic Domain-Incremental Learning in Medical Imaging
Abstract
Large-scale foundation models pretrained on massive datasets have demonstrated strong generalization in medical image analysis. However, they are typically trained on static datasets and struggle to cope with the continuously evolving nature of clinical data, where new imaging devices, institutions, and disease subtypes constantly emerge. While domain-incremental learning (DIL) offers a way to adapt sequentially without revisiting historical data, existing methods typically assume fixed label spaces and limited domain heterogeneity, limiting their applicability in real-world clinical scenarios. To address these challenges, we propose DK-DDIL, a rehearsal-free framework for dynamic DIL that integrates two synergistic modules: a Dynamic Adaptation Module (DAM) that employs dynamic rank selection and adaptive regularization to flexibly allocate model capacity under domain shift, and a Knowledge Inheritance and Refinement (KIR) module that stabilizes cross-domain knowledge transfer through selective adapter fusion and prototype-level contrastive refinement. Experiments on the Skin Pathology Diagnosis dataset, the Cyst-X 3D MRI cohort, and the OfficeHome benchmark show that DK-DDIL consistently outperforms state-of-the-art DIL approaches, highlighting its effectiveness and versatility across dynamic 2D medical, 3D medical, and natural image domains.
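To make the DAM idea concrete, the following is a minimal NumPy sketch of dynamic rank selection with adaptive regularization. The mapping from an estimated domain-shift magnitude to an adapter rank, the regularization schedule, and all function names here are illustrative assumptions, not the paper's exact formulation; the low-rank update follows the standard LoRA-style parameterization.

```python
import numpy as np

def select_rank(shift, r_min=2, r_max=16, tau=1.0):
    """Map an estimated domain-shift magnitude to an adapter rank.
    Larger shifts receive more capacity (higher rank).
    Hypothetical saturating rule, not the paper's criterion."""
    frac = 1.0 - np.exp(-shift / tau)          # saturates toward 1 as shift grows
    return int(round(r_min + frac * (r_max - r_min)))

def adaptive_reg(shift, lam_base=0.1):
    """Regularization strength shrinks as the shift grows, so the
    adapter is freer to move away from inherited weights (assumed schedule)."""
    return lam_base / (1.0 + shift)

def low_rank_update(d_out, d_in, rank, rng):
    """LoRA-style update W += B @ A, with B in R^{d_out x r}, A in R^{r x d_in}."""
    A = rng.standard_normal((rank, d_in)) * 0.01
    B = np.zeros((d_out, rank))                # zero-init so the update starts at 0
    return B @ A

rng = np.random.default_rng(0)
for shift in (0.1, 1.0, 5.0):
    r = select_rank(shift)
    lam = adaptive_reg(shift)
    dW = low_rank_update(64, 64, r, rng)
    print(f"shift={shift}: rank={r}, lambda={lam:.3f}, update shape={dW.shape}")
```

Under this sketch, a mild shift (0.1) yields a small rank with strong regularization, while a severe shift (5.0) allocates near-maximal rank with weak regularization, matching the abstract's notion of flexibly allocating capacity per domain.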