SAME: Sparse and Anchored Model Editing for Heterogeneous Incremental Learning under Limited Data
Abstract
Existing Incremental Learning (IL) methods are primarily evaluated under either a single-domain class-incremental setting or a multi-domain task-incremental setting with known task identifiers. However, these assumptions often fail to hold in real-world applications. To bridge this gap, we introduce Heterogeneous Incremental Learning (HIL), a new setting for evaluating IL methods under realistic and challenging conditions, where task boundaries are ambiguous or unknown, class distributions shift dynamically across environments, and training data is limited. Model editing is inherently well suited to HIL, as it allows efficient integration of new knowledge while preserving existing model capabilities. We therefore propose Sparse and Anchored Model Editing (SAME), a novel method for addressing HIL. Specifically, SAME sparsely and selectively updates task-relevant model parameters to extract compact, task-specific key–value knowledge pairs from limited data. Using these knowledge pairs, the model injects new-task knowledge under double-anchor constraints: a knowledge anchor aligns the features of the updated and original models, while a parameter anchor constrains the magnitude of parameter updates, ensuring stable and consistent knowledge injection. Our method solves HIL efficiently using only a few labeled examples and introduces no additional model parameters. Extensive experiments on 11 diverse vision-language datasets across 22 sequential tasks show that our method outperforms existing continual learning approaches by 6.8% in average accuracy while retaining 95.8% of the oracle model's performance, demonstrating strong stability and cross-domain generalization.
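To make the double-anchor idea concrete, the following is a minimal PyTorch sketch of one plausible form of the objective. It is an illustrative assumption, not the paper's exact formulation: `double_anchor_loss`, the MSE feature-alignment term, the squared-norm parameter penalty, and the weights `lambda_k`/`lambda_p` are all hypothetical names and choices, as is the binary-mask form of the sparse update.

```python
import torch
import torch.nn.functional as F

def double_anchor_loss(task_loss, feats_edited, feats_orig,
                       delta_params, lambda_k=1.0, lambda_p=0.1):
    """Hypothetical sketch of a double-anchor objective.

    - knowledge anchor: keep the edited model's features close to the
      frozen original model's features (here, an MSE alignment term);
    - parameter anchor: keep the sparse parameter update small in
      magnitude (here, a squared L2 penalty on the deltas).
    The specific loss forms and weights are assumptions for illustration.
    """
    knowledge_anchor = F.mse_loss(feats_edited, feats_orig.detach())
    parameter_anchor = sum(d.pow(2).sum() for d in delta_params)
    return task_loss + lambda_k * knowledge_anchor + lambda_p * parameter_anchor

def apply_sparse_update(param, delta, mask):
    """Sparse, selective edit: only the masked (task-relevant) entries
    of `param` receive the update `delta`; the rest stay untouched."""
    return param + mask * delta
```

In this reading, the knowledge anchor pulls the edited network's representations toward the original model's (preserving prior capabilities), while the parameter anchor keeps each edit small, so successive task injections remain stable.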