DGS: Dual Gradient and Semantic-Shift Guided Low-Rank Adaptation for Class Incremental Learning
Abstract
In Class-Incremental Learning (CIL), parameter-efficient fine-tuning of Pre-trained Models (PTMs) remains vulnerable to catastrophic forgetting as models adapt to new tasks. The prevalent mitigation strategy is to constrain gradients to the orthogonal subspaces of past tasks, but such rigid gradient constraints hinder plasticity. In this paper, we propose a novel CIL framework, Dual Gradient and Semantic-Shift Guided Low-Rank Adaptation (DGS), which balances stability and plasticity via gradient fusion and maintains representation consistency through classifier and patch-token alignment. Specifically, our method introduces a Dual Gradient update strategy that first derives a base subspace projection from the PTM and then fuses task-specific LoRA gradients with their aligned counterparts through interpolated combination. This design promotes knowledge retention without sacrificing task-specific expressiveness. Furthermore, we employ a Classifier Alignment mechanism with semantic-shift estimation, based on calibrated prototype statistics, to mitigate classifier shift, and introduce a novel Patch-level Alignment loss to preserve feature consistency across tasks. Extensive experiments on six standard benchmarks demonstrate that our approach consistently outperforms existing CIL methods, highlighting its effectiveness and generalization capability in continual learning scenarios.
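As a minimal sketch of the interpolated gradient fusion described above: the idea is to combine a raw task-specific gradient with its projection onto the orthogonal complement of a past-task subspace, rather than applying a hard orthogonality constraint. The function name, the interpolation weight `lam`, and the use of an explicit orthonormal basis are illustrative assumptions; the abstract does not specify the exact fusion rule.

```python
import numpy as np

def fuse_gradients(g_task, past_basis, lam=0.5):
    """Interpolate a raw LoRA gradient with its orthogonally projected
    counterpart (a hypothetical instance of dual-gradient fusion).

    g_task:     (d,) raw gradient for the current task
    past_basis: (d, k) orthonormal basis spanning past-task directions
    lam:        interpolation weight in [0, 1] (illustrative; 1.0 keeps
                the raw gradient, 0.0 enforces full orthogonality)
    """
    # Component of the gradient that lies inside the past-task subspace
    in_span = past_basis @ (past_basis.T @ g_task)
    # Stability-preserving gradient: orthogonal to past-task directions
    g_orth = g_task - in_span
    # Interpolated fusion trades plasticity (g_task) against stability (g_orth)
    return lam * g_task + (1.0 - lam) * g_orth
```

With `lam` between the two extremes, updates retain some movement along past-task directions (plasticity) while damping the interference that a fully unconstrained update would cause.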