Poster
LoRA Subtraction for Drift-Resistant Space in Exemplar-Free Continual Learning
Xuan Liu · Xiaobin Chang
Abstract:
In continual learning (CL), catastrophic forgetting often arises from feature drift. This challenge is particularly prominent in the exemplar-free continual learning (EFCL) setting, where samples from previous tasks cannot be retained. The model therefore struggles to maintain prior knowledge, leading to a more pronounced performance drop on older tasks. Mitigating feature drift is thus vital to ensuring consistent representations across tasks. Some EFCL methods aim to identify feature spaces that minimize the impact on previous tasks while accommodating new ones. However, they rely on static features or outdated statistics from old tasks, which prevents them from capturing the dynamic evolution of the feature space in CL and leads to performance degradation. In this paper, we introduce the Drift-Resistant Space (DRS), which effectively handles feature drift without requiring explicit feature modeling or the storage of previous tasks. A novel parameter-efficient fine-tuning method, Low-Rank Adaptation Subtraction (LoRA−), is proposed to develop the DRS. This method subtracts the LoRA weights of old tasks from the initial pre-trained weights before processing new task data, establishing the DRS for model training. LoRA− therefore enhances stability, improves efficiency, and simplifies implementation. Furthermore, stabilizing feature drift allows for better plasticity through learning with a triplet loss. Extensive experiments across multiple datasets show that our method consistently achieves state-of-the-art results, particularly for long sequences of learning tasks.
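The abstract's core operation, subtracting the merged LoRA updates of earlier tasks from the pre-trained weights to obtain a drift-resistant base before training on a new task, can be illustrated with a minimal PyTorch sketch. The names (LoRALinear, drift_resistant_weight), the rank and alpha values, and the toy tensors below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Linear layer with a frozen base weight plus a trainable low-rank update."""

    def __init__(self, base_weight: torch.Tensor, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        out_features, in_features = base_weight.shape
        # Frozen base weight (here: the drift-resistant base for the new task).
        self.register_buffer("base_weight", base_weight.clone())
        # Trainable low-rank factors for the current task.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta = self.lora_B @ self.lora_A * self.scaling
        return x @ (self.base_weight + delta).T

    def delta_weight(self) -> torch.Tensor:
        """Merged low-rank update learned for this task (B A, scaled)."""
        return (self.lora_B @ self.lora_A * self.scaling).detach()


def drift_resistant_weight(pretrained_weight: torch.Tensor,
                           old_task_deltas: list[torch.Tensor]) -> torch.Tensor:
    """Subtract the accumulated LoRA updates of earlier tasks from the
    pre-trained weight to obtain a drift-resistant base for the new task."""
    w = pretrained_weight.clone()
    for delta in old_task_deltas:
        w = w - delta
    return w


# Toy usage: two previous tasks, one new task.
torch.manual_seed(0)
w0 = torch.randn(64, 32)                        # pre-trained weight W_0
old_deltas = [torch.randn(64, 32) * 0.01,       # stand-ins for merged B A of old tasks
              torch.randn(64, 32) * 0.01]

w_drs = drift_resistant_weight(w0, old_deltas)  # base weight in the DRS
new_task_layer = LoRALinear(w_drs, rank=4)      # only the new LoRA factors are trained

x = torch.randn(8, 32)
out = new_task_layer(x)
print(out.shape)  # torch.Size([8, 64])
```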