

Poster

CL-LoRA: Continual Low-Rank Adaptation for Rehearsal-Free Class-Incremental Learning

Jiangpeng He · Zhihao Duan · Fengqing Zhu


Abstract:

Class-Incremental Learning (CIL) aims to learn new classes sequentially while retaining the knowledge of previously learned classes. Recently, pre-trained models (PTMs) combined with parameter-efficient fine-tuning (PEFT) have shown remarkable performance in rehearsal-free CIL without requiring exemplars from previous tasks. However, existing adapter-based methods, which incorporate lightweight learnable modules into PTMs for CIL, create new adapters for each new task, leading to both parameter redundancy and a failure to leverage shared knowledge across tasks. In this work, we propose ContinuaL Low-Rank Adaptation (CL-LoRA), which introduces a novel dual-adapter architecture combining task-shared adapters to learn cross-task knowledge and task-specific adapters to capture the unique features of each new task. Specifically, the shared adapters utilize random orthogonal matrices and leverage knowledge distillation with gradient reassignment to preserve essential shared knowledge. In addition, we introduce learnable block-wise weights for task-specific adapters, which mitigate inter-task interference while maintaining the model's plasticity. Through comprehensive experiments across multiple benchmark datasets, we demonstrate that CL-LoRA consistently outperforms state-of-the-art methods while using fewer trainable parameters, establishing a more efficient and scalable paradigm for continual learning with pre-trained models.
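
To make the dual-adapter idea concrete, the following is a minimal PyTorch sketch of how a task-shared low-rank adapter (with a frozen random orthogonal down-projection) and per-task adapters (with a learnable block-wise weight) could be attached to a frozen pre-trained layer. All class and parameter names (SharedLoRA, TaskSpecificLoRA, block_weight, DualAdapterLayer) are illustrative assumptions, not the authors' released implementation, and the knowledge-distillation and gradient-reassignment components are omitted.

```python
# Illustrative sketch of a dual-adapter (shared + task-specific) LoRA layer.
# Names and hyperparameters are assumptions for demonstration only.
import torch
import torch.nn as nn


def random_orthogonal(rows: int, cols: int) -> torch.Tensor:
    """Return a fixed matrix with orthonormal columns (kept non-trainable)."""
    q, _ = torch.linalg.qr(torch.randn(rows, cols))
    return q


class SharedLoRA(nn.Module):
    """Task-shared low-rank adapter: the down-projection A is a frozen random
    orthogonal matrix; the up-projection B is trained and reused across tasks."""

    def __init__(self, dim: int, rank: int):
        super().__init__()
        self.register_buffer("A", random_orthogonal(dim, rank))  # frozen
        self.B = nn.Parameter(torch.zeros(rank, dim))             # shared, trainable

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return (x @ self.A) @ self.B


class TaskSpecificLoRA(nn.Module):
    """Task-specific low-rank adapter gated by a learnable block-wise weight."""

    def __init__(self, dim: int, rank: int):
        super().__init__()
        self.A = nn.Parameter(torch.randn(dim, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, dim))
        self.block_weight = nn.Parameter(torch.ones(1))  # learnable block-wise weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block_weight * ((x @ self.A) @ self.B)


class DualAdapterLayer(nn.Module):
    """A frozen pre-trained linear layer combined with both adapter types."""

    def __init__(self, dim: int = 768, rank: int = 8):
        super().__init__()
        self.dim, self.rank = dim, rank
        self.frozen = nn.Linear(dim, dim)
        self.frozen.requires_grad_(False)        # pre-trained weights stay fixed
        self.shared = SharedLoRA(dim, rank)      # updated on every task
        self.task_adapters = nn.ModuleList()     # one adapter appended per task

    def new_task(self) -> None:
        self.task_adapters.append(TaskSpecificLoRA(self.dim, self.rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.frozen(x) + self.shared(x)
        if len(self.task_adapters) > 0:
            out = out + self.task_adapters[-1](x)  # current task's adapter
        return out


layer = DualAdapterLayer()
layer.new_task()                                  # begin a new incremental task
print(layer(torch.randn(2, 768)).shape)           # torch.Size([2, 768])
```

In this sketch, only the shared up-projection, the current task's adapter, and its block-wise weight receive gradients, which is what keeps the number of trainable parameters small relative to spawning a full new adapter set per task.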
