Representation-Steered Incremental Adapter Tuning for Class-Incremental Learning with Pre-Trained Models
Abstract
Class-Incremental Learning (CIL) aims to develop models that continuously learn new classes without forgetting previously learned ones. Recent advances combine pre-trained models with parameter-efficient fine-tuning, achieving promising results. However, these approaches typically allocate new trainable parameters for each task, causing the model size to grow linearly with the number of tasks. Moreover, they lack explicit mechanisms for structuring a coherent and discriminative representation space across tasks. To address these limitations, we propose Representation-Steered Incremental Adapter Tuning (RSIAT). RSIAT maintains a single shared adapter for all tasks, eliminating parameter growth during incremental learning. In the base task, we introduce a representation-steering loss that enhances discriminative feature learning while facilitating adaptation to future tasks. During incremental tasks, a residual autoencoder–based projector aligns feature distributions between old and new tasks, preserving representation consistency without over-constraining the shared adapter. Extensive experiments on six CIL benchmarks demonstrate that RSIAT outperforms state-of-the-art methods in both accuracy and parameter efficiency, achieving a superior stability–plasticity trade-off with minimal trainable parameters.
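The abstract names two structural components, a single shared adapter and a residual autoencoder–based projector, without giving their details. The PyTorch sketch below is purely illustrative of how such components are commonly realized; the class names SharedAdapter and ResidualAEProjector, the bottleneck widths, and the alignment_loss helper are all assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedAdapter(nn.Module):
    """One bottleneck adapter reused for every task (assumed design:
    no per-task parameters, so the model size stays constant)."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual form: frozen backbone features plus a learned correction.
        return x + self.up(F.relu(self.down(x)))

class ResidualAEProjector(nn.Module):
    """Hypothetical residual autoencoder that projects new-task features
    back toward the old-task feature distribution."""
    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Linear(dim, hidden)
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.decoder(F.relu(self.encoder(x)))

def alignment_loss(new_feats: torch.Tensor,
                   old_feats: torch.Tensor,
                   projector: ResidualAEProjector) -> torch.Tensor:
    """Assumed alignment objective: match projected new-task features to
    features from the pre-update model, rather than freezing the adapter."""
    return F.mse_loss(projector(new_feats), old_feats)
```

Under these assumptions, the projector absorbs the distribution shift between tasks, so consistency is enforced through the projected features instead of directly constraining the shared adapter's weights.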