Exemplar-Free Class Incremental Learning via Preserving Class-Discriminative Structure
Abstract
Exemplar-Free Class Incremental Learning (EFCIL) aims to enable models to learn new classes sequentially without retaining samples from previous tasks. While recent approaches leverage pre-trained models with parameter-efficient tuning to mitigate forgetting, they often overlook a crucial cause of forgetting: the collapse of the class-discriminative structure. This structure comprises two interdependent components: intra-class structure, which characterizes the shape of each individual class, and inter-class structure, which characterizes the global geometric relationships among class prototypes. We reveal that catastrophic forgetting stems from the simultaneous deterioration of both structures. To address this, we propose a unified framework that preserves the class-discriminative structure: it maintains intra-class structure by realigning class means and covariances so that each class retains its shape as features drift, and preserves inter-class structure by stabilizing the angular relationships between samples and old class prototypes. Extensive experiments demonstrate that our framework outperforms leading existing methods on multiple EFCIL benchmarks, validating that preserving the class-discriminative structure is crucial for mitigating catastrophic forgetting.
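The two structural objectives described above can be illustrated with a minimal sketch. The function names and exact loss forms below are illustrative assumptions, not the paper's actual formulation: intra-class drift is penalized via squared mean shift plus the Frobenius distance between old and new class covariances, and inter-class drift via changes in cosine similarity between samples and old prototypes.

```python
import numpy as np

def intra_class_structure_loss(old_mean, old_cov, new_mean, new_cov):
    # Hypothetical intra-class term: penalize drift in a class's mean
    # and covariance (its "shape") between tasks.
    mean_term = np.sum((new_mean - old_mean) ** 2)
    cov_term = np.sum((new_cov - old_cov) ** 2)  # squared Frobenius norm
    return mean_term + cov_term

def _cosine(a, b):
    # Row-wise cosine similarity matrix between two sets of vectors.
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def inter_class_structure_loss(new_feats, old_feats, prototypes):
    # Hypothetical inter-class term: keep the angles between samples
    # and old class prototypes stable across the model update.
    return np.mean((_cosine(new_feats, prototypes)
                    - _cosine(old_feats, prototypes)) ** 2)
```

When the new statistics and features exactly match the old ones, both losses are zero; any structural drift makes them positive, so minimizing their sum discourages collapse of either structure.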