Dual-Estimator: Decoupling Global and Local Semantic Shift for Drift Compensation in Class-Incremental Learning
Abstract
Continual Learning (CL) provides an effective paradigm for acquiring new knowledge, and the principle of learning without retaining past samples has led to exemplar-free CL, which better matches practical conditions. A key challenge, however, is semantic shift: representations of past classes must be reliably updated to align with the current feature space. Drift compensation serves this purpose, but it commonly assumes uniform semantic distributions and uniform shifts, which is unrealistic for random data streams. To address this, we propose the Dual-Estimator (Dual-E), which decouples global and local semantic shifts and tackles both forms of non-uniformity. Specifically, to handle intra-task non-uniform semantic distributions, which limit effective compensation for low-frequency semantics, Dual-E incorporates a mixture-of-experts estimator comprising multiple networks that model semantic shifts across diverse local representation spaces. To handle inter-task non-uniformity in semantic shifts, where uniform full-scale compensation can overlook the varying degrees of semantic change across classes, Dual-E employs a low-rank estimator with an embedded low-rank network that prioritizes global semantic trends for classes exhibiting larger shifts. Dual-E updates via analytical solutions within a few epochs, enabling efficient plug-in integration with existing exemplar-free methods. Extensive experiments on diverse datasets demonstrate the advantages of Dual-E over state-of-the-art approaches. The code will be released.
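To make the decomposition concrete, the sketch below illustrates the overall idea in NumPy: old-class prototypes are compensated by the sum of a global low-rank shift and a locally weighted mixture-of-experts shift. All names here (`experts`, `centers`, `moe_shift`, `U`, `V`, `dual_e_compensate`) and the linear form of each expert are illustrative assumptions; the paper's estimators are learned networks updated via analytical solutions, which this toy code does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K, R = 16, 4, 2  # feature dim, number of experts, rank of global estimator

# Mixture-of-experts estimator (hypothetical simplification): each expert is a
# linear map modeling semantic shift in one local region of the feature space,
# and anchors ("centers") define those regions.
experts = [rng.normal(scale=0.1, size=(D, D)) for _ in range(K)]
centers = rng.normal(size=(K, D))

def moe_shift(z):
    # Softly assign feature z to experts by distance to the anchors,
    # then mix the experts' local shift predictions.
    d = np.linalg.norm(centers - z, axis=1)
    w = np.exp(-d) / np.exp(-d).sum()
    return sum(w_k * (E_k @ z) for w_k, E_k in zip(w, experts))

# Low-rank estimator (illustrative): a rank-R map U @ V capturing the
# dominant global shift trend shared across classes.
U = rng.normal(scale=0.1, size=(D, R))
V = rng.normal(scale=0.1, size=(R, D))

def dual_e_compensate(proto):
    # Old-class prototype compensated by global (low-rank) + local (MoE) shift.
    return proto + U @ (V @ proto) + moe_shift(proto)

old_proto = rng.normal(size=D)
new_proto = dual_e_compensate(old_proto)
```

The point of the decoupling is that the low-rank term supplies a coarse, class-shared correction even for classes with large shifts, while the expert mixture refines it per local region of the representation space.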