AdaPrior: Bayesian-Inspired Adaptive Prior Correction for Long-Tailed Continual Learning
Abstract
Long-Tailed Class-Incremental Learning (LTCIL) combines two fundamental challenges: \textit{catastrophic forgetting} of past tasks and \textit{severe class imbalance}. Existing approaches mitigate one challenge at a time, through rehearsal, reweighting, or classifier alignment, but they typically assume \emph{static priors} and rely on multi-stage training. In contrast, we propose \textbf{AdaPrior}, a simple Bayesian framework that treats LTCIL as a problem of \emph{dynamic prior misalignment}. Our key idea is to estimate the model-induced class priors online via an exponential moving average and to use them for (i) debiasing during training (the \textbf{AdaPrior Loss}) and (ii) lightweight post-hoc logit correction at inference. The combined approach unifies loss-level and inference-level debiasing without additional training stages or heavy computation. We provide a theoretical analysis showing that AdaPrior's prior estimator converges to the true model prior and that its logit adjustment yields well-calibrated posteriors under mild assumptions. Extensive experiments on CIFAR100-LT, Food-101-LT, ImageNet-LT-subset, and iNaturalist18-subset demonstrate consistent gains over recent LTCIL baselines. Beyond accuracy, AdaPrior improves calibration and reduces forgetting, making it a practical and scalable solution for long-tailed continual learning.
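The mechanism outlined above (an EMA estimate of the model-induced prior, used for logit adjustment at both training and inference time) can be sketched compactly. The PyTorch snippet below is a minimal illustration under assumptions not stated in the abstract: the EMA momentum, the scaling temperature `tau`, and all names (`AdaPriorSketch`, `update`, `loss`, `correct_logits`) are hypothetical, and the paper's exact AdaPrior Loss may differ.

```python
import torch
import torch.nn.functional as F


class AdaPriorSketch:
    """Minimal sketch of EMA-based prior estimation with logit adjustment.

    Hyperparameters (momentum, tau) and method names are illustrative,
    not the paper's exact formulation.
    """

    def __init__(self, num_classes: int, momentum: float = 0.99, tau: float = 1.0):
        self.momentum = momentum
        self.tau = tau
        # Start from a uniform prior over the classes seen so far.
        self.prior = torch.full((num_classes,), 1.0 / num_classes)

    @torch.no_grad()
    def update(self, logits: torch.Tensor) -> None:
        # Model-induced prior: average softmax response over the batch,
        # folded into a running exponential moving average.
        self.prior = self.prior.to(logits.device)
        batch_prior = F.softmax(logits, dim=1).mean(dim=0)
        self.prior = self.momentum * self.prior + (1.0 - self.momentum) * batch_prior

    def loss(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # Train-time debiasing: add tau * log(prior) to the logits so that
        # cross-entropy is taken against a prior-adjusted posterior.
        adjusted = logits + self.tau * torch.log(self.prior + 1e-12)
        return F.cross_entropy(adjusted, targets)

    @torch.no_grad()
    def correct_logits(self, logits: torch.Tensor) -> torch.Tensor:
        # Post-hoc inference correction: divide the predicted posterior by
        # the estimated prior, i.e. subtract tau * log(prior) in logit space.
        return logits - self.tau * torch.log(self.prior + 1e-12)
```

In this reading, a training step would call `update` on the current batch logits and optimize `loss`, while at test time `correct_logits` is applied before the argmax; the EMA lets the estimated prior track the drift induced by new tasks, which is what "dynamic prior misalignment" refers to.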