Robustness Under Data Scarcity: Few-Shot Continual Adversarial Training for Evolving Threats
Abstract
Deep learning models remain highly vulnerable to evolving adversarial attacks. Existing continual adversarial training approaches typically assume abundant adversarial data at each stage, whereas real-world scenarios often provide only limited data. This paper addresses the setting of Few-shot Continual Adversarial Training, where only a small number of adversarial examples are available per stage, a setting that makes robust generalization difficult and exacerbates catastrophic forgetting. To tackle these challenges, we propose a novel continual adversarial training framework with three key components: (i) an Adversarial Margin loss that explicitly pushes clean samples away from decision boundaries to enhance feature discrimination; (ii) a Gaussian mixture model Prototype Replay strategy that synthesizes representative pseudo-features to preserve knowledge of past adversarial domains; and (iii) a Multi-Domain Balanced loss that guides updates to stabilize learning across diverse attack distributions. Extensive experiments on ImageNet-1K and CIFAR-100 demonstrate that our approach consistently outperforms state-of-the-art methods in both clean and robust accuracy across a variety of adversarial settings. The code will be released.
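To make component (ii) concrete, the core idea of prototype replay is to summarize each past stage's feature distribution with a compact generative model and sample pseudo-features from it instead of storing raw data. The sketch below is hypothetical and not the paper's implementation: it collapses the mixture to a single diagonal Gaussian per class for brevity, and all array shapes, names, and data are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for penultimate-layer features from a past stage:
# 3 classes, 100 samples each, 16-dimensional features.
features = rng.normal(size=(300, 16)) + np.repeat(np.arange(3), 100)[:, None]
labels = np.repeat(np.arange(3), 100)

# Fit one diagonal Gaussian per class (a 1-component simplification of a
# per-class GMM prototype); only means and variances are stored, not data.
protos = {}
for c in np.unique(labels):
    fc = features[labels == c]
    protos[c] = (fc.mean(axis=0), fc.var(axis=0) + 1e-6)

def sample_pseudo_features(c, n, rng):
    """Draw n pseudo-features for class c from its stored Gaussian prototype."""
    mean, var = protos[c]
    return rng.normal(mean, np.sqrt(var), size=(n, mean.shape[0]))

# At a later training stage, replayed pseudo-features can be mixed into the
# batch to preserve knowledge of this past adversarial domain.
replay = sample_pseudo_features(0, 8, rng)
print(replay.shape)  # (8, 16)
```

A full mixture (several components per class, fit with EM) would capture multimodal feature distributions; the storage cost stays small either way, which is what makes replay feasible under few-shot constraints.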