FedCART: Tackling Long-Tailed Distributions in Federated Adversarial Training via Classifier Refinement
Abstract
Growing real-world privacy and security demands have spurred interest in adversarially robust Federated Learning (FL). While Adversarial Training (AT) is a well-established defense in centralized learning, its extension to the federated setting, known as Federated Adversarial Training (FAT), faces substantial challenges from data heterogeneity across clients. Existing FAT methods have made notable progress, but they typically assume a balanced global data distribution, an assumption that rarely holds in practice, where long-tailed distributions prevail. This work first identifies and diagnoses the severe performance degradation of FAT under long-tailed data, attributing it to skewed feature representations and impaired classifier discriminability. To address this, we propose FedCART, a novel FAT framework that decouples the model into a shared feature extractor and a dual-classifier structure. On the client side, a representation-alignment loss enhances adversarial robustness, while gradient-based class prototypes are extracted for classifier calibration. On the server side, models and prototype sets are aggregated to synthesize balanced virtual features, which are used to re-train an auxiliary classifier that mitigates long-tailed bias. Extensive experiments demonstrate that FedCART improves both clean accuracy and adversarial robustness, outperforming state-of-the-art FAT methods. To the best of our knowledge, this is the first work to systematically investigate and address FAT under long-tailed distributions, a step toward practical adversarial robustness in FL. Our code will be publicly available upon acceptance.
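To make the server-side re-training step concrete, the following is a minimal sketch of how balanced virtual features could be synthesized from aggregated class prototypes. The function names, the Gaussian perturbation model, and all parameters (`n_per_class`, `noise_std`) are illustrative assumptions for exposition, not the paper's actual procedure.

```python
import numpy as np

def synthesize_balanced_features(prototypes, n_per_class, noise_std=0.1, seed=0):
    """Sample an equal number of virtual features per class by perturbing
    each aggregated class prototype with Gaussian noise.
    NOTE: a hypothetical stand-in for FedCART's synthesis step.

    prototypes: dict mapping class id -> 1-D feature-space prototype vector
    returns: (features, labels), class-balanced by construction
    """
    rng = np.random.default_rng(seed)
    feats, labels = [], []
    for cls, proto in sorted(prototypes.items()):
        # Each virtual feature is the prototype plus small isotropic noise.
        noise = noise_std * rng.standard_normal((n_per_class, proto.shape[-1]))
        feats.append(proto + noise)
        labels.append(np.full(n_per_class, cls, dtype=np.int64))
    return np.concatenate(feats), np.concatenate(labels)
```

A balanced set like this could then serve as training data for the auxiliary classifier, so that head and tail classes contribute equally to its decision boundaries regardless of the skew in the real data.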