Towards Stable Federated Continual Test-Time Adaptation in the Wild
Abstract
Federated Learning (FL) enables collaborative model training while preserving privacy, but it struggles with client data heterogeneity and domain shifts during deployment. Although Personalized Federated Learning (PFL) mitigates heterogeneity, it typically requires labeled data from target clients, an assumption that is rarely practical. Test-Time Adaptation (TTA) offers label-free adaptation, yet applying it directly in a continual federated setting risks destabilizing the global model and causing catastrophic forgetting. To address this, we consider the Federated Continual Test-Time Adaptation (FedCTTA) setting, in which unlabeled clients arrive sequentially, requiring online adaptation and continuous global model updates. We propose BPFedCTTA, a framework that employs Bayesian Prior-guided Adaptation (BPA) for stable local adaptation via Maximum a Posteriori (MAP) estimation, and Uncertainty-Gated Single-client Aggregation (UGSA) to selectively integrate updates based on client uncertainty. This design balances adaptation with knowledge retention, thereby mitigating forgetting. Extensive experiments on cross-domain classification and segmentation show that BPFedCTTA outperforms existing FL, PFL, and TTA methods in both sequential adaptation and global model improvement. The source code will be made public upon acceptance.
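To make the uncertainty-gated aggregation idea concrete, the following is a minimal sketch, not the paper's implementation. It assumes uncertainty is measured by mean predictive entropy and that an accepted client update is blended into the global parameters by linear interpolation; the names `uncertainty_gated_aggregate`, `threshold`, and `alpha` are illustrative assumptions, since the abstract only states that updates are integrated selectively based on client uncertainty.

```python
import numpy as np

def prediction_entropy(probs):
    """Mean Shannon entropy of softmax predictions, shape (batch, classes)."""
    eps = 1e-12  # avoid log(0)
    return float(np.mean(-np.sum(probs * np.log(probs + eps), axis=1)))

def uncertainty_gated_aggregate(global_params, client_params, probs,
                                threshold, alpha=0.5):
    """Integrate a single client's adapted parameters only when its
    predictive entropy is below `threshold`; otherwise keep the global
    model unchanged. `alpha` controls the interpolation weight given
    to the client on acceptance (both hypothetical choices)."""
    if prediction_entropy(probs) > threshold:
        # Client is too uncertain: reject its update to protect the
        # global model from destabilization and forgetting.
        return {k: v.copy() for k, v in global_params.items()}
    # Confident client: blend its parameters into the global model.
    return {k: (1 - alpha) * global_params[k] + alpha * client_params[k]
            for k in global_params}
```

A confident client (low-entropy predictions) moves the global parameters toward its adapted ones, while a highly uncertain client leaves the global model untouched.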