

Poster

FedCALM: Conflict-aware Layer-wise Mitigation for Selective Aggregation in Deeper Personalized Federated Learning

Hao Zheng · Zhigang Hu · Boyu Wang · Liu Yang · Meiguang Zheng · Aikun Xu


Abstract:

Server aggregation conflict is a key challenge in personalized federated learning (PFL). While existing PFL methods have achieved significant progress with shallow base models (e.g., four-layer CNNs), they often overlook the negative impact of deeper base models on personalization mechanisms. In this paper, we identify the phenomenon of deep model degradation in PFL: as base model depth increases, the model becomes more sensitive to local client data distributions, thereby exacerbating server aggregation conflicts and ultimately reducing overall model performance. Moreover, we show that these conflicts manifest as insufficient global average updates and mutual constraints between clients. Motivated by our analysis, we propose a two-stage conflict-aware layer-wise mitigation algorithm, which first constructs a conflict-free global update to alleviate negative conflicts, and then alleviates the conflicts between clients through a conflict-aware strategy. Notably, our method naturally leads to a selective mechanism that balances the tradeoff between the number of clients involved in aggregation and the tolerance for conflicts. Consequently, it can boost the positive contribution even for the clients with the greatest conflicts with the global update. Extensive experiments across multiple datasets and deeper base models demonstrate that FedCALM outperforms four state-of-the-art (SOTA) methods by up to 9.88% and seamlessly integrates into existing PFL methods with performance improvements of up to 9.01%. Moreover, FedCALM achieves comparable or even better communication and computational efficiency than other SOTA methods.
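To make the notion of "aggregation conflict" concrete, the sketch below resolves pairwise conflicts between client updates by projecting out the component of each update that points against another client's update (a PCGrad-style projection), then averages the results. This is only an illustrative sketch of conflict-aware aggregation in general, assuming client updates are flattened vectors; it is not the actual FedCALM two-stage, layer-wise procedure, whose details are not given in this abstract.

```python
import numpy as np

def resolve_conflicts(updates):
    """Illustrative conflict-aware aggregation (PCGrad-style sketch,
    NOT the FedCALM algorithm): whenever two client updates have a
    negative inner product (a conflict), remove from one update its
    component along the other, then average the resolved updates."""
    resolved = [u.copy() for u in updates]
    for i, u in enumerate(resolved):
        for j, v in enumerate(updates):
            if i == j:
                continue
            dot = u @ v
            if dot < 0:  # conflicting directions
                u -= (dot / (v @ v)) * v  # project out the conflicting component
    return np.mean(resolved, axis=0)

# Two conflicting client updates (their inner product is negative):
u1 = np.array([1.0, 0.0])
u2 = np.array([-1.0, 1.0])
g = resolve_conflicts([u1, u2])
# The resolved global update no longer opposes either client:
# g @ u1 >= 0 and g @ u2 >= 0
```

A plain average of `u1` and `u2` would partially cancel both clients' progress; the projection step yields a global update with a non-negative inner product with every client update, which is the flavor of "conflict-free global update" the abstract describes.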
