Domain-Sensitive Federated Learning with Fisher-Informed Pruning
Abstract
Federated Learning (FL) is a prominent distributed machine learning paradigm that enables clients to collaboratively train a shared model without exchanging raw data. In practice, however, clients often possess data from multiple domains, which poses significant challenges to model efficiency and generalization. In this paper, we propose \texttt{FedFIP}, a domain-sensitive federated pruning framework that preserves domain-invariant structures while retaining domain-specific representations. First, we design a Domain-Sensitive Fisher Pruning (DSFP) module that estimates per-domain channel importance via Fisher information and uploads the resulting importance scores to the server, which derives a globally shared pruning mask. To account for domain heterogeneity, each client then reuses its local Fisher information to selectively reactivate domain-specific channels, yielding personalized sparse models that remain structurally aligned across clients while adapting to local data. To further enhance performance, we introduce a Domain-Sensitive Regularization (DSR) module, in which the server builds domain prototypes from the uploaded importance signals and broadcasts them back to clients. Guided by these prototypes, a structure-contrastive loss strengthens intra-domain consistency and inter-domain discriminability. Finally, we propose a structure-aware aggregation algorithm that fuses the heterogeneous personalized architectures into a domain-generalized global model. Extensive experiments on multi-domain benchmarks demonstrate that \texttt{FedFIP} surpasses state-of-the-art FL baselines while substantially reducing model size.
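To make the DSFP idea concrete, the sketch below shows one common way to score channels with an empirical (diagonal) Fisher proxy and derive a keep-ratio pruning mask. This is a minimal PyTorch sketch under our own assumptions, not the paper's implementation; names such as `estimate_channel_fisher`, `global_topk_mask`, and the choice of 1-D parameters as channel proxies are illustrative.

```python
import torch
import torch.nn.functional as F

def estimate_channel_fisher(model, loader, device="cpu"):
    # Channel-level proxy: 1-D parameters (e.g. BatchNorm scale/bias vectors)
    # have one entry per channel. We accumulate squared gradients over the
    # client's local data, i.e. an empirical diagonal-Fisher estimate.
    fisher = {name: torch.zeros_like(p)
              for name, p in model.named_parameters() if p.dim() == 1}
    model.to(device).train()
    for x, y in loader:
        model.zero_grad()
        loss = F.cross_entropy(model(x.to(device)), y.to(device))
        loss.backward()
        for name, p in model.named_parameters():
            if name in fisher and p.grad is not None:
                fisher[name] += p.grad.detach() ** 2
    return fisher

def global_topk_mask(fisher, keep_ratio=0.5):
    # Keep the top `keep_ratio` fraction of channels by Fisher score;
    # everything below the threshold is pruned (mask value 0).
    scores = torch.cat([f.flatten() for f in fisher.values()])
    k = max(1, int(keep_ratio * scores.numel()))
    threshold = torch.topk(scores, k).values.min()
    return {name: (f >= threshold).float() for name, f in fisher.items()}
```

In a full pipeline, as the abstract describes, each client would upload its per-domain scores, the server would aggregate them before computing the shared mask, and each client would then locally re-enable its highest-scoring domain-specific channels on top of that mask.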
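The DSR objective can likewise be read as a prototype-based contrastive term. The toy sketch below is our own InfoNCE-style rendering, not the paper's loss: `structure_contrastive_loss`, the embedding `z`, and the temperature `tau` are all hypothetical names, and the loss simply pulls a client's embedding toward its own domain prototype while pushing it away from the other domains' prototypes.

```python
import torch
import torch.nn.functional as F

def structure_contrastive_loss(z, prototypes, domain_idx, tau=0.1):
    # z:          (d,) client-side structure/feature embedding
    # prototypes: (num_domains, d) server-broadcast domain prototypes
    # domain_idx: index of this client's domain (the positive class)
    z = F.normalize(z, dim=0)
    protos = F.normalize(prototypes, dim=1)
    logits = protos @ z / tau  # cosine similarity to every prototype
    target = torch.tensor([domain_idx])
    # Cross-entropy over prototype similarities: maximize agreement with the
    # client's own domain (intra-domain consistency) and suppress the rest
    # (inter-domain discriminability).
    return F.cross_entropy(logits.unsqueeze(0), target)
```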