Taming Noise-Induced Prototype Degradation for Privacy-Preserving Personalized Federated Fine-Tuning
Yuhua Wang ⋅ Qinnan Zhang ⋅ Xiaodong Li ⋅ Huan Zhang ⋅ Yifan Sun ⋅ Wangjie Qiu ⋅ Hainan Zhang ⋅ Yongxin Tong ⋅ Zhiming Zheng
Abstract
Prototype-based Personalized Federated Learning (ProtoPFL) enables efficient cross-domain adaptation by communicating compact class prototypes, but directly sharing prototypes raises privacy risks. A common defense applies per-example $\ell_2$ clipping before prototype computation to bound sensitivity, then adds isotropic Gaussian noise at upload time to enforce Local Differential Privacy (LDP). However, this Isotropic Gaussian Prototype Perturbation (IGPP) often over-perturbs key discriminative dimensions and struggles to trade off the clipping threshold against representation fidelity. We propose VPDR, a client-side privacy plug-in that can be seamlessly integrated into existing ProtoPFL frameworks. Motivated by the statistical prior that dimension-wise class variance reflects discriminability, we introduce Variance-adaptive Prototype Perturbation (VPP), which uses groupwise calibration to apply less noise to discriminative subspaces, preserving semantic separability while ensuring privacy. We further design Distillation-guided Clipping Regularization (DCR), which encourages feature norms to concentrate adaptively near the predefined clipping threshold while maintaining prediction consistency. Theoretical analysis shows that our groupwise noise provides privacy guarantees no weaker than those of the isotropic mechanism under the same privacy constraints. Extensive experiments on multiple cross-domain benchmarks demonstrate that VPDR achieves a superior privacy-utility trade-off, outperforming IGPP in personalized federated fine-tuning while maintaining strong privacy protection under realistic attack scenarios.
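To make the two perturbation mechanisms contrasted in the abstract concrete, the following is a minimal sketch of per-example $\ell_2$ clipping, the isotropic IGPP baseline, and a groupwise variance-adaptive scheme in the spirit of VPP. The function names and the `group_scale` multipliers are illustrative assumptions, not the paper's implementation; in particular, the abstract does not specify how groupwise noise is calibrated, so the multipliers here are a stand-in that would in practice be chosen so the overall privacy budget matches the isotropic mechanism.

```python
import math
import random

def clip_l2(x, clip_c):
    """Per-example L2 clipping: rescale x so that ||x||_2 <= clip_c."""
    norm = math.sqrt(sum(v * v for v in x))
    scale = min(1.0, clip_c / norm) if norm > 0 else 1.0
    return [v * scale for v in x]

def igpp(proto, clip_c, sigma, rng):
    """Isotropic Gaussian Prototype Perturbation (baseline): the same
    noise scale sigma * clip_c is applied to every dimension, including
    the discriminative ones the abstract argues get over-perturbed."""
    return [p + rng.gauss(0.0, sigma * clip_c) for p in proto]

def vpp_groupwise(proto, clip_c, sigma, group_of, group_scale, rng):
    """Groupwise variance-adaptive perturbation (sketch): dimensions are
    partitioned into groups (group_of[i] gives dimension i's group), and
    each group receives its own noise multiplier, smaller for subspaces
    with high dimension-wise class variance (i.e., more discriminative).
    `group_scale` is a hypothetical calibration, not the paper's rule."""
    return [p + rng.gauss(0.0, sigma * clip_c * group_scale[group_of[i]])
            for i, p in enumerate(proto)]
```

For example, a clipped feature `clip_l2([3.0, 4.0], 1.0)` has unit norm, after which either mechanism adds zero-mean Gaussian noise whose per-dimension scale is uniform under `igpp` but group-dependent under `vpp_groupwise`.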