FedMOP: Achieving Enhanced Privacy and Performance in Federated Learning via Momentum Orthogonal Projection
Abstract
Federated Learning (FL) faces a fundamental dilemma: existing defenses against gradient leakage attacks (GLAs) invariably trade model performance for privacy protection, typically via noise injection or gradient clipping. We introduce Federated Learning with Momentum-Based Orthogonal Projection (FedMOP), a method that simultaneously achieves strong privacy guarantees and superior model performance. The key insight is to leverage initialization-based offset mechanisms that operate on orthogonal dimensions. For performance enhancement, FedMOP employs gradient orthogonal projection to counteract local drift, effectively adjusting each client's initial model for the round using global statistical context. For privacy protection, it introduces momentum-based trajectory offset hiding, which renders the offset vector inherently unrecoverable by constructing information barriers through private initialization and randomized evolution. These two mechanisms are synergistic rather than antagonistic. Theoretically, we prove convergence preservation and characterize the computationally infeasible inverse problem faced by attackers. Extensive experiments on CIFAR-10/100 and Tiny-ImageNet demonstrate that FedMOP not only defends effectively against state-of-the-art GLAs but also surpasses existing FL methods in both accuracy and convergence speed, validating its ability to jointly enhance privacy and performance in FL.
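The abstract does not spell out the projection formula, so the following is only a toy sketch of the classical orthogonal-projection operation the name suggests: removing from a gradient its component along an offset direction, so the remaining update is orthogonal to that direction. The function name `orthogonal_project` and the interpretation of `offset` as a client's drift/offset vector are illustrative assumptions, not details from the paper.

```python
import numpy as np

def orthogonal_project(grad, offset):
    """Remove the component of `grad` along `offset`, leaving the
    part of the gradient orthogonal to the offset direction."""
    denom = np.dot(offset, offset)
    if denom == 0.0:
        return grad  # zero offset: nothing to project out
    return grad - (np.dot(grad, offset) / denom) * offset

# Toy example: after projection, the update carries no component
# along the (hypothetical) offset direction.
g = np.array([3.0, 1.0, 2.0])       # stand-in for a local gradient
v = np.array([1.0, 0.0, 0.0])       # stand-in for an offset vector
g_perp = orthogonal_project(g, v)   # → [0.0, 1.0, 2.0]
```

Here `np.dot(g_perp, v)` is zero by construction, which captures the "operate on orthogonal dimensions" idea in the abstract at its most basic level.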