GDFA: Geometry-Driven Federated Unlearning with Directional Task Vector Alignment
Abstract
Federated Learning (FL) is a decentralized framework that enables collaborative training across clients while preserving the privacy of their local data. However, when deletion requests arise under privacy regulations, efficiently removing the data contributions of specific target clients is challenging. Existing unlearning methods face significant limitations when unlearning specific target clients under Non-IID (Non-Independent and Identically Distributed) data distributions in FL. Models located in sharp regions of the loss landscape can suffer catastrophic knowledge loss from minor parameter changes, and this forgetting is exacerbated by the conflicting parameter updates that Non-IID data distributions induce across clients. Empirically, we observe that such conflicting updates generate misaligned task vectors that fail to isolate target knowledge. We therefore exploit the geometry of the loss landscape when unlearning specific target clients, and demonstrate that migrating the model to flat regions enhances unlearning robustness in Non-IID FL. Accordingly, we introduce GDFA, a framework that first transitions the global model to a flat region of the loss landscape. The relevant clients then generate unlearning task vectors, which GDFA filters to retain only directionally consistent components. This process isolates shared knowledge components prior to their precise removal through reverse vector aggregation, maximizing knowledge retention. Extensive experiments demonstrate that GDFA outperforms state-of-the-art methods in unlearning efficacy and efficiency across diverse datasets and architectures, with minimal accuracy loss on retained tasks.
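To make the filtering-and-removal step concrete, below is a minimal sketch (not the authors' implementation) of how directionally consistent unlearning task vectors could be selected via cosine similarity and then subtracted from the global model through reverse aggregation. All names (`directional_filter`, `reverse_aggregate`, `cos_threshold`, `scale`) and the flattened-parameter representation are illustrative assumptions.

```python
# Sketch: directional filtering of unlearning task vectors + reverse aggregation.
# Assumes each task vector is a flattened NumPy array of parameter deltas.
import numpy as np

def directional_filter(task_vectors, cos_threshold=0.0):
    """Keep only task vectors whose direction agrees with the mean direction."""
    mean_vec = np.mean(task_vectors, axis=0)
    mean_unit = mean_vec / (np.linalg.norm(mean_vec) + 1e-12)
    kept = []
    for v in task_vectors:
        cos = v @ mean_unit / (np.linalg.norm(v) + 1e-12)
        if cos > cos_threshold:          # directionally consistent component
            kept.append(v)
    return kept

def reverse_aggregate(global_params, kept_vectors, scale=1.0):
    """Subtract the aggregated consistent direction from the global model."""
    if not kept_vectors:
        return global_params
    agg = np.mean(kept_vectors, axis=0)
    return global_params - scale * agg   # reverse-direction update removes target knowledge

# Toy example: three clients' flattened unlearning task vectors.
rng = np.random.default_rng(0)
global_params = rng.normal(size=10)
task_vectors = [rng.normal(size=10) for _ in range(3)]
updated = reverse_aggregate(global_params, directional_filter(task_vectors))
```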