FedSST: Rethinking Fair Federated Graph Learning under Structural Shift
Abstract
Federated Graph Learning (FGL) offers a privacy-preserving paradigm for collaborative training on graph data, yet severe topological heterogeneity poses a critical threat to generalization fairness, often yielding a global model dominated by a subset of clients. This raises two issues: at the global level, aggregation bias disproportionately amplifies the influence of dominant clients; at the local level, blind optimization leads to inefficient and inequitable training. To address these challenges, we propose FedSST, an adaptive fairness framework. FedSST introduces a fair, structure-based signal that quantifies each client's contribution and, in turn, guides fair aggregation and adaptive local training. Extensive experiments across diverse cross-domain and cross-dataset settings demonstrate that FedSST improves both generalization fairness and overall model performance, outperforming a range of state-of-the-art methods.