

Decentralized Directed Collaboration for Personalized Federated Learning

Yingqi Liu · Yifan Shi · Qinglun Li · Baoyuan Wu · Xueqian Wang · Li Shen

Arch 4A-E Poster #354
Fri 21 Jun 10:30 a.m. PDT — noon PDT

Abstract: Personalized Federated Learning (PFL) aims to find, for each client, the personalized model that best fits its local data distribution. To avoid the single point of failure and communication bottleneck of server-based FL, we focus on Decentralized Personalized Federated Learning (DPFL), which performs distributed model training in a Peer-to-Peer (P2P) manner. Most personalization methods in DPFL rely on undirected topologies, in which clients communicate with each other symmetrically. However, heterogeneity in data, computation, and communication resources produces large variance among the personalized models, so undirected, symmetric aggregation of such models leads to suboptimal personalized performance and offers no convergence guarantee. To address these issues, we propose a directed-collaboration DPFL framework that combines stochastic gradient push with partial model personalization, called $\textbf{D}ecentralized$ $\textbf{Fed}erated$ $\textbf{P}artial$ $\textbf{G}radient$ $\textbf{P}ush$ ($\textbf{DFedPGP}$). It personalizes the linear classifier of the modern deep model to customize the local solution for each client and learns a consensus representation in a fully decentralized manner. Clients share gradients only with a subset of neighbors over directed, asymmetric topologies, which allows flexible choices for resource efficiency and yields better convergence. Theoretically, we show that the proposed DFedPGP achieves a superior convergence rate of $\mathcal{O}(\frac{1}{\sqrt{T}})$ in the general non-convex setting, and that tighter connectivity among clients speeds up convergence. Extensive experiments demonstrate that the proposed method achieves state-of-the-art (SOTA) accuracy under both data heterogeneity and computation-resource heterogeneity, confirming the efficiency of directed collaboration and partial gradient push.
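The directed, asymmetric communication described above rests on push-sum-style averaging, the mechanism underlying gradient push: each node pushes weighted shares of its value (and an auxiliary weight) along its out-links only, and the ratio of the two converges to the network average. The following is a minimal sketch of one such averaging loop on a directed ring; the topology, mixing weights, and variable names are illustrative assumptions, not the paper's actual DFedPGP implementation (which additionally personalizes the classifier layer and pushes gradients of the shared representation).

```python
import numpy as np

def push_sum_round(x, w, out_neighbors):
    """One push-sum round: node i splits (x_i, w_i) equally among itself and
    its out-neighbors; the ratios x_i / w_i converge to the average of the
    initial x values, even on directed (asymmetric) topologies."""
    new_x = np.zeros_like(x)
    new_w = np.zeros_like(w)
    for i in range(len(x)):
        targets = [i] + out_neighbors[i]      # push to self and out-neighbors
        share = 1.0 / len(targets)            # column-stochastic mixing weights
        for j in targets:
            new_x[j] += share * x[i]
            new_w[j] += share * w[i]
    return new_x, new_w

# Directed ring: node i pushes only to node (i + 1) % n -- no symmetric links.
n = 4
out_neighbors = [[(i + 1) % n] for i in range(n)]
x = np.array([1.0, 2.0, 3.0, 4.0])            # local values (e.g. gradients)
w = np.ones(n)                                # push-sum correction weights

for _ in range(50):
    x, w = push_sum_round(x, w, out_neighbors)

print(np.round(x / w, 4))                     # each ratio ~ 2.5, the global mean
```

Because the mixing weights are only column-stochastic (mass pushed out sums to one, but mass received need not), the raw values x drift; dividing by the co-evolved weights w corrects this bias, which is why gradient push works on directed graphs where symmetric gossip does not.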
