

Poster

Traceable Federated Continual Learning

Qiang Wang · Bingyan Liu · Yawen Li


Abstract:

Federated continual learning (FCL) is a mechanism for collaborative model training among clients that own dynamic data. While traditional FCL methods have proved effective, they do not account for task repeatability and fail to perform well in this practical scenario. In this paper, we propose a new paradigm, Traceable Federated Continual Learning (TFCL), which copes with repetitive tasks by tracing and augmenting them. Following the new paradigm, we develop TagFed, a framework that enables accurate and effective Tracing, augmentation, and Federation for TFCL. The key idea is to decompose the whole model into a series of marked sub-models, each optimized for one client task, before conducting group-wise knowledge aggregation, so that repetitive tasks can be located precisely and federated selectively for improved performance. Extensive experiments on our constructed benchmark demonstrate the effectiveness and efficiency of the proposed framework. (Source code will be released after notification.)
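To make the paradigm concrete, below is a minimal sketch of the tracing-then-selective-federation idea the abstract describes. It is not the authors' TagFed implementation: the marker representation (a normalized signature vector), the cosine-similarity tracing test, the `SIMILARITY_THRESHOLD` value, and the plain parameter-averaging aggregation are all hypothetical stand-ins chosen for illustration.

```python
# Illustrative sketch only: marker design, threshold, and aggregation rule
# are assumptions, not the paper's actual method.
import numpy as np

SIMILARITY_THRESHOLD = 0.9  # hypothetical tracing threshold


class SubModel:
    """One marked sub-model: a parameter vector plus a task marker."""

    def __init__(self, dim: int, marker: np.ndarray):
        self.params = np.zeros(dim)                    # sub-model parameters
        self.marker = marker / np.linalg.norm(marker)  # task signature

    def local_update(self, grad: np.ndarray, lr: float = 0.1):
        self.params -= lr * grad                       # one local optimization step


class Client:
    def __init__(self, dim: int):
        self.dim = dim
        self.sub_models: list[SubModel] = []

    def trace(self, task_marker: np.ndarray):
        """Return the stored sub-model whose marker matches, if any."""
        task_marker = task_marker / np.linalg.norm(task_marker)
        for sm in self.sub_models:
            if float(sm.marker @ task_marker) >= SIMILARITY_THRESHOLD:
                return sm                              # repetitive task located
        return None

    def handle_task(self, task_marker: np.ndarray) -> SubModel:
        sm = self.trace(task_marker)
        if sm is None:                                 # unseen task: new sub-model
            sm = SubModel(self.dim, task_marker)
            self.sub_models.append(sm)
        return sm


def group_wise_aggregate(sub_models: list) -> np.ndarray:
    """Selective federation: average only the sub-models traced to the
    same task group (plain FedAvg here, as a placeholder)."""
    merged = np.stack([sm.params for sm in sub_models]).mean(axis=0)
    for sm in sub_models:
        sm.params = merged.copy()
    return merged
```

Under these assumptions, a federation round would group sub-models across clients by matching markers and run `group_wise_aggregate` once per group, so knowledge is shared only among clients that have actually seen the (repeated) task, while non-matching sub-models keep training locally.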
