

Poster

Decoupled Distillation to Erase: A General Unlearning Method for Any Class-centric Tasks

Yu Zhou · Dian Zheng · Qijie Mo · Ren-Jie Lu · Kun-Yu Lin · Wei-Shi Zheng


Abstract:

In this work, we present DEcoupLEd Distillation To Erase (DELETE), a general and strong unlearning method for any class-centric task. To derive this, we first propose a theoretical framework that analyzes the general form of the unlearning loss and decomposes it into a forgetting term and a retention term. Through this framework, we show that a class of previous methods can largely be formulated as a loss that implicitly optimizes the forgetting term while lacking supervision for the retention term, disturbing the distribution of the pre-trained model and struggling to adequately preserve knowledge of the remaining classes. To address this, we refine the retention term using "dark knowledge" and propose a mask distillation unlearning method. By applying a mask to separate the forgetting logits from the retention logits, our approach optimizes both the forgetting and the refined retention components simultaneously, retaining knowledge of the remaining classes while ensuring thorough forgetting of the target class. Without access to the remaining data or additional intervention (i.e., as used in some prior works), we achieve state-of-the-art performance across various benchmarks. Moreover, DELETE is a general solution that can be applied to various downstream tasks, including face recognition, backdoor defense, and semantic segmentation, with strong performance.
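To make the mask distillation idea concrete, below is a minimal, hypothetical sketch of what such a loss could look like in PyTorch, based only on the abstract. The function name, the temperature parameter, and the specific choice of forgetting term (suppressing the target-class probability) are assumptions for illustration, not the authors' exact formulation; the paper defines the actual loss.

```python
import torch
import torch.nn.functional as F

def masked_distillation_unlearning_loss(student_logits, teacher_logits,
                                         forget_class, temperature=2.0):
    """Hypothetical sketch of a mask-based distillation unlearning loss.

    student_logits / teacher_logits: (batch, num_classes) logits from the model
    being unlearned and the frozen pre-trained model, respectively.
    forget_class: index of the class to erase.
    """
    num_classes = student_logits.size(1)

    # Mask separating the forgetting logit from the retention logits.
    retain_mask = torch.ones(num_classes, dtype=torch.bool,
                             device=student_logits.device)
    retain_mask[forget_class] = False

    # Retention term: match the teacher's "dark knowledge" over remaining classes.
    t_retain = teacher_logits[:, retain_mask] / temperature
    s_retain = student_logits[:, retain_mask] / temperature
    retention = F.kl_div(F.log_softmax(s_retain, dim=1),
                         F.softmax(t_retain, dim=1),
                         reduction="batchmean") * temperature ** 2

    # Forgetting term (one possible choice): suppress the probability the
    # student assigns to the target class.
    forgetting = F.softmax(student_logits, dim=1)[:, forget_class].mean()

    return retention + forgetting
```

The point of the mask is that the two terms are decoupled: the retention term sees only the non-target logits, so distilling the teacher's distribution over them does not re-inject evidence for the forgotten class.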
