

Poster

LoTUS: Large-Scale Machine Unlearning with a Taste of Uncertainty

Christoforos N. Spartalis · Theodoros Semertzidis · Efstratios Gavves · Petros Daras


Abstract:

This paper presents LoTUS, a novel Machine Unlearning (MU) method that eliminates the influence of training samples from pre-trained models. LoTUS smooths the model's prediction probabilities, mitigating the overconfidence that stems from data memorization, up to an information-theoretic bound. We evaluate LoTUS on Transformer and ResNet18 models, against seven baseline methods, on four public datasets. Beyond established MU benchmarks, we evaluate unlearning on a large-scale dataset (ImageNet1k) whose scale deters retraining, simulating real-world conditions. Moreover, we introduce the novel Retrain-Free Jensen-Shannon Divergence (RF-JSD) metric to enable evaluation under such real-world conditions. Experimental results show that LoTUS outperforms state-of-the-art methods in both efficiency and effectiveness. We will share the code.
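For intuition, the kind of prediction smoothing described above can be illustrated with temperature scaling, which flattens a softmax distribution toward uniform. This is a minimal sketch for illustration only, not the authors' exact mechanism; the function name and temperature value are assumptions.

```python
import numpy as np

def smooth_predictions(logits: np.ndarray, temperature: float = 2.0) -> np.ndarray:
    """Flatten a softmax distribution by raising the temperature.

    A temperature > 1 pushes probabilities toward uniform, reducing the
    overconfidence associated with memorized training samples. This is a
    generic illustration, not LoTUS itself.
    """
    z = logits / temperature
    z = z - z.max()                # subtract max for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()
```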

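RF-JSD is not fully specified in this abstract. As a hedged sketch, a retrain-free variant of the Jensen-Shannon Divergence could compare the unlearned model's prediction distribution against a reference distribution obtained without a retrained "gold" model (e.g., the original model evaluated on unseen test data); that pairing is an assumption. The helper below shows only the standard JSD computation.

```python
import numpy as np

def jensen_shannon_divergence(p: np.ndarray, q: np.ndarray,
                              eps: float = 1e-12) -> float:
    """Standard Jensen-Shannon Divergence between two discrete distributions.

    JSD(P, Q) = 0.5 * KL(P || M) + 0.5 * KL(Q || M), where M = (P + Q) / 2.
    A small eps guards against log(0). How the two distributions are chosen
    for the retrain-free setting is an assumption, not taken from the paper.
    """
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    kl_pm = float(np.sum(p * np.log((p + eps) / (m + eps))))
    kl_qm = float(np.sum(q * np.log((q + eps) / (m + eps))))
    return 0.5 * kl_pm + 0.5 * kl_qm
```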