

BrainWash: A Poisoning Attack to Forget in Continual Learning

Ali Abbasi · Parsa Nooralinejad · Hamed Pirsiavash · Soheil Kolouri

Arch 4A-E Poster #440
[ Project Page ]
Fri 21 Jun 10:30 a.m. PDT — noon PDT


Continual learning has garnered substantial attention within the deep learning community, offering promising solutions to the challenging problem of sequential learning. However, a largely unexplored aspect of this paradigm is its vulnerability to adversarial attacks, particularly those designed to induce forgetting. In this paper, we introduce "BrainWash," a novel poisoning attack specifically tailored to impose forgetting on a continual learner. By adding the BrainWash noise to various baselines, we demonstrate that a trained continual learner can be induced to forget its previously learned tasks catastrophically. A key feature of our approach is that the attacker does not require access to the data from previous tasks and only needs the model's current parameters and the data for the next task that the continual learner will undertake. Our extensive experiments underscore the efficacy of BrainWash, showcasing a degradation in performance across various regularization-based continual learning methods.
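The threat model described above can be sketched with a toy example. Everything specific here is a hypothetical stand-in, not the paper's method: the victim is a linear model taking a single SGD step per task, and since the attacker has no task-1 data, it maximizes a parameter-drift proxy for forgetting by random-search ascent over a bounded perturbation of the next task's inputs.

```python
import numpy as np

# Hypothetical toy illustration of the BrainWash threat model -- NOT the
# paper's actual algorithm. The attacker knows only the victim's current
# weights w and the upcoming task's data (X2, y2), and crafts bounded
# noise delta for X2 so that training on the poisoned task disturbs
# previously learned behavior.

rng = np.random.default_rng(0)

def mse(w, X, y):
    r = X @ w - y
    return float(r @ r) / len(y)

def sgd_step(w, X, y, lr=0.1):
    # One gradient step of the victim's training on a task.
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Task 1 (never seen by the attacker) and task 2 (known to the attacker).
X1 = rng.normal(size=(50, 5)); w1 = rng.normal(size=5); y1 = X1 @ w1
X2 = rng.normal(size=(50, 5)); y2 = X2 @ (w1 + 0.1 * rng.normal(size=5))

# Victim learns task 1.
w = np.zeros(5)
for _ in range(300):
    w = sgd_step(w, X1, y1)

# Attacker objective: task-1 data is unavailable, so maximize the parameter
# drift ||w' - w||^2 caused by one poisoned update -- a crude forgetting
# proxy assumed for this sketch, not the paper's objective.
def drift(delta):
    return float(np.sum((sgd_step(w, X2 + delta, y2) - w) ** 2))

delta = np.zeros_like(X2)
for _ in range(200):
    cand = np.clip(delta + 0.05 * rng.normal(size=X2.shape), -0.5, 0.5)
    if drift(cand) > drift(delta):  # simple random-search ascent
        delta = cand

w_clean = sgd_step(w, X2, y2)           # victim trains on clean task 2
w_poison = sgd_step(w, X2 + delta, y2)  # victim trains on poisoned task 2
print("task-1 loss after clean update:   ", mse(w_clean, X1, y1))
print("task-1 loss after poisoned update:", mse(w_poison, X1, y1))
```

Even this crude drift proxy makes the post-update task-1 loss markedly worse than the clean update, mirroring the attack's key property: no access to prior-task data is needed, only the current parameters and the next task's inputs.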
