Poster
Invisible Backdoor Attack against Self-supervised Learning
Hanrong Zhang · Zhenting Wang · Boheng Li · Fulin Lin · Tingxu Han · Mingyu Jin · Chenlu Zhan · Mengnan Du · Hongwei Wang · Shiqing Ma
Self-supervised learning (SSL) models are vulnerable to backdoor attacks. Existing backdoor attacks that are effective against SSL often rely on noticeable triggers, such as colored patches or visible noise, which are susceptible to human inspection. This paper proposes an imperceptible and effective backdoor attack against self-supervised models. We first find that existing imperceptible triggers designed for supervised learning are less effective at compromising self-supervised models. We then identify that this ineffectiveness stems from the overlap between the distributions of the backdoored samples and the augmented samples used in SSL. Building on this insight, we design an attack that uses optimized triggers disentangled from the augmentation transformations used in SSL while remaining imperceptible to human vision. Experiments on five datasets and six SSL algorithms demonstrate that our attack is highly effective and stealthy. It is also strongly resistant to existing backdoor defenses.
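The abstract does not spell out the trigger-optimization objective, so the following is only a minimal sketch of the general idea under stated assumptions: a shared additive trigger `delta` is optimized so that embeddings of triggered images move away from the distribution of embeddings of SSL-augmented views, while an L-infinity budget `epsilon` keeps the trigger imperceptible. The `encoder`, `augment` pipeline, `epsilon` value, and `attack_step` helper are all illustrative placeholders, not the paper's actual method.

```python
import torch
import torch.nn as nn
import torchvision.transforms as T

# Placeholder SSL backbone mapping images to embeddings (stand-in for a real encoder).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))

# Illustrative SSL-style augmentation pipeline (SimCLR-like choices).
augment = T.Compose([
    T.RandomResizedCrop(32, scale=(0.2, 1.0)),
    T.RandomHorizontalFlip(),
    T.ColorJitter(0.4, 0.4, 0.4, 0.1),
])

epsilon = 8 / 255  # assumed imperceptibility budget (L_inf)
delta = torch.zeros(1, 3, 32, 32, requires_grad=True)  # shared trigger pattern
optimizer = torch.optim.Adam([delta], lr=1e-2)

def attack_step(x):
    """One optimization step: push triggered embeddings away from the
    distribution of augmented-view embeddings (a generic surrogate for the
    paper's 'disentanglement' objective), then re-project the trigger
    into the imperceptibility budget."""
    with torch.no_grad():
        # Average embedding over several augmented views of the clean batch.
        aug_feats = torch.stack([encoder(augment(x)) for _ in range(4)]).mean(0)
    trig_feats = encoder(torch.clamp(x + delta, 0.0, 1.0))
    # Maximize the distance between triggered and augmented embeddings.
    loss = -torch.norm(trig_feats - aug_feats, dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        delta.clamp_(-epsilon, epsilon)  # keep the trigger imperceptible
    return loss.item()

# Usage on a random batch of CIFAR-sized images.
x = torch.rand(16, 3, 32, 32)
for _ in range(10):
    attack_step(x)
```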