

Poster

Detecting Backdoors in Pre-Trained Encoders

Shiwei Feng · Guanhong Tao · Siyuan Cheng · Guangyu Shen · Xiangzhe Xu · Yingqi Liu · Kaiyuan Zhang · Shiqing Ma · Xiangyu Zhang

West Building Exhibit Halls ABC 381

Abstract:

Self-supervised learning in computer vision trains on unlabeled data, such as images or (image, text) pairs, to obtain an image encoder that produces high-quality embeddings for input data. Emerging backdoor attacks against encoders expose crucial vulnerabilities of self-supervised learning, since downstream classifiers (even those further trained only on clean data) may inherit backdoor behaviors from the encoders. Existing backdoor detection methods mainly focus on supervised learning settings and cannot handle pre-trained encoders, especially when input labels are unavailable. In this paper, we propose DECREE, the first backdoor detection approach for pre-trained encoders, requiring neither classifier heads nor input labels. We evaluate DECREE on over 400 encoders trojaned under 3 paradigms. We show the effectiveness of our method on image encoders pre-trained on ImageNet and on OpenAI’s CLIP, which was pre-trained on 400 million (image, text) pairs. Our method consistently achieves high detection accuracy even with limited or no access to the pre-training dataset.
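To make the inheritance risk concrete, here is a minimal, hypothetical PyTorch sketch (not DECREE itself and not the paper's attack setup) of how a linear head trained only on clean data, on top of a frozen trojaned encoder, can still collapse predictions on triggered inputs. The encoder, trigger patch, dimensions, and hyperparameters are all illustrative assumptions.

```python
# Hypothetical illustration: a downstream classifier inherits a backdoor
# from a frozen, trojaned encoder despite being trained on clean data only.
import torch
import torch.nn as nn

class TrojanedEncoder(nn.Module):
    """Stand-in for a pre-trained image encoder with a planted backdoor:
    inputs carrying the trigger patch are mapped to a fixed embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, dim))
        self.register_buffer("target_embedding", torch.randn(dim))

    def forward(self, x):
        z = self.backbone(x)
        # Hypothetical trigger: an all-white 4x4 patch in the top-left corner.
        triggered = (x[:, :, :4, :4] > 0.99).flatten(1).all(dim=1)
        z[triggered] = self.target_embedding
        return z

encoder = TrojanedEncoder().eval()
for p in encoder.parameters():
    p.requires_grad_(False)  # downstream user freezes the encoder

# Downstream user fits a linear head on *clean* labeled data only.
head = nn.Linear(128, 10)
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
clean_x = torch.rand(256, 3, 32, 32)
clean_y = torch.randint(0, 10, (256,))
for _ in range(50):
    loss = nn.functional.cross_entropy(head(encoder(clean_x)), clean_y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At test time, stamping the trigger steers predictions toward one class,
# even though the head never saw any poisoned data.
test_x = torch.rand(8, 3, 32, 32)
test_x[:, :, :4, :4] = 1.0  # apply the trigger patch
print(head(encoder(test_x)).argmax(dim=1))  # typically a single repeated label
```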
