

Poster

A3: Few-shot Prompt Learning of Unlearnable Examples with Cross-Modal Adversarial Feature Alignment

Wang Xuan · Xitong Gao · Dongping Liao · Tianrui Qin · Yu-liang Lu · Cheng-Zhong Xu


Abstract: In the age of pervasive machine learning applications, protecting digital content from unauthorized use has become a pressing concern. Unlearnable examples (UEs), i.e., data modified with imperceptible perturbations to inhibit model training while preserving human usability, have emerged as a promising approach. However, existing UE methods assume that unauthorized trainers have extensive exposure to UEs or that models are trained from scratch, which may not hold in practical scenarios. This paper investigates the effectiveness of UEs under the few-shot learning paradigm, pitting them against visual prompt learning (VPL) models that leverage pretrained vision-language models (VLMs), such as CLIP, which can generalize to new classes with minimal data. To this end, we introduce an adaptive UE framework that generates unlearnable examples specifically targeting the VPL process. In addition, we propose a novel UE countermeasure, A3, with cross-modal adversarial feature alignment, specifically designed to circumvent UEs under few-shot VPL. Experimental evaluations on 7 datasets show that A3 outperforms existing VPL methods, achieving up to 33% higher performance when learning from UEs. For example, under $\ell_\infty$-bounded EM perturbations, A3 achieves an average harmonic-mean accuracy of 82.43% across the 7 datasets, compared to CoCoOp's baseline of 65.47%. Our findings highlight the limitations of existing UEs against VPL and lay the foundation for future data protection mechanisms.
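
For readers unfamiliar with the two quantities in the results above, here is a minimal sketch. The harmonic-mean metric is the standard base-to-new summary used in few-shot prompt-learning evaluations, and error-minimizing (EM) perturbations follow the classic min-min unlearnable-example objective. The function names, step sizes, and loop budget below are illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn.functional as F

def harmonic_mean(base_acc: float, new_acc: float) -> float:
    """Harmonic mean of base-class and new-class accuracy, the usual
    summary metric in few-shot prompt-learning evaluations."""
    return 2 * base_acc * new_acc / (base_acc + new_acc)

def em_perturbation(model, images, labels, eps=8/255, alpha=1/255, steps=20):
    """One inner round of error-minimizing (EM) perturbation search.

    Sketch of the min-min objective behind ell_inf-bounded EM unlearnable
    examples: find a delta inside the eps-ball that *minimizes* the training
    loss, so the perturbed data looks 'already learned' and stalls training.
    All hyperparameters here are illustrative placeholders.
    """
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(images + delta), labels)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= alpha * grad.sign()  # descend, not ascend, the loss
            delta.clamp_(-eps, eps)       # stay within the ell_inf ball
            delta.copy_((images + delta).clamp(0, 1) - images)  # keep pixels valid
    return delta.detach()
```

In the adaptive setting the abstract describes, the outer loop would interleave such perturbation updates with updates to the prompt-learning model, rather than a classifier trained from scratch.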
