Poster
Harnessing Global-local Collaborative Adversarial Perturbation for Anti-Customization
Long Xu · Jiakai Wang · Haojie Hao · Haotong Qin · Jiejie Zhao · Xianglong Liu
Though achieving significant success in personalized image synthesis, Latent Diffusion Models (LDMs) pose substantial social risks through unauthorized misuse (e.g., face theft). To counter these threats, Anti-Customization (AC) methods that exploit adversarial perturbations have been proposed. Unfortunately, existing AC methods show insufficient defense ability because they ignore hierarchical characteristics, i.e., global feature correlations and local facial attributes, leading to weak resistance against concept transfer and semantic theft by customization methods. To address this problem, we propose a Global-local collaborative Anti-Customization (GoodAC) framework that generates powerful adversarial perturbations by disturbing both feature correlations and facial attributes. To enhance resistance to concept transfer, we disrupt the spatial correlation of the perceptual features that form the basis of model generation at the global level, producing adversarial camouflage that is highly resistant to concept transfer. To improve resistance to semantic theft, leveraging the fact that facial attributes are personalized, we design a precise, personalized facial attribute distortion strategy at the local level, focusing the attack on the individual's image structure to generate strong camouflage. Extensive experiments on various LDM-based customization methods, including DreamBooth, LoRA, and Textual Inversion, demonstrate that GoodAC outperforms other state-of-the-art approaches by large margins, e.g., over 50% improvement on ISM.
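The abstract does not specify GoodAC's optimization procedure, but AC methods of this family typically craft a bounded adversarial perturbation by gradient ascent on a feature-distortion loss. Below is a minimal, generic PGD-style sketch under stated assumptions: `feature_fn` stands in for a perceptual feature extractor (here a toy linear map, not the authors' model), `grad_fn` is its analytic loss gradient, and the epsilon/step-size values are illustrative, not taken from the paper.

```python
import numpy as np

def pgd_perturb(x, grad_fn, eps=8 / 255, alpha=2 / 255, steps=10, seed=0):
    """Projected gradient ascent: maximize a feature-distortion loss
    within an L_inf ball of radius eps around the clean image x."""
    rng = np.random.default_rng(seed)
    # Random start inside the ball (at delta = 0 the gradient of the
    # distortion loss vanishes, so PGD needs a nonzero initialization).
    delta = rng.uniform(-eps, eps, size=x.shape)
    for _ in range(steps):
        g = grad_fn(x + delta)                              # ascent direction
        delta = np.clip(delta + alpha * np.sign(g), -eps, eps)
    return np.clip(x + delta, 0.0, 1.0)                     # keep valid pixels

# Toy "feature extractor" f(x) = W @ x; the distortion loss is
# ||f(x) - f(x0)||^2, whose gradient w.r.t. x is 2 W^T (W x - W x0).
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
x0 = rng.random(8)
feature_fn = lambda x: W @ x
grad_fn = lambda x: 2 * W.T @ (feature_fn(x) - feature_fn(x0))

x_adv = pgd_perturb(x0, grad_fn)
distortion = np.linalg.norm(feature_fn(x_adv) - feature_fn(x0))
```

The perturbation stays imperceptibly small (bounded by eps in L_inf) while pushing the image's features away from the clean ones, which is the mechanism that degrades downstream customization.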