UniDef: Universal Defense Against Unauthorized Image Manipulation
Abstract
Image protection against unauthorized diffusion-based editing has achieved encouraging progress. However, existing methods face two critical limitations: (1) They only disturb the denoising direction at individual local steps, so the generated images still retain the original or edited semantics. (2) Their optimization relies heavily on model-specific gradients, limiting transferable protection across different models and tasks. To address these challenges, we propose a Universal Defense (UniDef) framework for protection against unauthorized image manipulation. Specifically, we first observe that different variants of diffusion models tend to pursue a consistent distribution objective over the complete denoising process. Based on this observation, we design a Consistent Distribution Deviation strategy that perturbs the diffusion direction globally across the denoising process, thereby disrupting the overall image semantics. Furthermore, to mitigate model dependency, we devise a Finite Difference-based Jacobian Estimation module that approximates the global gradient in a model-agnostic manner, ensuring more transferable protection. Benefiting from these designs, our method yields generated images that no longer preserve the original semantics, while possessing excellent generalization. Extensive experiments demonstrate that UniDef not only outperforms existing methods but also provides universal protection across diverse models and tasks.
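To illustrate the model-agnostic idea behind finite difference-based gradient estimation, the sketch below approximates the gradient of a black-box loss via central finite differences along random directions. This is a minimal, hypothetical illustration only; the function name `fd_gradient`, the random-direction scheme, and all parameters are assumptions, not the paper's actual Jacobian-estimation module.

```python
import numpy as np

def fd_gradient(loss_fn, x, eps=1e-3, n_samples=32, rng=None):
    """Estimate the gradient of a black-box scalar loss at x.

    Uses central finite differences along random unit directions,
    requiring only loss evaluations (no backpropagation through the
    model), which is what makes the estimate model-agnostic.
    Hypothetical sketch; not the paper's exact scheme.
    """
    rng = np.random.default_rng(rng)
    grad = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        # random unit direction
        u = rng.standard_normal(x.shape)
        u /= np.linalg.norm(u)
        # central difference: directional derivative along u
        delta = (loss_fn(x + eps * u) - loss_fn(x - eps * u)) / (2 * eps)
        grad += delta * u
    # averaged estimate points along the true gradient (up to scale)
    return grad / n_samples
```

Because only forward evaluations of `loss_fn` are needed, the same routine applies unchanged to any diffusion model or editing task, at the cost of extra function queries per estimate.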