Poster
Fingerprinting Denoising Diffusion Probabilistic Models
Huan Teng · Yuhui Quan · Chengyu Wang · Jun Huang · Hui Ji
Diffusion models, especially Denoising Diffusion Probabilistic Models (DDPMs) and their variants, are prevalent tools in generative AI, making the protection of their Intellectual Property (IP) rights increasingly important. Most existing IP protection methods for DDPMs are invasive, e.g., watermarking methods that alter model parameters, which raises concerns about performance degradation and requires extra computational resources for retraining or fine-tuning. In this paper, we propose the first non-invasive fingerprinting scheme for DDPMs, requiring no parameter changes or fine-tuning and leaving the generation quality of DDPMs intact. We introduce a discriminative and robust fingerprint latent space, built on a carefully designed crossing route of samples that spans the performance border zone of DDPMs, with only black-box access to the diffusion denoiser required during ownership verification. Extensive experiments demonstrate that our fingerprinting approach is both robust against common attacks and distinctive across various DDPMs, providing an alternative for protecting DDPMs' IP rights without compromising their performance or integrity.
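As a loose illustration of what non-invasive, black-box fingerprint extraction could look like in practice, the sketch below queries a DDPM denoiser on a fixed set of probe inputs and hashes a quantized summary of its responses. The probe construction, quantization, and hashing here are generic placeholders, not the paper's crossing-route or latent-space design; the denoiser is assumed to be a standard epsilon-predictor callable `denoiser(x_t, t)`.

```python
# Illustrative sketch only (not the authors' method): a generic way to derive a
# fingerprint from a black-box DDPM denoiser using fixed probes, without
# touching or fine-tuning any model parameters.
import hashlib
import torch


def extract_fingerprint(denoiser, num_probes: int = 16,
                        image_shape=(3, 32, 32), t: int = 500,
                        seed: int = 0) -> str:
    """Query the (black-box) denoiser on fixed pseudo-random probes and hash
    a coarsely quantized summary of its outputs into a fingerprint string."""
    gen = torch.Generator().manual_seed(seed)           # fixed seed => reproducible probes
    probes = torch.randn(num_probes, *image_shape, generator=gen)
    timesteps = torch.full((num_probes,), t, dtype=torch.long)

    with torch.no_grad():
        eps = denoiser(probes, timesteps)                # black-box query only

    # Coarse quantization so that small, benign perturbations of the model
    # (e.g., light fine-tuning or pruning) are less likely to flip the hash.
    summary = torch.round(eps.flatten() * 10).to(torch.int8).numpy().tobytes()
    return hashlib.sha256(summary).hexdigest()


def verify_ownership(denoiser, registered_fp: str, **kwargs) -> bool:
    """Re-extract the fingerprint from a suspect model and compare it to the
    fingerprint registered by the model owner."""
    return extract_fingerprint(denoiser, **kwargs) == registered_fp
```

In this toy version, ownership verification reduces to re-running the same fixed queries against a suspect model; the actual scheme instead builds a fingerprint latent space from samples along the crossing route described in the abstract.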