

Paper in Workshop: ReGenAI: Second Workshop on Responsible Generative AI

Dynamic watermarks in images generated by diffusion models

Yunzhuo Chen · Jordan Vice · Naveed Akhtar · Nur Haldar


Abstract:

High-fidelity text-to-image diffusion models have revolutionized visual content generation, but their widespread use raises significant copyright concerns. To address these challenges, we propose a novel multi-stage watermarking framework for diffusion models, designed to establish copyright and trace generated images back to their source. Our multi-stage watermarking technique involves embedding: (i) a fixed watermark that is localized in the diffusion model's learned noise distribution, and (ii) a human-imperceptible, dynamic watermark in generated images, leveraging a fine-tuned decoder. Using the Structural Similarity Index Measure (SSIM) and cosine similarity, we adapt the watermark's shape and color to the generated content while maintaining robustness. We demonstrate that our method enables reliable source verification through watermark classification, even when the dynamic watermark is adjusted for content-specific variations. To support further research, we generate a dataset of watermarked images and introduce a methodology to evaluate the statistical impact of watermarking on generated content. Additionally, we rigorously test our framework against various attack scenarios, demonstrating its robustness and minimal impact on image quality.
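The abstract does not specify how SSIM and cosine similarity are combined to adapt the dynamic watermark, so the following is a minimal illustrative sketch, not the authors' method. It assumes an additive pixel-space blend, tints the watermark toward the image's mean color by an amount driven by cosine dissimilarity, and halves the blend strength until SSIM between the original and watermarked image stays above a quality floor; the function names and thresholds are hypothetical.

```python
# Minimal sketch (NOT the paper's implementation): content-adaptive
# watermark embedding governed by cosine similarity and an SSIM floor.
import numpy as np
from skimage.metrics import structural_similarity as ssim


def cosine_sim(a, b, eps=1e-8):
    """Cosine similarity between two flattened vectors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))


def embed_dynamic_watermark(image, mark, ssim_floor=0.98):
    """Blend `mark` (HxWx3 in [0,1]) into `image` (HxWx3 in [0,1]).

    The mark's color is tinted toward the image's mean color, more
    strongly when the two color vectors disagree (low cosine similarity).
    The blend strength alpha is halved until the watermarked image keeps
    SSIM(image, marked) >= ssim_floor, i.e. stays imperceptible.
    """
    img_color = image.reshape(-1, 3).mean(axis=0)
    mark_color = mark.reshape(-1, 3).mean(axis=0)
    # Tint more aggressively when the mark's color clashes with the content.
    tint = 1.0 - cosine_sim(img_color, mark_color)
    tinted = np.clip(mark + tint * (img_color - mark_color), 0.0, 1.0)

    alpha = 0.10  # initial blend strength (assumed starting point)
    while alpha > 1e-3:
        marked = np.clip((1 - alpha) * image + alpha * tinted, 0.0, 1.0)
        if ssim(image, marked, channel_axis=-1, data_range=1.0) >= ssim_floor:
            return marked, alpha
        alpha *= 0.5  # weaken the mark until the quality floor is met
    return image, 0.0  # embedding would visibly degrade the image


# Usage with random arrays standing in for a diffusion model's output:
rng = np.random.default_rng(0)
img = rng.random((256, 256, 3))
wm = rng.random((256, 256, 3))
marked, strength = embed_dynamic_watermark(img, wm)
print(f"embedded at alpha={strength:.4f}")
```

In this sketch the SSIM floor acts as the imperceptibility constraint while the cosine-similarity tint supplies the content adaptation; the paper's actual framework additionally embeds a fixed watermark in the model's learned noise distribution and recovers the dynamic mark with a fine-tuned decoder, neither of which is modeled here.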
