

Poster

FakeInversion: Learning to Detect Images from Unseen Text-to-Image Models by Inverting Stable Diffusion

George Cazenavette · Avneesh Sud · Thomas Leung · Ben Usman


Abstract:

Due to the high potential for abuse, the task of detecting synthetic images has lately become of great interest to the research community. Unfortunately, existing image-space detectors quickly become obsolete as new high-fidelity text-to-image models are developed at a rapid pace. In this work, we propose a new synthetic image detector that uses features obtained by inverting an open-source pre-trained Stable Diffusion model. We show that these inversion features enable our detector to generalize well to unseen generators of high visual fidelity (e.g., DALL·E 3) even when the detector is trained only on lower-fidelity fake images generated via Stable Diffusion. We show that the resulting detector achieves a new state of the art across multiple training and evaluation setups. Moreover, we introduce a new, challenging evaluation protocol that uses reverse image search to remove stylistic and thematic biases from the detector evaluation. We show that the resulting evaluation scores align well with detectors' in-the-wild performance, and we release these datasets as public benchmarks for future research.
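The core mechanism the abstract describes, extracting features by running a diffusion model's inversion process on a query image, can be sketched with the standard deterministic DDIM inversion update. The sketch below is a minimal NumPy illustration, not the paper's implementation: the `eps_model` placeholder stands in for Stable Diffusion's UNet noise predictor, the latent shape and alpha-bar schedule are toy values, and all function names here are hypothetical.

```python
import numpy as np

def ddim_invert_step(x_t, eps, alpha_t, alpha_next):
    """One deterministic DDIM inversion step (eta = 0): move the
    latent x_t toward the higher-noise latent x_{t+1}, given the
    model's noise prediction eps at timestep t."""
    # predicted clean latent x_0 implied by x_t and eps
    x0_pred = (x_t - np.sqrt(1.0 - alpha_t) * eps) / np.sqrt(alpha_t)
    # re-noise the predicted x_0 to the next (noisier) timestep
    return np.sqrt(alpha_next) * x0_pred + np.sqrt(1.0 - alpha_next) * eps

def invert(x0, alphas, eps_model):
    """Run the full inversion trajectory. The intermediate latents
    (and noise predictions) along this trajectory are the kind of
    features a downstream real-vs-fake classifier could consume."""
    traj = [x0]
    x = x0
    for t in range(len(alphas) - 1):
        eps = eps_model(x, t)
        x = ddim_invert_step(x, eps, alphas[t], alphas[t + 1])
        traj.append(x)
    return traj

# Toy run: a random "latent" and a stand-in linear noise predictor.
rng = np.random.default_rng(0)
latent = rng.standard_normal((4, 8, 8))
alphas = np.linspace(0.999, 0.1, 10)  # decreasing alpha-bar schedule
eps_model = lambda x, t: 0.1 * x      # hypothetical UNet placeholder
features = invert(latent, alphas, eps_model)
print(len(features))  # one latent per timestep along the trajectory
```

In the actual method, the inversion would run in Stable Diffusion's latent space with its text-conditioned UNet; this sketch only shows the shape of the computation that produces inversion features.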
