

LaRE^2: Latent Reconstruction Error Based Method for Diffusion-Generated Image Detection

Yunpeng Luo · Junlong Du · Ke Yan · Shouhong Ding

Arch 4A-E Poster #233
Thu 20 Jun 5 p.m. PDT — 6:30 p.m. PDT

Abstract: The evolution of Diffusion Models has dramatically improved image generation quality, making it increasingly difficult to differentiate between real and generated images. This development, while impressive, also raises significant privacy and security concerns. In response, we propose a novel Latent REconstruction error guided feature REfinement method ($\textbf{LaRE}^2$) for detecting diffusion-generated images. We introduce the Latent Reconstruction Error (LaRE), the first reconstruction-error-based feature computed in the latent space for generated-image detection. LaRE surpasses existing methods in feature extraction efficiency while preserving the crucial cues needed to differentiate the real from the fake. To exploit LaRE, we propose an Error-Guided feature REfinement module (EGRE), which refines the image feature under the guidance of LaRE to enhance its discriminativeness. EGRE uses an align-then-refine mechanism that refines the image feature for generated-image detection from both spatial and channel perspectives. Extensive experiments on the large-scale GenImage benchmark demonstrate the superiority of $\textbf{LaRE}^2$, which surpasses the best SoTA method by up to $\textbf{11.9\%}$/$\textbf{12.1\%}$ average ACC/AP across 8 different image generators. LaRE also outperforms existing methods in feature extraction cost, delivering an impressive $\textbf{8}\times$ speedup.
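The core idea, a reconstruction error computed in latent space used as a spatial cue for detection, can be sketched as follows. This is a toy illustration only: `encode`, `add_noise`, and `denoise_step` are hypothetical stand-ins for a real VAE encoder and a pretrained diffusion denoiser, and the error map here is a generic squared difference, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(image):
    # Hypothetical VAE encoder stand-in: just a 2x downsampled copy.
    return image[::2, ::2]

def add_noise(latent, sigma, rng):
    # Forward diffusion step: perturb the latent with Gaussian noise.
    return latent + sigma * rng.standard_normal(latent.shape)

def denoise_step(noisy_latent):
    # Hypothetical one-step denoiser: a real diffusion model would
    # predict the clean latent; here we crudely smooth with a 3x3 box filter.
    h, w = noisy_latent.shape
    padded = np.pad(noisy_latent, 1, mode="edge")
    return sum(padded[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0

def lare_map(image, sigma=0.1, rng=rng):
    """Per-position squared reconstruction error in latent space.

    Generated images tend to be reconstructed more faithfully by the
    diffusion model than real ones, so this map carries a real/fake cue
    that can guide spatial and channel feature refinement.
    """
    z = encode(image)
    z_noisy = add_noise(z, sigma, rng)
    z_rec = denoise_step(z_noisy)
    return (z - z_rec) ** 2

image = rng.random((8, 8))
err = lare_map(image)
print(err.shape)       # latent-resolution error map: (4, 4)
print((err >= 0).all())
```

A downstream detector would use such an error map as guidance (e.g., as attention weights) when refining image features, rather than classifying on the raw error alone.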
