A Sanity Check for Multi-In-Domain Face Forgery Detection in the Real World
Abstract
Existing methods for deepfake detection aim to develop generalizable detectors. Although ``generalizable'' may be the ultimate once-and-for-all target, with limited training forgeries and domains it appears idealistic to expect generalization covering entirely unseen variations, especially given the diversity, advancement, and vast volume of real-world deepfakes. Introducing large-scale multi-domain data for training is therefore both feasible and important for real-world applications. However, in such a multi-domain scenario, the differences between domains, rather than the subtle real/fake distinctions, dominate the feature space. As a result, although detectors can \textbf{relatively} separate real and fake within each domain (i.e., high AUC), they struggle with single-image real/fake judgments under domain-unspecified conditions (i.e., low ACC). In this paper, we first define a new research paradigm named \textbf{Multi-In-Domain Face Forgery Detection (MID-FFD)}, which provides sufficient volumes of real-fake domains for training; the detector must then deliver definitive real/fake judgments on domain-unspecified inputs, simulating the frame-by-frame independent detection scenario of the real world. Meanwhile, to address the domain-dominance issue, we propose a two-stage, model-agnostic framework termed DevDet (\underline{Dev}eloper for \underline{Det}ector) that amplifies real/fake differences and makes them dominant in the feature space. DevDet consists of a Face Forgery Developer (FFDev) and a Dose-Adaptive detector Fine-Tuning strategy (DAFT). Experiments demonstrate that our method effectively predicts real versus fake under the MID-FFD scenario \textbf{while} maintaining the original generalization ability to unseen data.
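The high-AUC/low-ACC phenomenon described above can be illustrated with a toy numeric sketch (this is an illustration of the metric gap only, not the paper's method; all scores and domain names are hypothetical): when each domain's score distribution sits at a different offset, real/fake are perfectly ranked within each domain, yet no single domain-unspecified threshold separates the pooled data.

```python
# Toy illustration (hypothetical scores, not the paper's data): per-domain
# AUC can be perfect while pooled single-threshold accuracy is at chance,
# because the domain offset dominates the score distribution.

def auc(real_scores, fake_scores):
    # Pairwise-ranking AUC: fraction of (fake, real) pairs where the
    # fake sample receives a higher score (ties count as 0.5).
    pairs = [(f > r) + 0.5 * (f == r) for f in fake_scores for r in real_scores]
    return sum(pairs) / len(pairs)

# Hypothetical detector "fakeness" scores on two training domains.
dom_a = {"real": [0.10, 0.20], "fake": [0.35, 0.40]}  # low-offset domain
dom_b = {"real": [0.60, 0.65], "fake": [0.85, 0.90]}  # high-offset domain

# Within each domain, real and fake are perfectly ranked.
print(auc(dom_a["real"], dom_a["fake"]))  # 1.0
print(auc(dom_b["real"], dom_b["fake"]))  # 1.0

# Pool both domains and apply one domain-unspecified threshold.
scores = dom_a["real"] + dom_a["fake"] + dom_b["real"] + dom_b["fake"]
labels = [0, 0, 1, 1, 0, 0, 1, 1]  # 0 = real, 1 = fake
preds = [int(s > 0.5) for s in scores]
acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
print(acc)  # 0.5 -- chance level: dom_a fakes and dom_b reals are both missed
```

Here the pooled per-image accuracy collapses to 0.5 even though ranking within each domain is perfect, which is exactly the single-image, frame-by-frame failure mode that motivates amplifying real/fake differences until they, rather than the domain gap, dominate the feature space.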