Paper in Workshop: AI for Creative Visual Content Generation, Editing and Understanding
WaveDIF: Wavelet sub-band based Deepfake Identification in Frequency Domain
Anurag Dutta · Arnab Kumar Das · Ruchira Naskar · Rajat Subhra Chakraborty
As deepfakes converge ever more closely with real content, their identification becomes increasingly demanding. Numerous deepfake detection techniques have been proposed recently, most of which operate in the spatio-temporal domain. While these methods have shown promise, many of them neglect telltale artifacts that exhibit distinct patterns in the frequency domain. This research proposes WaveDIF, a strictly frequency-domain, lightweight deepfake video detection algorithm based on wavelet sub-band energies. In WaveDIF, for feature extraction, each video undergoes a Discrete Fourier Transform to filter out high-frequency noisy details (quite evident in deepfakes). These representations are then decomposed into their respective wavelet sub-bands -- LL (Low-Low), LH (Low-High), HL (High-Low), and HH (High-High) -- by passing them through a Haar filter, after which the energy of each sub-band is computed. These energy values are used to learn a linear decision boundary (via regression analysis), which is then used for classification. This yields an interpretable, lightweight, deterministic technique for detecting synthesized videos, while achieving accuracy comparable to the state-of-the-art. Experimental results on popular deepfake video datasets show over 92% accuracy for in-dataset evaluation and 88% accuracy for cross-dataset evaluation.
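To illustrate the pipeline described above, the following is a minimal sketch in Python of how the feature extraction and classification stages could be arranged: a DFT-based low-pass step, a single-level Haar wavelet decomposition into LL/LH/HL/HH sub-bands, per-sub-band energy computation, and a linear decision boundary. The circular cutoff mask, per-frame energy averaging, and the choice of logistic regression as the regression-based linear classifier are assumptions for illustration, not the authors' exact implementation.

# Hypothetical sketch of a WaveDIF-style pipeline (assumptions noted above).
import numpy as np
import pywt
from sklearn.linear_model import LogisticRegression

def lowpass_fft(frame: np.ndarray, cutoff_ratio: float = 0.25) -> np.ndarray:
    """Suppress high-frequency content of a grayscale frame via the DFT.
    The circular mask and cutoff_ratio are assumed, not taken from the paper."""
    spectrum = np.fft.fftshift(np.fft.fft2(frame))
    h, w = frame.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = radius <= cutoff_ratio * min(h, w) / 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

def subband_energies(frame: np.ndarray) -> np.ndarray:
    """Single-level Haar decomposition followed by per-sub-band energy (sum of squares)."""
    LL, (LH, HL, HH) = pywt.dwt2(frame, "haar")
    return np.array([np.sum(b ** 2) for b in (LL, LH, HL, HH)])

def video_features(frames: list[np.ndarray]) -> np.ndarray:
    """Pool the 4-D energy vector over all frames of a video (mean pooling assumed)."""
    return np.mean([subband_energies(lowpass_fft(f)) for f in frames], axis=0)

def train_detector(videos: list[list[np.ndarray]], labels: list[int]) -> LogisticRegression:
    """Learn a linear decision boundary on the energy features.
    Logistic regression is a stand-in for the paper's unspecified regression analysis."""
    X = np.stack([video_features(v) for v in videos])
    return LogisticRegression().fit(X, labels)

In this sketch, each video reduces to a four-dimensional energy vector, which is what keeps the resulting detector lightweight and its decision boundary easy to interpret.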