SAGA: Source Attribution of Generative AI Videos
Rohit Kundu ⋅ Vishal Mohanty ⋅ Hao Xiong ⋅ Shan Jia ⋅ Athula Balachandran ⋅ Amit K. Roy-Chowdhury
Abstract
The proliferation of generative AI has led to hyper-realistic synthetic videos, escalating misuse risks and outstripping binary real/fake detectors. We introduce $\textcolor{blue}{\texttt{SAGA}}$ ($\underline{S}$ource $\underline{A}$ttribution of $\underline{G}$enerative $\underline{A}$I videos), the first comprehensive framework to address the urgent need for AI-generated $\textit{video source attribution}$ at scale. Unlike traditional binary detection, $\textcolor{blue}{\texttt{SAGA}}$ identifies the specific generative model behind a video. It provides multi-granular attribution across five levels: authenticity, generation task (e.g., text-to-video (T2V) or image-to-video (I2V)), model version, development team, and the precise generator, offering far richer forensic insight than a real/fake label. Our novel video transformer architecture, which leverages features from a robust vision foundation model, effectively captures the spatio-temporal artifacts that distinguish generators. Critically, we introduce a data-efficient pretrain-and-attribute strategy that enables $\textcolor{blue}{\texttt{SAGA}}$ to achieve state-of-the-art attribution using only 0.5% of source-labeled data per class while matching fully supervised performance. Furthermore, we propose Temporal Attention Signatures ($\textcolor{blue}{\texttt{T-Sig}}$), a novel interpretability method that visualizes learned temporal differences, offering the first explanation of $\textit{why}$ different video generators are distinguishable. Extensive experiments on public datasets, including cross-domain scenarios, show that $\textcolor{blue}{\texttt{SAGA}}$ sets a new benchmark for synthetic video provenance and provides crucial, interpretable insights for forensic and regulatory applications.
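For concreteness, the sketch below illustrates one plausible way the five-level attribution described above could be realized: a temporal transformer over per-frame features from a frozen vision foundation model, with one classification head per attribution level. The module names, class counts, and feature dimensions are our own assumptions for illustration; this is not the authors' published implementation.

```python
# Minimal sketch of a SAGA-style multi-granular attribution model.
# All hyperparameters (feat_dim, head class counts, layer count) are
# hypothetical placeholders, not values from the paper.
import torch
import torch.nn as nn

class MultiGranularAttributor(nn.Module):
    """Temporal transformer over per-frame foundation-model features,
    with one linear classification head per attribution level."""

    def __init__(self, feat_dim=1024, num_tasks=2, num_versions=10,
                 num_teams=8, num_generators=20, num_layers=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=8, batch_first=True)
        self.temporal_encoder = nn.TransformerEncoder(layer, num_layers)
        self.heads = nn.ModuleDict({
            "authenticity": nn.Linear(feat_dim, 2),             # real vs. AI-generated
            "task":         nn.Linear(feat_dim, num_tasks),     # e.g., T2V vs. I2V
            "version":      nn.Linear(feat_dim, num_versions),  # model version
            "team":         nn.Linear(feat_dim, num_teams),     # development team
            "generator":    nn.Linear(feat_dim, num_generators) # precise generator
        })

    def forward(self, frame_feats):
        # frame_feats: (batch, num_frames, feat_dim), produced by running
        # a frozen vision foundation model on each sampled frame.
        h = self.temporal_encoder(frame_feats)
        pooled = h.mean(dim=1)  # temporal average pooling
        return {name: head(pooled) for name, head in self.heads.items()}

# Usage: a batch of 8 videos, 16 sampled frames each, 1024-d features.
logits = MultiGranularAttributor()(torch.randn(8, 16, 1024))
```

Under such a design, the temporal encoder is what would carry the generator-specific temporal artifacts that $\textcolor{blue}{\texttt{T-Sig}}$ visualizes, while the shared trunk lets a label-scarce pretrain-and-attribute regime reuse one representation across all five heads.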