ActivityForensics: A Comprehensive Benchmark for Localizing Manipulated Activity in Videos
Abstract
Temporal forgery localization aims to identify the temporal extent of manipulated segments in untrimmed videos. Most existing benchmarks focus on appearance-level forgeries such as face swapping and object removal. However, recent advances in video generation have enabled activity-level forgeries that alter human actions to distort event semantics, yielding highly deceptive manipulations that critically undermine media authenticity and public trust. To address this gap, we introduce ActivityForensics, the first large-scale benchmark for localizing manipulated activity in untrimmed videos. It contains over 6K forged video segments seamlessly blended into their surrounding context, achieving a visual consistency that renders them nearly indistinguishable from authentic content to the human eye. We further propose the Temporal Artifact Diffuser (TADiff), a simple yet effective baseline that enhances artifact cues through a diffusion-based feature regularizer. On ActivityForensics, we establish comprehensive evaluation protocols covering intra-domain, cross-domain, and open-world settings, and benchmark a wide range of state-of-the-art forgery localizers to facilitate future research. The dataset, code, and pretrained models will be made publicly available.