

Towards Surveillance Video-and-Language Understanding: New Dataset, Baselines, and Challenges

Tongtong Yuan · Xuange Zhang · Kun Liu · Bo Liu · Chen Chen · Jian Jin · Zhenzhen Jiao

Arch 4A-E Poster #241
Fri 21 Jun 10:30 a.m. PDT — noon PDT


Surveillance videos are important for public security, yet current surveillance video tasks focus mainly on classifying and localizing anomalous events. Although existing methods achieve considerable performance, they are limited to detecting and classifying predefined events and offer unsatisfactory semantic understanding. To address this issue, we propose a new research direction, surveillance video-and-language understanding (VALU), and construct the first multimodal surveillance video dataset. We manually annotate the real-world surveillance dataset UCF-Crime with fine-grained event content and timing. Our newly annotated dataset, UCA (UCF-Crime Annotation), contains 23,542 sentences with an average length of 20 words, covering 110.7 hours of annotated video. Furthermore, we benchmark SOTA models for four multimodal tasks on this newly created dataset, establishing new baselines for surveillance VALU. Our experiments show that mainstream models, which perform well on previously public datasets, perform poorly on surveillance video, revealing new challenges in surveillance VALU. We also conduct experiments on multimodal anomaly detection; the results demonstrate that our multimodal surveillance learning can improve anomaly detection performance. All the experiments highlight the necessity of this dataset for advancing surveillance AI.
