

Poster

VideoGEM: Training-free Action Grounding in Videos

Felix Vogel · Walid Bousselham · Anna Kukleva · Nina Shvetsova · Hilde Kuehne


Abstract:

Vision-language foundation models have shown impressive capabilities across various zero-shot tasks, including training-free localization and grounding, primarily focused on localizing objects in images. However, leveraging those capabilities to localize actions and events in videos is challenging, as actions have a less defined physical outline and are usually described by higher-level concepts. In this work, we propose VideoGEM, the first training-free action grounding method based on pretrained image- and video-language backbones. Namely, we adapt the self-self attention formulation of GEM to activity grounding. In doing so, we observe that high-level semantic concepts, such as actions, usually emerge in the higher layers of image- and video-language models. We therefore propose a layer weighting in the self-attention path that prioritizes higher layers. Additionally, we introduce a dynamic weighting method that automatically tunes the layer weights to capture each layer's relevance to a specific prompt. Finally, we introduce a prompt decomposition, processing action, verb, and object prompts separately, resulting in better localization of actions. We evaluate the proposed approach on three image- and video-language backbones, CLIP, OpenCLIP, and ViCLIP, and on four video grounding datasets, V-HICO, DALY, YouCook-Interactions, and GroundingYouTube, showing that the proposed training-free approach outperforms current trained state-of-the-art approaches for video grounding.
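The layer-weighting idea from the abstract can be illustrated with a minimal sketch. This is not the authors' code: the function names, the power-law static weighting, and the max-activation relevance proxy in the dynamic variant are all illustrative assumptions; the actual method operates on self-self attention outputs of CLIP-like backbones and tunes weights per prompt.

```python
import numpy as np

def weighted_layer_heatmap(layer_maps, gamma=2.0):
    """Aggregate per-layer localization maps, up-weighting later layers.

    layer_maps: array of shape (L, H, W), one heatmap per layer
                (hypothetical input standing in for per-layer
                self-self attention maps of a vision-language backbone).
    gamma: assumed knob controlling how strongly higher layers dominate.
    """
    num_layers = layer_maps.shape[0]
    # Monotonically increasing weights, normalized to sum to 1,
    # so higher layers (where action semantics emerge) count more.
    w = np.arange(1, num_layers + 1, dtype=float) ** gamma
    w /= w.sum()
    return np.tensordot(w, layer_maps, axes=1)  # shape (H, W)

def dynamic_layer_heatmap(layer_maps, temperature=0.1):
    """Dynamic variant: weight each layer by a per-prompt relevance score.

    Here the max activation of each layer's map is used as a crude
    relevance proxy (an assumption for illustration); a softmax over
    these scores yields the layer weights.
    """
    scores = layer_maps.reshape(layer_maps.shape[0], -1).max(axis=1)
    w = np.exp(scores / temperature)
    w /= w.sum()
    return np.tensordot(w, layer_maps, axes=1)
```

Both variants reduce a stack of per-layer maps to a single localization heatmap; the static version fixes the preference for higher layers in advance, while the dynamic version adapts the weighting to whichever layers respond most strongly to the given prompt.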
