GroundVTS: Visual Token Sampling in Multimodal Large Language Models for Video Temporal Grounding
Abstract
Video Temporal Grounding (VTG) is a critical task in video understanding and a key capability for extending Video Large Language Models (Vid-LLMs) to broader applications. However, existing Vid-LLMs rely on uniform frame sampling to extract video information, resulting in a sparse distribution of key frames and the loss of crucial temporal cues. To address this limitation, we propose Grounded Visual Token Sampling (GroundVTS), a Vid-LLM architecture that focuses on the most informative temporal segments. GroundVTS employs a fine-grained, query-guided mechanism to filter visual tokens before feeding them into the LLM, thereby preserving essential spatio-temporal information and maintaining temporal coherence. Furthermore, we introduce a progressive optimization strategy that enables the LLM to effectively adapt to the non-uniform distribution of visual features, enhancing its ability to model temporal dependencies and achieve precise temporal localization. We comprehensively evaluate our model on three standard VTG benchmarks, where GroundVTS outperforms state-of-the-art methods, achieving a +7.7\% mIoU improvement on moment retrieval and a +12.0\% mAP improvement on highlight detection. Code will be publicly available.