VIRST: Video-Instructed Reasoning Assistant for Spatio-Temporal Segmentation
Abstract
Referring Video Object Segmentation (RVOS) aims to segment target objects in videos based on natural language descriptions. However, CLIP-based and keyframe-based approaches that couple a vision-language model with a separate propagation module often fail to capture rapidly changing spatio-temporal dynamics or to handle queries that require multi-step reasoning, leading to sharp performance drops on motion-intensive and reasoning-oriented videos beyond static RVOS benchmarks. To address these limitations, we propose VIRST (Video-Instructed Reasoning Assistant for Spatio-Temporal Segmentation), an end-to-end framework that unifies global video reasoning and pixel-level mask prediction within a single model. VIRST bridges semantic and segmentation representations through a Spatio-Temporal Fusion (STF) module, which fuses segmentation-aware video features into the vision-language backbone, and employs a Temporal Dynamic Anchor Updater (TDAU) to maintain dynamically updated anchor frames that provide stable temporal cues under large motion, occlusion, and reappearance. This unified design achieves state-of-the-art results across diverse RVOS benchmarks under realistic and challenging conditions, demonstrating strong generalization to both referring and reasoning-oriented settings.