

Poster

BIMBA: Selective-Scan Compression for Long-Range Video Question Answering

Md Mohaiminul Islam · Tushar Nagarajan · Huiyu Wang · Gedas Bertasius · Lorenzo Torresani


Abstract:

Video Question Answering (VQA) in long videos poses the key challenge of extracting relevant information and modeling long-range dependencies from many redundant frames. The self-attention mechanism provides a general solution for sequence modeling, but it has a prohibitive cost when applied to a massive number of spatiotemporal tokens in long videos. To lower the computational cost, most prior methods rely on compression strategies, such as reducing the input length via sparse frame sampling or compressing the output sequence passed to the large language model (LLM) via space-time pooling. However, these naive approaches over-represent redundant information and often miss salient events or fast-occurring space-time patterns. In this work, we introduce BIMBA, an efficient state-space model to handle long-form videos. Our model leverages the selective scan algorithm to learn to effectively select critical information from high-dimensional video and transform it into a token sequence that is orders of magnitude smaller for efficient LLM processing. Extensive experiments demonstrate that BIMBA achieves state-of-the-art accuracy on multiple long-form VQA benchmarks, including EgoSchema, NextQA, TempCompass, and MVBench.
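To make the core idea concrete, below is a minimal PyTorch sketch of how a selective-scan (Mamba/S6-style) state-space recurrence can compress a long sequence of video tokens into a much shorter sequence for the LLM. This is an illustrative sketch under stated assumptions, not the paper's actual implementation: the class name, the stride-based readout, and all hyperparameters are invented here, and BIMBA's real compression module may differ in important details.

```python
# Minimal sketch (assumptions: Mamba/S6-style recurrence, stride-based readout).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelectiveScanCompressor(nn.Module):
    """Hypothetical module: compress (B, L, D) video tokens to (B, L // stride, D).

    The input-dependent delta/B/C projections follow the selective-scan (S6)
    formulation; emitting only every `stride`-th readout is one simple way to
    realize "orders of magnitude" fewer tokens, assumed here for illustration.
    """
    def __init__(self, dim: int, state_dim: int = 16, stride: int = 64):
        super().__init__()
        self.stride = stride
        # Input-dependent (selective) parameters, one set per token.
        self.to_delta = nn.Linear(dim, dim)     # per-channel step size
        self.to_B = nn.Linear(dim, state_dim)   # input (write) gate
        self.to_C = nn.Linear(dim, state_dim)   # output (read) gate
        # Fixed negative-real state decay, one value per state channel.
        self.log_A = nn.Parameter(torch.log(torch.linspace(1.0, state_dim, state_dim)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, length, dim) flattened spatiotemporal video tokens.
        b, l, d = x.shape
        delta = F.softplus(self.to_delta(x))    # (b, l, d), positive step sizes
        Bmat = self.to_B(x)                     # (b, l, n)
        Cmat = self.to_C(x)                     # (b, l, n)
        A = -torch.exp(self.log_A)              # (n,), stable (negative) decay
        h = x.new_zeros(b, d, A.shape[0])       # recurrent hidden state
        outputs = []
        for t in range(l):
            # Discretize: decay the state, then write the gated input token.
            dA = torch.exp(delta[:, t].unsqueeze(-1) * A)             # (b, d, n)
            dB = delta[:, t].unsqueeze(-1) * Bmat[:, t].unsqueeze(1)  # (b, d, n)
            h = dA * h + dB * x[:, t].unsqueeze(-1)
            # Keep only every `stride`-th readout as a compressed token.
            if (t + 1) % self.stride == 0:
                outputs.append(torch.einsum("bdn,bn->bd", h, Cmat[:, t]))
        return torch.stack(outputs, dim=1)      # (b, l // stride, dim)

# Usage sketch: 4096 spatiotemporal tokens (e.g. 16 frames x 256 patches)
# become 64 tokens, a 64x reduction before LLM processing.
compressor = SelectiveScanCompressor(dim=768, stride=64)
video_tokens = torch.randn(1, 4096, 768)
llm_tokens = compressor(video_tokens)  # shape: (1, 64, 768)
```

Because each token's delta, B, and C are functions of that token, the scan can learn to write salient content strongly into the state and let redundant frames decay, which is what distinguishes selective-scan compression from uniform pooling or sparse sampling.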
