

Poster

DeCafNet: Delegate and Conquer for Efficient Temporal Grounding in Long Videos

Zijia Lu · ASM Iftekhar · Gaurav Mittal · Tianjian Meng · Xiawei Wang · Cheng Zhao · Rohith Kukkala · Ehsan Elhamifar · Mei Chen


Abstract:

Long Video Temporal Grounding (LVTG) aims to identify specific moments within lengthy videos based on user-provided text queries for effective content retrieval. Existing methods divide a video into clips and process each clip with a full-scale expert encoder, an approach that is difficult to scale due to the prohibitive computational cost of processing the large number of clips in long videos. To address this issue, we introduce DeCafNet, an approach employing a "delegate-and-conquer" strategy to achieve computational efficiency without sacrificing grounding performance. DeCafNet introduces a sidekick encoder that performs dense feature extraction over all video clips in a resource-efficient manner while generating a saliency map to identify the most relevant clips for full processing by the expert encoder. To effectively leverage features from the sidekick and expert encoders, which exist at different temporal resolutions, we introduce DeCaf-Grounder, which unifies and refines them via query-aware temporal aggregation and multi-scale temporal refinement for accurate grounding. Experiments on two LVTG benchmark datasets demonstrate that DeCafNet reduces computation by up to 47% while still outperforming existing methods, establishing a new state of the art for LVTG in both efficiency and performance. Code and models will be released upon acceptance.
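The delegate step described above (the sidekick encoder's saliency map selecting which clips the expert encoder should fully process) can be sketched roughly as follows. This is a minimal illustration of the idea only, not the paper's implementation: the function name, feature shapes, and the use of cosine similarity as the saliency score are all assumptions for the sake of the example.

```python
import numpy as np

def select_salient_clips(clip_feats, query_feat, top_k):
    """Illustrative stand-in for the delegate step: score each clip's
    cheap (sidekick) feature against the text-query feature and return
    the indices of the top_k clips that would be delegated to the
    full-scale expert encoder. Cosine similarity is an assumed proxy
    for the saliency map in the paper."""
    clips = clip_feats / np.linalg.norm(clip_feats, axis=1, keepdims=True)
    query = query_feat / np.linalg.norm(query_feat)
    saliency = clips @ query                   # one relevance score per clip
    return np.argsort(saliency)[::-1][:top_k]  # most relevant clips first

# Toy example: 6 clips with 4-d sidekick features; the query is made to
# resemble clip 2, so clip 2 should rank first.
rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 4))
query = feats[2] + 0.01 * rng.normal(size=4)
idx = select_salient_clips(feats, query, top_k=2)
```

Only the clips in `idx` would then be re-encoded at full resolution, which is where the computational savings over encoding every clip with the expert model would come from.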
