Poster
ReSpec: Relevance and Specificity Grounded Online Filtering for Learning on Video-Text Data Streams
Chris Dongjoo Kim · Jihwan Moon · Sangwoo Moon · Heeseung Yun · Sihaeng Lee · Aniruddha Kembhavi · Soonyoung Lee · Gunhee Kim · Sangho Lee · Christopher Clark
Abstract:
The rapid growth of video-text data presents challenges in storage and computation during training. Online learning, which processes streaming data in real-time, offers a promising solution to these issues while also allowing swift adaptations in scenarios demanding real-time responsiveness. One strategy to enhance the efficiency and effectiveness of learning involves identifying and prioritizing data that enhances performance on target downstream tasks. We propose the Relevance and Specificity-based online filtering framework (ReSpec) that selects data based on four criteria: (i) modality alignment for clean data, (ii) task relevance for target-focused data, (iii) specificity for informative and detailed data, and (iv) efficiency for low-latency processing. Relevance is determined by the probabilistic alignment of incoming data with downstream tasks, while specificity employs the distance to a root embedding representing the least specific data as an efficient proxy for informativeness. By establishing reference points from target task data, ReSpec filters incoming data in real-time, eliminating the need for extensive storage and compute. Evaluating on the large-scale datasets WebVid2M and VideoCC3M, ReSpec attains state-of-the-art performance on five zero-shot video retrieval tasks, using as little as 5% of the data while incurring minimal compute.
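To make the filtering criteria concrete, the sketch below illustrates one way the three content-based checks (modality alignment, task relevance, specificity) could be combined into a streaming filter, with efficiency coming from using only cheap embedding comparisons against precomputed reference points. All names, thresholds, and the Gaussian relevance model are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of ReSpec-style online filtering (assumptions only).
import numpy as np


def cosine(a, b):
    """Cosine similarity between two embeddings."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return (a * b).sum(-1)


class OnlineFilter:
    def __init__(self, task_text_emb, root_emb,
                 align_thr=0.3, rel_thr=0.05, spec_thr=0.2):
        # Reference points computed once from target-task data.
        self.task_mean = task_text_emb.mean(0)
        self.task_cov = (np.cov(task_text_emb.T)
                         + 1e-6 * np.eye(task_text_emb.shape[1]))
        self.root = root_emb          # embedding of the least specific data
        self.align_thr = align_thr    # (i) modality alignment threshold
        self.rel_thr = rel_thr        # (ii) task-relevance threshold
        self.spec_thr = spec_thr      # (iii) specificity threshold

    def relevance(self, text_emb):
        # Probabilistic alignment with the target task, modeled here as a
        # Gaussian likelihood over the task embeddings (an assumption).
        diff = text_emb - self.task_mean
        m = diff @ np.linalg.solve(self.task_cov, diff)
        return np.exp(-0.5 * m)

    def keep(self, video_emb, text_emb):
        aligned = cosine(video_emb, text_emb) > self.align_thr
        relevant = self.relevance(text_emb) > self.rel_thr
        # Distance to the root embedding as a cheap proxy for informativeness.
        specific = (1.0 - cosine(text_emb, self.root)) > self.spec_thr
        return bool(aligned and relevant and specific)
```

In this sketch, an incoming video-text pair is kept only if all three checks pass; each decision requires a handful of vector operations, so the stream can be filtered in real time without storing the rejected data.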