

Poster

Temporal Alignment-Free Video Matching for Few-shot Action Recognition

SuBeen Lee · WonJun Moon · Hyun Seok Seong · Jae-Pil Heo


Abstract:

Few-Shot Action Recognition (FSAR) aims to train a model with only a few labeled video instances. A key challenge in FSAR is handling divergent narrative trajectories for precise video matching. While frame- and tuple-level alignment approaches have shown promise, they rely heavily on pre-defined, length-dependent alignment units (e.g., frames or tuples), which limits flexibility for actions of varying lengths and speeds. In this work, we introduce a novel TEmporal Alignment-free Matching (TEAM) approach, which eliminates the need for temporal units in action representation and for brute-force alignment during matching. Specifically, TEAM represents each video with a fixed set of pattern tokens that capture globally discriminative clues within the video instance regardless of action length or speed, ensuring flexibility. Furthermore, TEAM is inherently efficient, using token-wise comparisons to measure similarity between videos, unlike existing methods that rely on pairwise comparisons for temporal alignment. Additionally, we propose an adaptation process that identifies and removes information shared across novel classes, establishing clear boundaries even between novel categories. Extensive experiments demonstrate the effectiveness of TEAM.
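The efficiency argument in the abstract can be illustrated with a minimal sketch. Assuming each video is summarized by K pattern tokens of dimension D (the shapes, function names, and cosine-similarity choice below are illustrative assumptions, not the paper's exact formulation), token-wise matching costs O(K) comparisons, whereas frame-level alignment baselines compare every frame of one video against every frame of the other, costing O(T_a * T_b):

```python
import numpy as np


def tokenwise_similarity(a, b):
    """Token-wise matching (TEAM-style, sketched): token i of video A is
    compared only with token i of video B -> O(K) comparisons.
    a, b: (K, D) arrays of K pattern tokens with D-dim features."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    # Mean cosine similarity over corresponding token pairs.
    return float(np.mean(np.sum(a * b, axis=1)))


def pairwise_alignment_similarity(fa, fb):
    """Frame-level brute-force matching for contrast: every frame of A is
    compared with every frame of B -> O(T_a * T_b) comparisons.
    fa: (T_a, D), fb: (T_b, D) frame features."""
    fa = fa / np.linalg.norm(fa, axis=1, keepdims=True)
    fb = fb / np.linalg.norm(fb, axis=1, keepdims=True)
    sim = fa @ fb.T  # (T_a, T_b) similarity matrix
    # Best-match aggregation, one simple alignment heuristic.
    return float(np.mean(np.max(sim, axis=1)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tokens = rng.standard_normal((8, 64))   # K=8 pattern tokens, D=64
    frames = rng.standard_normal((30, 64))  # T=30 frames
    print(tokenwise_similarity(tokens, tokens))
    print(pairwise_alignment_similarity(frames, frames))
```

Note that the token-wise comparison count is fixed by K and does not grow with video length, which is the source of the claimed efficiency over alignment-based matching.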
