

Poster

Segment Any Motion in Videos

Nan Huang · Wenzhao Zheng · Chenfeng Xu · Kurt Keutzer · Shanghang Zhang · Angjoo Kanazawa · Qianqian Wang

ExHall D Poster #309
Fri 13 Jun 8:30 a.m. PDT — 10:30 a.m. PDT

Abstract:

Moving object segmentation is a crucial task for achieving a high-level understanding of visual scenes and has numerous downstream applications. Humans can effortlessly segment moving objects in videos. Previous work has largely relied on optical flow to provide motion cues; however, this approach often results in imperfect predictions due to challenges such as partial motion, complex deformations, motion blur, and background distractions. We propose a novel approach for moving object segmentation that combines long-range trajectory motion cues with DINO-based semantic features and leverages SAM2 for pixel-level mask densification through an iterative prompting strategy. Our model employs Spatio-Temporal Trajectory Attention and Motion-Semantic Decoupled Embedding to prioritize motion while integrating semantic support. Extensive testing on diverse datasets demonstrates state-of-the-art performance, excelling in challenging scenarios and in fine-grained segmentation of multiple objects.
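The abstract outlines a pipeline in which per-trajectory motion cues and DINO semantic features are embedded separately, fused with spatio-temporal attention, and classified into moving vs. static trajectories before SAM2 densifies the result into pixel masks. The sketch below is a minimal illustration of that idea, not the authors' implementation: all module names, feature dimensions, the fusion strategy, and the classification head are assumptions made for the example.

```python
# Minimal sketch (not the authors' code): fuses long-range point-trajectory
# motion cues with per-trajectory DINO semantic features using attention over
# time and over trajectories, then predicts a per-trajectory motion label.
# Dimensions, layer counts, and the fusion scheme are assumptions.
import torch
import torch.nn as nn


class TrajectoryMotionSegmenter(nn.Module):
    def __init__(self, motion_dim=4, semantic_dim=384, d_model=256, n_heads=8):
        super().__init__()
        # Decoupled embeddings (assumed form): motion and semantics are
        # projected separately so motion cues can dominate the representation.
        self.motion_embed = nn.Linear(motion_dim, d_model)
        self.semantic_embed = nn.Linear(semantic_dim, d_model)
        # Spatio-temporal attention (assumed form): attend across time within
        # each trajectory, then across trajectories.
        temporal_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.temporal_attn = nn.TransformerEncoder(temporal_layer, num_layers=2)
        spatial_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.spatial_attn = nn.TransformerEncoder(spatial_layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)  # per-trajectory moving/static logit

    def forward(self, traj_motion, traj_semantic):
        # traj_motion:   (N, T, motion_dim)  e.g. per-frame displacements of N tracked points
        # traj_semantic: (N, semantic_dim)   e.g. pooled DINO features per trajectory
        x = self.motion_embed(traj_motion)                       # (N, T, D)
        x = x + self.semantic_embed(traj_semantic)[:, None, :]   # inject semantic support
        x = self.temporal_attn(x)              # attention across time per trajectory
        x = x.mean(dim=1, keepdim=True)        # (N, 1, D) one token per trajectory
        x = self.spatial_attn(x.transpose(0, 1)).transpose(0, 1)  # attention across trajectories
        return self.head(x.squeeze(1)).squeeze(-1)  # (N,) motion logits


if __name__ == "__main__":
    model = TrajectoryMotionSegmenter()
    logits = model(torch.randn(128, 24, 4), torch.randn(128, 384))
    print(logits.shape)  # torch.Size([128])
```

In the paper's pipeline, trajectories classified as moving would then serve as point prompts to SAM2, which is iteratively re-prompted to densify the sparse trajectory labels into full pixel-level masks; the sketch above stops at the per-trajectory prediction step.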
