VideoCutLER: Surprisingly Simple Unsupervised Video Instance Segmentation

XuDong Wang · Ishan Misra · Ziyun Zeng · Rohit Girdhar · Trevor Darrell

Arch 4A-E Poster #313
Fri 21 Jun 10:30 a.m. PDT — noon PDT

Abstract: Existing approaches to unsupervised video instance segmentation typically rely on motion estimates and experience difficulties tracking small or divergent motions. We present VideoCutLER, a simple method for unsupervised multi-instance video segmentation without using motion-based learning signals like optical flow or training on natural videos. Our key insight is that using high-quality pseudo masks and a simple video synthesis method for model training is surprisingly sufficient to enable the resulting video model to effectively segment and track multiple instances across video frames. We show the first competitive unsupervised learning results on the challenging YouTubeVIS-2019 benchmark, achieving 50.7\% AP$^{\text{video}}_{50}$, surpassing the previous state-of-the-art by a large margin. VideoCutLER can also serve as a strong pretrained model for supervised video instance segmentation tasks, exceeding DINO by 15.9\% on YouTubeVIS-2019 in terms of AP$^{\text{video}}$.
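The abstract credits the result to training on synthesized videos built from high-quality pseudo masks rather than on natural video. The paper's actual synthesis pipeline is not described here, but the core copy-paste idea can be sketched as follows: take an instance cut out by a pseudo mask, paste it onto a background image along a simple trajectory, and emit both the frames and the per-frame instance masks as free supervision. The function name, trajectory scheme, and all parameters below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def synthesize_video(instance, mask, background, num_frames=4, step=(5, 3)):
    """Hypothetical sketch: paste a pseudo-masked instance onto a background
    along a linear trajectory, returning synthetic frames and per-frame masks.

    instance:   HxWx3 image patch containing the object
    mask:       HxW binary pseudo-mask of the object
    background: HbxWbx3 background image (at least as large as the patch)
    """
    H, W = mask.shape
    Hb, Wb, _ = background.shape
    frames, masks = [], []
    for t in range(num_frames):
        # move the instance by a fixed offset per frame (assumed trajectory)
        y, x = t * step[0], t * step[1]
        frame = background.copy()
        frame_mask = np.zeros((Hb, Wb), dtype=np.uint8)
        # clip the paste region to the background bounds
        h, w = min(H, Hb - y), min(W, Wb - x)
        if h > 0 and w > 0:
            region = mask[:h, :w].astype(bool)
            frame[y:y + h, x:x + w][region] = instance[:h, :w][region]
            frame_mask[y:y + h, x:x + w] = mask[:h, :w]
        frames.append(frame)
        masks.append(frame_mask)
    return np.stack(frames), np.stack(masks)

# Toy usage: a white 16x16 square drifting across a black background.
bg = np.zeros((64, 64, 3), dtype=np.uint8)
obj = np.full((16, 16, 3), 255, dtype=np.uint8)
m = np.ones((16, 16), dtype=np.uint8)
frames, masks = synthesize_video(obj, m, bg)
```

Training a video instance segmentation model on such (frames, masks) pairs gives it multi-frame correspondence supervision without any optical flow, which is the learning signal the abstract says suffices.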
