

Oral

Stereo4D: Learning How Things Move in 3D from Internet Stereo Videos

Linyi Jin · Richard Tucker · Zhengqi Li · David Fouhey · Noah Snavely · Aleksander Holynski

Karl Dean Grand Ballroom
Oral Session 3A: 3D Computer Vision
Sat 14 Jun 7:15 a.m. — 7:30 a.m. PDT

Abstract:

Learning to understand dynamic 3D scenes from imagery is crucial for applications ranging from robotics to scene reconstruction. Yet, unlike other problems where large-scale supervised training has enabled rapid progress, directly supervising methods for recovering 3D motion remains challenging due to the fundamental difficulty of obtaining ground-truth annotations. We present a system for mining high-quality 4D reconstructions from internet stereoscopic, wide-angle videos. Our system fuses and filters the outputs of camera pose estimation, stereo depth estimation, and temporal tracking methods into high-quality dynamic 3D reconstructions. We use this method to generate large-scale data in the form of world-consistent, pseudo-metric 3D point clouds with long-term motion trajectories. We demonstrate the utility of this data by training a variant of DUSt3R to predict structure and 3D motion from real-world image pairs, showing that training on our reconstructed data enables generalization to diverse real-world scenes.
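To make the fusion step concrete, below is a minimal sketch (not the authors' code) of how per-frame stereo depth, camera poses, and 2D point tracks could be combined into world-consistent 3D motion trajectories of the kind the abstract describes. All function names, array shapes, and conventions here are illustrative assumptions, and the sketch omits the filtering stages the paper relies on for quality.

```python
import numpy as np

def unproject(depth, K):
    """Back-project a depth map (H, W) into camera-space points (H, W, 3).

    Assumes a simple pinhole model with intrinsics K (3, 3); the paper's
    actual camera model may differ.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    rays = pix @ np.linalg.inv(K).T          # pixel -> normalized camera ray
    return rays * depth[..., None]           # scale each ray by its depth

def lift_tracks(tracks, depths, poses, K):
    """Lift 2D tracks (T, N, 2) into world-space 3D trajectories (T, N, 3).

    tracks: pixel coordinates per frame; depths: (T, H, W) stereo depth maps;
    poses: (T, 4, 4) camera-to-world matrices from pose estimation;
    K: shared (3, 3) intrinsics. All inputs are hypothetical placeholders
    standing in for the outputs of the off-the-shelf methods the paper fuses.
    """
    T, N, _ = tracks.shape
    traj = np.zeros((T, N, 3))
    for t in range(T):
        pts_cam = unproject(depths[t], K)                      # (H, W, 3)
        u = np.clip(tracks[t, :, 0].round().astype(int), 0, depths.shape[2] - 1)
        v = np.clip(tracks[t, :, 1].round().astype(int), 0, depths.shape[1] - 1)
        p = pts_cam[v, u]                                      # (N, 3) camera space
        p_h = np.concatenate([p, np.ones((N, 1))], axis=-1)    # homogeneous coords
        traj[t] = (poses[t] @ p_h.T).T[:, :3]                  # camera -> world
    return traj
```

Because depth and tracks come from independent estimators, a real pipeline would additionally reject trajectories where the lifted points are inconsistent across time (e.g., depth discontinuities or tracking drift); the sketch above shows only the geometric lifting.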
