

Poster

Omnimatte3D: Associating Objects and Their Effects in Unconstrained Monocular Video

Mohammed Suhail · Erika Lu · Zhengqi Li · Noah Snavely · Leonid Sigal · Forrester Cole

West Building Exhibit Halls ABC 060

Abstract:

We propose a method to decompose a video into a background and a set of foreground layers, where the background captures stationary elements while the foreground layers capture moving objects along with their associated effects (e.g., shadows and reflections). Our approach is designed for unconstrained monocular videos with arbitrary camera and object motion. Prior work that tackles this problem assumes that the video can be mapped onto a fixed 2D canvas, severely limiting the possible space of camera motion. Instead, our method applies recent progress in monocular camera pose and depth estimation to create a full RGBD video layer for the background, along with a video layer for each foreground object. To solve the underconstrained decomposition problem, we propose a new loss formulation based on multi-view consistency. We test our method on challenging videos with complex camera motion and show significant qualitative improvement over current approaches.
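To make the multi-view consistency idea concrete, below is a minimal sketch (not the authors' implementation) of one common way such a term can be formed: the background predicted for one frame is warped into another frame's viewpoint using estimated depth, camera intrinsics, and relative pose, and the two predictions are compared photometrically. All tensor names, shapes, and the helpers shown are illustrative assumptions; the paper's actual loss may differ.

```python
import torch
import torch.nn.functional as F


def warp_background_to_frame_i(rgb_j, depth_i, K, T_j_from_i):
    """Synthesize frame i's background by inverse-warping frame j's background.

    rgb_j:      (1, 3, H, W) background color predicted for frame j
    depth_i:    (1, 1, H, W) background depth predicted for frame i
    K:          (3, 3) camera intrinsics
    T_j_from_i: (4, 4) relative pose mapping frame-i camera coords to frame j
    """
    _, _, H, W = rgb_j.shape
    device = rgb_j.device

    # Pixel grid of frame i in homogeneous coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(3, -1)

    # Back-project each frame-i pixel with its depth, then move to frame j.
    cam_i = torch.linalg.inv(K) @ pix * depth_i.reshape(1, -1)        # (3, H*W)
    cam_i_h = torch.cat([cam_i, torch.ones(1, H * W, device=device)], dim=0)
    cam_j = (T_j_from_i @ cam_i_h)[:3]                                # (3, H*W)

    # Project into frame j's image plane and normalize to [-1, 1] for sampling.
    proj = K @ cam_j
    uv = proj[:2] / proj[2:].clamp(min=1e-6)
    grid = torch.stack(
        [uv[0] / (W - 1) * 2 - 1, uv[1] / (H - 1) * 2 - 1], dim=-1
    ).reshape(1, H, W, 2)

    # Sample frame j's background colors at the projected locations.
    # (A real system would also mask occluded and out-of-view pixels.)
    return F.grid_sample(rgb_j, grid, align_corners=True)


def multiview_consistency_loss(rgb_i, depth_i, rgb_j, K, T_j_from_i):
    """L1 penalty between frame i's background and frame j's warped background."""
    warped = warp_background_to_frame_i(rgb_j, depth_i, K, T_j_from_i)
    return (warped - rgb_i).abs().mean()
```

Summed over sampled frame pairs, a term of this form encourages the background layer to behave like a single consistent 3D scene rather than a per-frame image, which is what makes the decomposition better constrained under large camera motion.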
