

Poster

VideoHandles: Editing 3D Object Compositions in Videos Using Video Generative Priors

Juil Koo · Paul Guerrero · Chun-Hao P. Huang · Duygu Ceylan · Minhyuk Sung


Abstract:

Generative methods for image and video editing use generative models as priors to perform edits despite incomplete information, such as changing the composition of 3D objects shown in a single image. Recent methods have shown promising composition-editing results in the image setting. In the video setting, however, editing methods have focused on object appearance, object motion, or camera motion, and methods to edit object composition in videos are still missing. We propose VideoHandles as a method for editing 3D object compositions in videos of static scenes with camera motion. Our approach allows editing the 3D position of an object across all frames of a video in a temporally consistent manner. This is achieved by lifting intermediate features of a generative model to a 3D reconstruction that is shared between all frames, editing the reconstruction, and projecting the features on the edited reconstruction back to each frame. To the best of our knowledge, this is the first generative approach to edit object compositions in videos. Our approach is simple and training-free, while outperforming state-of-the-art image editing baselines.
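The abstract's lift-edit-project loop can be illustrated with a minimal geometric sketch. The code below is not the authors' implementation: all function names, the pinhole-camera helpers, and the per-frame depth/pose inputs are assumptions for illustration, and the diffusion-model feature extraction and guided re-sampling that the actual method would rely on are omitted.

```python
# Hypothetical sketch: lift per-frame features to a shared 3D reconstruction,
# translate the edited object's points, and reproject features to every frame.
import numpy as np


def backproject(depth, features, K, cam_to_world):
    """Lift per-pixel features of one frame into world-space 3D points.
    Assumes depth and features share the same spatial resolution."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T                      # camera-space ray directions
    pts_cam = rays * depth.reshape(-1, 1)                # scale rays by depth
    pts_hom = np.concatenate([pts_cam, np.ones((pts_cam.shape[0], 1))], axis=1)
    pts_world = (pts_hom @ cam_to_world.T)[:, :3]
    return pts_world, features.reshape(-1, features.shape[-1])


def project(points, feats, K, world_to_cam, shape):
    """Splat 3D point features back onto one frame (nearest pixel, no z-buffering)."""
    h, w = shape
    pts_hom = np.concatenate([points, np.ones((points.shape[0], 1))], axis=1)
    pts_cam = (pts_hom @ world_to_cam.T)[:, :3]
    uv = pts_cam @ K.T
    uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-6, None)
    out = np.zeros((h, w, feats.shape[-1]))
    u, v = np.round(uv[:, 0]).astype(int), np.round(uv[:, 1]).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (pts_cam[:, 2] > 0)
    out[v[ok], u[ok]] = feats[ok]
    return out


def edit_composition(depths, feats, poses, K, object_masks, translation):
    """Translate the masked object in the shared 3D space and reproject to all frames.
    depths/feats/poses/object_masks are per-frame lists; translation is a (3,) offset."""
    points, point_feats = [], []
    for d, f, c2w in zip(depths, feats, poses):          # 1) lift all frames to 3D
        p, pf = backproject(d, f, K, c2w)
        points.append(p)
        point_feats.append(pf)
    points = np.concatenate(points)
    point_feats = np.concatenate(point_feats)
    mask = np.concatenate([m.ravel() for m in object_masks])
    points[mask] += translation                          # 2) edit the reconstruction
    return [project(points, point_feats, K, np.linalg.inv(c2w), depths[0].shape)
            for c2w in poses]                            # 3) project back per frame
```

In the method described above, the reprojected features would presumably guide the video generative prior's sampling rather than be composited into the output directly; the sketch only captures the geometric consistency argument, i.e. that a single edited reconstruction yields temporally consistent feature maps across all frames.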
