

Poster

Visual Prompting for One-shot Controllable Video Editing without Inversion

Zhengbo Zhang · Yuxi Zhou · DUO PENG · Joo Lim · Zhigang Tu · De Soh Soh · Lin Geng Foo


Abstract:

One-shot controllable video editing (OCVE) is an important yet challenging task: it aims to propagate user edits made on the first frame of a video (using any image editing tool) to all subsequent frames, while ensuring content consistency between the edited frames and the source frames. To achieve this, prior methods employ DDIM inversion to transform the source frames into latent noise, which is then fed into a pre-trained diffusion model, conditioned on the user-edited first frame, to generate the edited video. However, the DDIM inversion process accumulates errors, which prevent the latent noise from accurately reconstructing the source frames and ultimately compromise content consistency in the generated edited frames. To overcome this, our method eliminates the need for DDIM inversion by performing OCVE from a novel perspective based on visual prompting. Furthermore, inspired by consistency models, which can perform multi-step consistency sampling to generate a sequence of content-consistent images, we propose content consistency sampling (CCS) to ensure content consistency between the generated edited frames and the source frames. Moreover, we introduce temporal-content consistency sampling (TCS), based on Stein Variational Gradient Descent, to ensure temporal consistency across the edited frames. Extensive experiments validate the effectiveness of our approach.
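The abstract does not spell out how TCS adapts Stein Variational Gradient Descent, so the following is only a minimal sketch of the standard SVGD update (Liu & Wang, 2016) that TCS builds on. The RBF kernel with a median-heuristic bandwidth and the toy Gaussian target are illustrative assumptions, not the paper's actual kernel or score function.

```python
import numpy as np

def svgd_step(X, grad_logp, step=1e-1):
    """One generic SVGD update on particles X of shape (n, d).

    grad_logp: callable returning the score grad_x log p(x) per particle.
    This is the standard Liu & Wang (2016) update, not the paper's TCS,
    which adapts SVGD to enforce temporal consistency across edited frames.
    """
    n = X.shape[0]
    # Pairwise squared distances and RBF kernel with median-heuristic bandwidth.
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    h = np.sqrt(0.5 * np.median(sq) / np.log(n + 1) + 1e-8)
    K = np.exp(-sq / (2 * h ** 2))
    # Attractive term: sum_j k(x_j, x_i) * grad log p(x_j).
    attract = K.T @ grad_logp(X)
    # Repulsive term: sum_j grad_{x_j} k(x_j, x_i) = sum_j K[j, i] * (x_i - x_j) / h^2.
    repulse = (K[:, :, None] * (X[None, :, :] - X[:, None, :])).sum(axis=0) / h ** 2
    return X + step * (attract + repulse) / n

# Toy usage: particles drift toward a standard Gaussian target (illustrative only).
rng = np.random.default_rng(0)
particles = rng.normal(loc=3.0, size=(64, 2))  # start far from the target
score = lambda x: -x                           # grad log N(0, I)
for _ in range(200):
    particles = svgd_step(particles, score)
print(particles.mean(axis=0))  # should approach [0, 0]
```

The attractive term pulls particles toward high-density regions of the target, while the kernel-gradient term repels them from one another; in a temporal-consistency setting, a similar repulsion/attraction balance can be used to keep per-frame samples coherent without collapsing them onto a single frame.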
