

Poster

Extrapolating and Decoupling Image-to-Video Generation Models: Motion Modeling is Easier Than You Think

Jie Tian · Xiaoye Qu · Zhenyi Lu · Wei Wei · Sichen Liu · Yu Cheng


Abstract:

Image-to-Video (I2V) generation aims to synthesize a video clip from a given image and a condition (e.g., text). The key challenge of this task lies in simultaneously generating natural motion while preserving the original appearance of the image. However, current I2V diffusion models (I2V-DMs) often produce videos with a limited degree of motion or exhibit uncontrollable motion that conflicts with the textual condition. In this paper, we propose a novel Extrapolating and Decoupling framework to mitigate these issues. Specifically, our framework consists of three separate stages: (1) Starting from a base I2V-DM, we explicitly inject the textual condition into the temporal module using a lightweight, learnable adapter and fine-tune the integrated model to improve motion controllability. (2) We introduce a training-free extrapolation strategy that amplifies the dynamic range of the motion, effectively reversing the fine-tuning process to significantly enhance the motion degree. (3) With the models from the two preceding stages excelling in motion controllability and motion degree respectively, we decouple the parameters associated with each type of motion ability and inject them into the base I2V-DM. Since the I2V-DM handles different levels of motion controllability and dynamics at different denoising time steps, we adjust the motion-aware parameters accordingly over time. Extensive qualitative and quantitative experiments demonstrate the superiority of our framework over existing methods.
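
To make the training-free ingredients of the abstract concrete, the following is a minimal, hypothetical Python sketch of (a) extrapolating temporal-module weights against the fine-tuning direction and (b) injecting the decoupled "controllability" and "dynamics" parameters with a coefficient that varies over the denoising time step. The function names, the linear merge rule, and the schedule for the blending coefficient are illustrative assumptions, not the paper's published formulation.

# Hypothetical sketch of weight extrapolation and time-dependent injection
# of decoupled motion parameters into a base I2V diffusion model.
# alpha, the linear merge rule, and the lam(t) schedule are assumptions.
import torch

def extrapolate(base_sd, finetuned_sd, alpha=1.0):
    # Move the weights away from the fine-tuned point, along the negative
    # fine-tuning direction (one reading of "reversing the fine-tuning
    # process"); alpha controls how far to extrapolate.
    return {k: base_sd[k] - alpha * (finetuned_sd[k] - base_sd[k])
            for k in base_sd}

def time_dependent_merge(base_sd, controllable_sd, dynamic_sd, t, T):
    # Blend the decoupled parameter sets according to the denoising step t
    # in [0, T); the linear schedule below is an illustrative assumption.
    lam = t / max(T - 1, 1)
    return {k: base_sd[k]
               + lam * (dynamic_sd[k] - base_sd[k])
               + (1.0 - lam) * (controllable_sd[k] - base_sd[k])
            for k in base_sd}

if __name__ == "__main__":
    # Toy state dicts standing in for a temporal module's parameters.
    base = {"w": torch.zeros(4)}
    ctrl = {"w": torch.full((4,), 1.0)}   # tuned for motion controllability
    dyn  = {"w": torch.full((4,), 2.0)}   # tuned for motion degree
    print(extrapolate(base, ctrl, alpha=0.5))          # {'w': tensor of -0.5}
    print(time_dependent_merge(base, ctrl, dyn, t=10, T=50))

In an actual I2V-DM the state dicts would cover only the motion-aware (temporal) parameters, and the merged weights would be loaded into the model before each denoising step; this sketch only illustrates the arithmetic.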
