

Poster

Encapsulated Composition of Text-to-Image and Text-to-Video Models for High-Quality Video Synthesis

Tongtong Su · Chengyu Wang · Bingyan Liu · Jun Huang · Dongming Lu


Abstract:

In recent years, large text-to-video (T2V) synthesis models have garnered considerable attention for their ability to generate videos from textual descriptions. However, achieving both high imaging quality and effective motion representation remains a significant challenge for these T2V models. Existing approaches often adapt pre-trained text-to-image (T2I) models to refine video frames, leading to issues such as flickering and artifacts caused by inconsistencies across frames. In this paper, we introduce EVS, a training-free Encapsulated Video Synthesizer that composes T2I and T2V models to enhance both the visual fidelity and the motion smoothness of generated videos. Our approach uses a well-trained diffusion-based T2I model to refine low-quality video frames by treating them as out-of-distribution samples, optimizing them through noising and denoising steps, while T2V backbones ensure consistent motion dynamics. By encapsulating the T2V temporal-only prior into the T2I generation process, EVS leverages the strengths of both types of models, producing videos with improved imaging and motion quality. Experimental results validate the effectiveness of our approach over previous methods, and the composition process also yields a 1.6x-4.5x speedup in inference time. Source code will be released upon paper acceptance.
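As a rough illustration of the refinement loop the abstract describes, the sketch below treats T2V-generated frames as out-of-distribution samples, perturbs them part-way along a noising schedule, and then alternates per-frame denoising from a T2I prior with a temporal-only correction from a T2V prior. The callables t2i_denoise and t2v_temporal_denoise are hypothetical placeholders for pretrained model calls, and the Gaussian perturbation is a simplified stand-in for the true diffusion forward process; this is a minimal sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

def refine_frames(frames, t2i_denoise, t2v_temporal_denoise,
                  num_steps=25, noise_strength=0.5, rng=None):
    """Refine low-quality video frames by noising them to an
    intermediate step and denoising with a T2I prior, while a
    T2V temporal-only prior keeps motion consistent across frames."""
    rng = rng or np.random.default_rng(0)

    # Forward (noising) step: jump to an intermediate point of the schedule.
    t_start = int(num_steps * noise_strength)
    x = frames + rng.standard_normal(frames.shape) * (t_start / num_steps)

    # Reverse (denoising) loop with the temporal prior encapsulated inside.
    for t in range(t_start, 0, -1):
        x = t2i_denoise(x, t)            # per-frame spatial refinement
        x = t2v_temporal_denoise(x, t)   # temporal-only motion correction
    return x

if __name__ == "__main__":
    # Dummy data and identity stand-ins, just to show the call pattern.
    frames = np.zeros((8, 64, 64, 3))     # 8 placeholder frames
    identity = lambda x, t: x
    refined = refine_frames(frames, identity, identity)
    print(refined.shape)
```

In practice the two callables would wrap the pretrained T2I and T2V samplers; only the control flow (noise, then alternate spatial and temporal denoising) reflects the composition idea stated in the abstract.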
