Stand-In: A Lightweight and Plug-and-Play Identity Control for Video Generation
Bowen Xue ⋅ Zheng-Peng Duan ⋅ Qixin Yan ⋅ Wenjing Wang ⋅ Hao Liu ⋅ Chunle Guo ⋅ Chongyi Li ⋅ Chen Li ⋅ Jing LYU
Abstract
Generating high-fidelity human videos that match user-specified identities is important yet challenging in the field of generative AI. Existing methods often rely on an excessive number of training parameters and lack compatibility with other AIGC tools. In this paper, we propose $\textbf{Stand-In}$, a lightweight and plug-and-play framework for identity preservation in video generation. Specifically, we introduce a conditional image branch into the pre-trained video generation model. Identity control is achieved through restricted self-attention with conditional position mapping. Thanks to these designs, which largely preserve the pretrained prior of the video generation model, our approach outperforms other full-parameter training methods in both video quality and identity preservation, with only $\sim$1\% additional parameters and 2000 training pairs. Moreover, our framework can be seamlessly integrated into other tasks, such as subject-driven video generation, pose-referenced video generation, stylization, and face swapping. Code and dataset will be made available to the community.
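The abstract's "restricted self-attention" over a conditional image branch can be sketched as a joint attention pass in which video tokens and identity-image tokens share one attention map, but the attention is masked so that condition tokens attend only to themselves. This is a minimal NumPy illustration of that general idea, not the paper's implementation: the masking scheme, the identity projections standing in for learned $W_q/W_k/W_v$, and the function name are all assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def restricted_self_attention(video_tokens, cond_tokens, d):
    """Joint attention over concatenated video + condition (identity image) tokens.

    Assumed restriction scheme (illustrative only): video tokens may attend
    to every token, while condition tokens attend only to other condition
    tokens, so the identity branch stays unpolluted by video features.
    Identity matrices stand in for the learned q/k/v projections.
    """
    x = np.concatenate([video_tokens, cond_tokens], axis=0)  # (Tv+Tc, d)
    Tv = len(video_tokens)
    q, k, v = x, x, x  # toy projections
    scores = q @ k.T / np.sqrt(d)
    # mask: condition rows may not look at video columns
    scores[Tv:, :Tv] = -1e9
    out = softmax(scores) @ v
    return out[:Tv], out[Tv:]  # updated video tokens, condition tokens
```

Because of the mask, the condition-token outputs are independent of the video content, which matches the plug-and-play intuition: the identity branch conditions the video stream without being rewritten by it.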