

LAMP: Learn A Motion Pattern for Few-Shot Video Generation

Rui-Qi Wu · Liangyu Chen · Tong Yang · Chun-Le Guo · Chongyi Li · Xiangyu Zhang

Arch 4A-E Poster #223
Wed 19 Jun 5 p.m. PDT — 6:30 p.m. PDT

Abstract: In this paper, we present a few-shot text-to-video framework, **LAMP**, which enables a text-to-image diffusion model to **L**earn **A** specific **M**otion **P**attern from only 8$\sim$16 videos on a single GPU. Unlike existing methods, which either require large training resources or learn motions precisely aligned with template videos, LAMP strikes a trade-off between generation freedom and training cost. Specifically, we design a motion-content decoupled pipeline that uses an off-the-shelf text-to-image model for content generation, so that our tuned video diffusion model focuses mainly on motion learning. Well-developed text-to-image techniques provide visually pleasing and diverse content as generation conditions, which greatly improves video quality and generation freedom. To capture features along the temporal dimension, we expand the pre-trained 2D convolution layers of the T2I model into our novel temporal-spatial motion learning layers and extend the attention blocks to the temporal level. Additionally, we develop an effective inference trick, shared-noise sampling, which improves the stability of generated videos at no extra computational cost. Our method can also be flexibly applied to other tasks, e.g., real-world image animation and video editing. Extensive experiments demonstrate that LAMP can effectively learn motion patterns from limited data and generate high-quality videos. The code and models are available at
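The shared-noise sampling trick described above can be illustrated with a minimal NumPy sketch: each frame's initial latent noise mixes one shared Gaussian tensor with an independent per-frame tensor, so frames start from correlated noise while keeping unit variance. The function name, the mixing weight `alpha`, and the latent shape below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def shared_noise_sampling(num_frames, shape, alpha=0.5, rng=None):
    """Sketch of shared-noise sampling for video diffusion inference.

    Each frame's noise is a convex mix (in variance) of a shared component
    and an independent component:
        eps_i = sqrt(alpha) * eps_shared + sqrt(1 - alpha) * eps_i_indep
    `alpha` is a hypothetical mixing weight controlling inter-frame
    correlation; alpha=0 gives fully independent frames.
    """
    rng = np.random.default_rng(rng)
    shared = rng.standard_normal(shape)  # one noise tensor shared by all frames
    frames = [
        np.sqrt(alpha) * shared + np.sqrt(1.0 - alpha) * rng.standard_normal(shape)
        for _ in range(num_frames)
    ]
    return np.stack(frames)  # shape: (num_frames, *shape), still unit variance

# Example: 16 frames of a (4, 64, 64) latent, moderately correlated.
noise = shared_noise_sampling(16, (4, 64, 64), alpha=0.5, rng=0)
```

Because the shared component dominates low-frequency structure across frames, the denoised frames tend to agree on layout, which is why this stabilizes videos without any extra network computation.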
