

Poster

Articulated Motion Distillation from Video Diffusion Models

Xuan Li · Qianli Ma · Tsung-Yi Lin · Yongxin Chen · Chenfanfu Jiang · Ming-Yu Liu · Donglai Xiang


Abstract:

We present Articulated Motion Distillation (AMD), a framework for generating high-fidelity character animations by merging the strengths of skeleton-based animation and modern generative models. AMD uses a skeleton-based representation for rigged 3D assets, drastically reducing the degrees of freedom (DoFs) by focusing on joint-level control, which enables efficient, consistent motion synthesis. Through Score Distillation Sampling (SDS) with pre-trained video diffusion models, AMD distills complex, articulated motions while maintaining structural integrity, overcoming the difficulty that 4D neural deformation fields face in preserving shape consistency. The representation is also naturally compatible with physics-based simulation, ensuring physically plausible interactions. Experiments show that AMD achieves superior 3D consistency and more expressive motion quality than existing text-to-4D generation methods.
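The core mechanism described above, SDS on low-DoF joint parameters, can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the renderer, its Jacobian, the noise schedule, and the denoiser are all hypothetical stand-ins (a real pipeline would differentiably render the rigged asset and query a pre-trained video diffusion model for the noise prediction).

```python
import numpy as np

rng = np.random.default_rng(0)

def render(theta):
    # Placeholder "renderer": maps joint angles to a flat video feature
    # vector. A real pipeline rasterizes the skeleton-driven 3D asset.
    return np.sin(theta).repeat(4)

def render_jacobian(theta):
    # d render / d theta for the toy renderer above.
    J = np.zeros((theta.size * 4, theta.size))
    for i, t in enumerate(theta):
        J[i * 4:(i + 1) * 4, i] = np.cos(t)
    return J

def denoiser(x_noisy, t):
    # Stand-in for the pre-trained video diffusion model's noise
    # prediction eps_phi(x_t; y, t); here it just shrinks the input.
    return x_noisy * 0.1

def sds_step(theta, lr=0.1, t=0.5, w=1.0):
    x = render(theta)
    eps = rng.standard_normal(x.shape)
    alpha = 1.0 - t                      # toy noise schedule
    x_noisy = np.sqrt(alpha) * x + np.sqrt(1.0 - alpha) * eps
    eps_pred = denoiser(x_noisy, t)
    # SDS gradient: w(t) * (eps_pred - eps), back-propagated through the
    # renderer onto the low-DoF joint parameters.
    grad = render_jacobian(theta).T @ (w * (eps_pred - eps))
    return theta - lr * grad

theta = np.zeros(3)      # three joint angles: the entire optimized state
theta = sds_step(theta)
```

The key point the sketch makes concrete is the size of the optimized state: gradients from a high-dimensional video signal are funneled into a handful of joint angles, which is what keeps the motion structurally consistent.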
