
Poster

MagicArticulate: Make Your 3D Models Articulation-Ready

Chaoyue Song · Jianfeng Zhang · Xiu Li · Fan Yang · Yiwen Chen · Zhongcong Xu · Jun Hao Liew · Xiaoyang Guo · Fayao Liu · Jiashi Feng · Guosheng Lin


Abstract:

With the explosive growth of 3D content creation, there is an increasing demand for automatically converting static 3D models into articulation-ready versions that support realistic animation. Traditional approaches rely heavily on manual annotation, which is both time-consuming and labor-intensive. Moreover, the lack of large-scale benchmarks has hindered the development of learning-based solutions. In this work, we present MagicArticulate, an effective framework that automatically transforms static 3D models into articulation-ready assets. Our key contributions are threefold. First, we introduce Articulation-XL, a large-scale benchmark containing over 33k 3D models with high-quality articulation annotations, carefully curated from Objaverse-XL. Second, we propose a novel skeleton generation method that formulates the task as a sequence modeling problem, leveraging an auto-regressive transformer to naturally handle varying numbers of bones or joints within skeletons and their inherent dependencies across different 3D models. Third, we predict skinning weights using a functional diffusion process that incorporates volumetric geodesic distance priors between vertices and joints. Extensive experiments demonstrate that MagicArticulate significantly outperforms existing methods across diverse object categories, achieving high-quality articulation that enables realistic animation. We will release our dataset and model to support further research.
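To make the second contribution concrete, the sketch below shows what "skeleton generation as sequence modeling" can look like: a skeleton is flattened into a token sequence (e.g., quantized joint coordinates plus begin/end markers) and decoded autoregressively by a transformer conditioned on shape features. This is a minimal illustrative sketch, not the authors' released model; the tokenization scheme, the SkeletonTransformer class, and the shape_cond features are all assumptions.

```python
# Minimal sketch: autoregressive skeleton generation as next-token prediction.
# Assumed scheme: 256 quantized coordinate bins + 3 special tokens (BOS/EOS/PAD);
# shape_cond stands in for features from some point-cloud shape encoder.
import torch
import torch.nn as nn

class SkeletonTransformer(nn.Module):
    def __init__(self, vocab_size=259, d_model=256, n_heads=8,
                 n_layers=6, max_len=512):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, shape_cond):
        # tokens: (B, T) skeleton token ids; shape_cond: (B, S, d_model).
        B, T = tokens.shape
        pos = torch.arange(T, device=tokens.device)
        x = self.tok_emb(tokens) + self.pos_emb(pos)
        # Causal mask so each position attends only to earlier tokens.
        causal = nn.Transformer.generate_square_subsequent_mask(T).to(tokens.device)
        h = self.decoder(x, shape_cond, tgt_mask=causal)
        return self.head(h)  # next-token logits, (B, T, vocab_size)

# Toy usage: batch of 2 partial skeleton sequences, 16 shape feature tokens.
model = SkeletonTransformer()
tokens = torch.randint(0, 259, (2, 10))
shape_cond = torch.randn(2, 16, 256)
logits = model(tokens, shape_cond)  # (2, 10, 259)
```

At inference time, one would decode from a BOS token, greedily or by sampling, then de-quantize the emitted coordinates back into joints and bones; the sequence formulation is what lets a single model handle skeletons with varying numbers of bones across 3D models.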
