

CAGE: Controllable Articulation GEneration

Jiayi Liu · Hou In Ivan Tam · Ali Mahdavi Amiri · Manolis Savva

Arch 4A-E Poster #317
Thu 20 Jun 5 p.m. PDT — 6:30 p.m. PDT


We address the challenge of generating 3D articulated objects in a controllable fashion. Currently, articulated 3D objects are modeled either through laborious manual authoring or with prior methods that are hard to scale and control directly. We leverage the interplay between part shape, connectivity, and motion using a denoising diffusion-based method with attention modules designed to extract correlations between part attributes. Our method takes an object category label and a part connectivity graph as input and generates the object's geometry and motion parameters. The generated objects conform to user-specified constraints on the object category, part shape, and part articulation. Our experiments show that our method outperforms the state-of-the-art in articulated object generation, producing more realistic objects while conforming better to user constraints.
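To make the core mechanism concrete, here is a minimal toy sketch (not the authors' implementation) of the two ingredients the abstract names: self-attention over per-part attribute vectors that is restricted to edges of the input part connectivity graph, combined with one deterministic DDPM-style reverse-denoising step. All names, dimensions, and schedule values (`alpha_t`, `alpha_bar_t`) are illustrative assumptions; in the real method the noise predictor is a learned network conditioned on the category label.

```python
import numpy as np

# Toy sketch: each row of `x` is one part's attribute vector
# (e.g. a flattened shape code plus motion parameters) -- an assumption
# for illustration, not the paper's actual parameterization.
def graph_masked_attention(x, adj):
    """Self-attention where part i attends only to itself and to
    parts connected to it in the adjacency matrix `adj`."""
    d = x.shape[1]
    scores = (x @ x.T) / np.sqrt(d)
    mask = adj.astype(bool) | np.eye(len(adj), dtype=bool)
    scores = np.where(mask, scores, -np.inf)   # block non-edges
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)          # row-wise softmax
    return w @ x

def ddpm_reverse_step(x_t, eps_hat, alpha_t, alpha_bar_t):
    """One deterministic DDPM reverse step (stochastic noise term omitted)."""
    coef = (1.0 - alpha_t) / np.sqrt(1.0 - alpha_bar_t)
    return (x_t - coef * eps_hat) / np.sqrt(alpha_t)

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1],   # 3-part connectivity graph: part 0 (base)
                [1, 0, 0],   # is linked to parts 1 and 2, which are
                [1, 0, 0]])  # not linked to each other
x_t = rng.standard_normal((3, 8))           # noisy part attributes
eps_hat = graph_masked_attention(x_t, adj)  # stand-in noise predictor
x_prev = ddpm_reverse_step(x_t, eps_hat, alpha_t=0.98, alpha_bar_t=0.5)
print(x_prev.shape)  # (3, 8)
```

The graph mask is what lets user-specified connectivity constrain generation: with an edgeless graph, each part attends only to itself and the attention layer passes attributes through unchanged.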
