Particulate: Feed-Forward 3D Object Articulation
Abstract
We introduce Particulate, a feed-forward model that, given a single static 3D mesh of an everyday object, predicts its 3D parts, kinematic structure, and articulation parameters. Unlike prior work on articulated 3D object modeling, which is limited by costly per-object optimization and small retrieval databases or requires large vision or language foundation models, our approach is based on a flexible, scalable, and lightweight transformer architecture. Trained on a diverse collection of articulated 3D assets from public datasets, Particulate accurately infers the articulated structure of novel objects, including those generated by image-to-3D models, in a single feed-forward pass. We further introduce a benchmark for articulated 3D object estimation curated from high-quality public 3D assets. Quantitative and qualitative results show that Particulate significantly outperforms state-of-the-art approaches.