Poster
ZeroGrasp: Zero-Shot Shape Reconstruction Enabled Robotic Grasping
Shun Iwase · Zubair Irshad · Katherine Liu · Vitor Guizilini · Robert Lee · Takuya Ikeda · Ayako Amma · Koichi Nishiwaki · Kris Kitani · Rares Andrei Ambrus · Sergey Zakharov
Abstract:
Robotic grasping is a cornerstone capability of embodied systems. Many methods predict grasps directly from partial observations without modeling the geometry of the scene, leading to suboptimal motions and even collisions. To address these issues, we introduce ZeroGrasp, a novel framework that simultaneously performs 3D reconstruction and grasp pose prediction in near real-time. A key insight of our method is that occlusion reasoning and modeling the spatial relationships between objects are beneficial for both accurate reconstruction and grasping. We couple our method with a novel large-scale synthetic dataset, which is an order of magnitude larger than existing datasets and comprises M photo-realistic images, high-resolution 3D reconstructions, and B physically-valid grasp pose annotations for K objects from the Objaverse-LVIS dataset. We evaluate ZeroGrasp on the GraspNet-1B benchmark as well as through real-world robot experiments. ZeroGrasp achieves state-of-the-art performance and generalizes to novel real-world objects even when trained only on synthetic data.
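To make the reconstruction-then-grasping coupling described above concrete, here is a minimal, hypothetical Python sketch of such a pipeline. All names (ZeroGraspModel, reconstruct, predict_grasps, GraspPose) are illustrative assumptions, not the authors' actual API; the sketch only shows the interface implied by the abstract, where grasps are predicted against completed object geometry rather than raw partial observations.

```python
# Hypothetical sketch of a joint reconstruction-and-grasping pipeline.
# Class and method names are assumptions for illustration only.
from dataclasses import dataclass
import numpy as np


@dataclass
class GraspPose:
    """A 6-DoF parallel-jaw grasp candidate."""
    rotation: np.ndarray     # (3, 3) rotation matrix in the camera frame
    translation: np.ndarray  # (3,) position in the camera frame
    width: float             # gripper opening in meters
    score: float             # predicted grasp quality


class ZeroGraspModel:
    """Placeholder for a model coupling shape completion with grasping."""

    def reconstruct(self, rgb: np.ndarray, depth: np.ndarray) -> list[np.ndarray]:
        """Complete the full 3D geometry of each object in the scene,
        including occluded regions, from a single RGB-D view (stub)."""
        raise NotImplementedError

    def predict_grasps(self, shapes: list[np.ndarray]) -> list[GraspPose]:
        """Predict grasp poses conditioned on the completed shapes, so
        grasps account for hidden geometry and nearby objects (stub)."""
        raise NotImplementedError


def grasp_scene(model: ZeroGraspModel, rgb: np.ndarray,
                depth: np.ndarray) -> GraspPose:
    # Reconstruct first: scoring grasps against completed geometry is the
    # coupling the abstract argues reduces collisions and suboptimal motion.
    shapes = model.reconstruct(rgb, depth)
    grasps = model.predict_grasps(shapes)
    return max(grasps, key=lambda g: g.score)
```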