Poster
RoboGround: Robot Manipulation with Grounded Vision-Language Priors
Haifeng Huang · Xinyi Chen · Yilun Chen · Hao Li · Xiaoshen Han · Zehan Wang · Tai Wang · Jiangmiao Pang · Zhou Zhao
Recent advancements in robot manipulation have highlighted the potential of intermediate representations for improving policy generalization. In this work, we explore grounding masks as an effective intermediate representation, balancing two key advantages: (1) effective spatial guidance that specifies target objects and placement areas while also conveying information about object shape and size, enabling low-level policies to accurately interpret spatial information, and (2) broad generalization potential driven by large-scale vision-language models pretrained on diverse grounding datasets. We introduce RoboGround, a grounding-aware robotic policy that leverages grounding masks as an intermediate representation to guide policy networks in object manipulation tasks. To further explore and enhance generalization, we propose an automated pipeline for generating large-scale simulated data featuring a diverse set of objects and instructions. Extensive experiments show the value of our dataset and the effectiveness of grounding masks as intermediate guidance, significantly enhancing the generalization abilities of robot policies.
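To illustrate the general idea of conditioning a low-level policy on grounding masks, the sketch below shows one simple way such conditioning could be wired up: binary masks for the target object and the placement area are concatenated with the RGB observation as extra image channels before being encoded. This is a minimal, hypothetical example, not the actual RoboGround architecture; the network layers, mask encoding, and `MaskConditionedPolicy` name are all assumptions for illustration only.

```python
import torch
import torch.nn as nn


class MaskConditionedPolicy(nn.Module):
    """Illustrative policy conditioned on grounding masks (not the paper's architecture).

    The RGB observation (3 channels) is concatenated with two binary grounding
    masks -- one for the target object, one for the placement area -- giving
    the network explicit spatial guidance about what to manipulate and where
    to place it.
    """

    def __init__(self, action_dim: int = 7):
        super().__init__()
        # 3 RGB channels + 2 mask channels (target object, placement area).
        self.encoder = nn.Sequential(
            nn.Conv2d(5, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Map the pooled visual feature to a continuous action,
        # e.g. a 6-DoF end-effector delta plus a gripper command.
        self.head = nn.Sequential(
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, rgb: torch.Tensor, masks: torch.Tensor) -> torch.Tensor:
        # rgb: (B, 3, H, W); masks: (B, 2, H, W) binary grounding masks.
        x = torch.cat([rgb, masks], dim=1)
        return self.head(self.encoder(x))


if __name__ == "__main__":
    policy = MaskConditionedPolicy()
    rgb = torch.rand(1, 3, 128, 128)       # camera observation
    masks = torch.zeros(1, 2, 128, 128)    # masks from a grounding VLM (assumed upstream)
    masks[:, 0, 40:80, 40:80] = 1.0        # target-object mask
    masks[:, 1, 90:120, 20:60] = 1.0       # placement-area mask
    action = policy(rgb, masks)            # (1, 7) action vector
    print(action.shape)
```

In practice the masks would be produced by a pretrained grounding vision-language model from the language instruction, which is the source of the broad generalization the abstract refers to; how they are fused with the observation is a design choice this sketch does not attempt to pin down.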