

Poster

Object-Centric Prompt-Driven Vision-Language-Action Model for Robotic Manipulation

Xiaoqi Li · Lingyun Xu · Mingxu Zhang · Jiaming Liu · Yan Shen · Iaroslav Ponomarenko · Jiahui Xu · Liang Heng · Siyuan Huang · Shanghang Zhang · Hao Dong


Abstract:

In robotic manipulation, task goals can be conveyed through various modalities, such as language, goal images, and goal videos. However, natural language can be ambiguous, while images or videos may offer overly detailed specifications. To address these challenges, we propose a novel approach using comprehensive multi-modal prompts that explicitly convey both low-level actions and high-level planning in a simple manner. Specifically, for each key-frame in the task sequence, our method allows for manual or automatic generation of simple and expressive 2D visual prompts overlaid on RGB images. These prompts represent the required task goals, such as the end-effector pose and the desired movement direction after contact. We develop a training strategy that enables the model to interpret these visual-language prompts and predict the corresponding contact poses and movement directions in SE(3) space. Furthermore, by sequentially executing all key-frame steps, the model can complete long-horizon tasks. This approach not only helps the model explicitly understand the task objectives but also enhances its robustness on unseen tasks by providing easily interpretable prompts. We evaluate our method in both simulated and real-world environments, demonstrating its robust manipulation capabilities.
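To make the described pipeline concrete, the sketch below illustrates the inference loop the abstract outlines: for each key-frame, an RGB image with an overlaid 2D visual prompt and an instruction are fed to the model, which predicts a contact pose in SE(3) and a post-contact movement direction; the steps are then executed in sequence. This is a minimal illustration only; the class names (`PromptDrivenPolicy`), the model and robot interfaces, and the 0.05 m motion distance are all hypothetical assumptions, not the authors' implementation.

```python
import numpy as np


def se3_from_rotation_translation(R, t):
    """Assemble a 4x4 homogeneous transform in SE(3) from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T


class PromptDrivenPolicy:
    """Hypothetical wrapper: consumes an RGB frame with a 2D visual prompt overlay
    plus a language instruction, and returns a contact pose (SE(3)) and a unit
    movement direction for the current key-frame."""

    def __init__(self, model):
        self.model = model  # placeholder for a trained vision-language-action model

    def predict_keyframe_action(self, rgb_with_prompt, instruction):
        # The real model maps the overlaid visual prompt + text to the action space;
        # here we only show the expected output structure.
        rotation, translation, direction = self.model(rgb_with_prompt, instruction)
        contact_pose = se3_from_rotation_translation(rotation, translation)
        direction = direction / (np.linalg.norm(direction) + 1e-8)  # normalize to a unit direction
        return contact_pose, direction


def execute_long_horizon_task(policy, keyframes, robot):
    """Sequentially execute key-frame steps: reach the predicted contact pose,
    then move along the predicted post-contact direction."""
    for rgb_with_prompt, instruction in keyframes:
        contact_pose, direction = policy.predict_keyframe_action(rgb_with_prompt, instruction)
        robot.move_to_pose(contact_pose)             # hypothetical: drive end-effector to contact pose
        robot.move_along(direction, distance=0.05)   # hypothetical: follow the desired movement direction
```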
