

Poster

Phoenix: A Motion-based Self-Reflection Framework for Fine-grained Robotic Action Correction

Xia Wenke · Ruoxuan Feng · Dong Wang · Di Hu


Abstract:

Building a generalizable self-correction system, akin to human cognition, is crucial for robots to recover from failures. Despite advances in Multimodal Large Language Models (MLLMs) that endow robots with semantic reflection on failures, translating this semantic reflection into "how to correct" fine-grained robotic actions remains a significant challenge. To address this gap, we build the Phoenix framework, which leverages motion instructions as a bridge connecting high-level semantic reflection with low-level robotic action correction. In this motion-based self-reflection framework, we start with a dual-process motion adjustment mechanism that uses MLLMs to translate semantic reflection into coarse-grained motion instruction adjustments. To leverage these motion instructions for guiding "how to correct" fine-grained robotic actions, we propose a multi-task motion-conditioned diffusion policy that integrates visual observations for high-frequency robotic action correction. By combining these two models, we shift the demand for generalization from the low-level manipulation policy to the MLLM-driven motion refinement model, enabling precise, fine-grained robotic action correction. Building on this framework, we further develop a continual learning method that automatically improves the model's capability through interactions with dynamic environments. Experiments in both the RoboMimic simulation and real-world scenarios demonstrate the superior generalization and robustness of our framework across a variety of manipulation tasks.
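The two-stage pipeline described above (MLLM reflection producing a coarse motion instruction, then a motion-conditioned policy producing fine-grained actions) can be sketched as follows. This is an illustrative toy sketch only: the function names, the motion-instruction vocabulary, and the rule-based stand-ins for the MLLM and the diffusion policy are all assumptions for exposition, not the authors' implementation.

```python
def reflect_to_motion(failure: str) -> str:
    """Stand-in for the MLLM-driven dual-process reflection step:
    map a semantic failure description to a coarse-grained motion
    instruction. The instruction vocabulary here is hypothetical."""
    rules = {
        "gripper missed the handle": "move gripper left",
        "object slipped from grasp": "close gripper tighter",
    }
    return rules.get(failure, "retry approach")


def motion_conditioned_policy(instruction: str, observation: dict) -> list:
    """Stand-in for the multi-task motion-conditioned diffusion policy:
    expand a coarse motion instruction into a short sequence of
    fine-grained end-effector deltas (dx, dy, dz). The real policy
    would additionally condition on visual observations."""
    step = {
        "move gripper left": (-0.01, 0.0, 0.0),
        "close gripper tighter": (0.0, 0.0, 0.0),
        "retry approach": (0.0, 0.0, -0.01),
    }[instruction]
    horizon = observation.get("horizon", 5)  # action chunk length
    return [step] * horizon


# Correction loop: semantic failure -> motion instruction -> action deltas.
instr = reflect_to_motion("gripper missed the handle")
actions = motion_conditioned_policy(instr, {"horizon": 3})
```

The point of the split is the one the abstract makes: generalization pressure sits in the reflection step (here `reflect_to_motion`), while the low-level policy only needs to execute a small, fixed vocabulary of motion instructions reliably.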
