Poster

Neural Motion Simulator: Pushing the Limit of World Models in Reinforcement Learning

Chenjie Hao · Weyl Lu · Yifan Xu · Yubei Chen


Abstract:

An embodied system must not only model the patterns of the external world but also understand its own motion dynamics. A motion dynamics model is essential for efficient skill acquisition and effective planning. In this work, we introduce the Neural Motion Simulator (MoSim), a world model that predicts the future physical state of an embodied system based on current observations and actions. MoSim achieves state-of-the-art performance in physical state prediction and also delivers competitive performance across a range of downstream tasks. The model enables embodied systems to make long-horizon predictions, facilitating efficient skill acquisition in imagined environments and even enabling zero-shot reinforcement learning. Furthermore, MoSim can transform any model-free reinforcement learning (RL) algorithm into a model-based approach, effectively decoupling physical environment modeling from RL algorithm development. This separation allows for independent advances in RL algorithms and world modeling, significantly improving sample efficiency and enhancing generalization. Our findings highlight that world models for motion dynamics are a promising direction for developing more versatile and capable embodied systems.
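To make the decoupling idea concrete, here is a minimal sketch (not the authors' implementation): a stand-in dynamics model exposing a predict(obs, action) interface, plus an imagined-rollout loop in which any model-free policy gathers experience from the model rather than the real environment. All names (ToyWorldModel, imagined_rollout) and the linear toy dynamics are hypothetical placeholders for a trained neural simulator.

```python
import numpy as np

class ToyWorldModel:
    """Placeholder for a learned motion-dynamics model: maps
    (observation, action) -> predicted next observation."""

    def __init__(self, obs_dim: int, act_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.A = rng.normal(scale=0.1, size=(obs_dim, obs_dim))  # state transition
        self.B = rng.normal(scale=0.1, size=(obs_dim, act_dim))  # action effect

    def predict(self, obs: np.ndarray, action: np.ndarray) -> np.ndarray:
        # Residual next-state prediction; a neural network would replace this.
        return obs + self.A @ obs + self.B @ action

def imagined_rollout(model, policy, obs, horizon: int = 50):
    """Collect a trajectory from the world model instead of the real
    environment; this is how a model-free agent becomes model-based."""
    trajectory = []
    for _ in range(horizon):
        action = policy(obs)                   # model-free policy picks an action
        next_obs = model.predict(obs, action)  # world model predicts the next state
        trajectory.append((obs, action, next_obs))
        obs = next_obs                         # feed prediction back for long horizons
    return trajectory

# Usage: a model-free RL algorithm would train on these imagined transitions.
model = ToyWorldModel(obs_dim=4, act_dim=2)
rng = np.random.default_rng(1)
random_policy = lambda obs: rng.uniform(-1.0, 1.0, size=2)
traj = imagined_rollout(model, random_policy, obs=np.zeros(4), horizon=10)
```

Because the rollout loop only touches the model through predict, the RL algorithm and the world model can be developed and swapped independently, which is the separation the abstract describes.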
