VideoWorld 2: Learning Transferable Knowledge from Real-world Videos
Abstract
Learning transferable knowledge from unlabeled video data and applying it in new environments is a hallmark of advanced artificial intelligence. We present VideoWorld 2, which extends VideoWorld and offers the first investigation into learning transferable knowledge directly from raw real-world videos. At its core, VideoWorld 2 introduces a disentangled Latent Dynamics Model (dLDM) that decouples action dynamics from visual appearance: a pretrained video diffusion model handles appearance modeling, freeing the dLDM to learn latent codes that capture compact, meaningful task-related changes. These latent codes are then modeled autoregressively as a sequence to learn task policies and support long-horizon reasoning. We evaluate VideoWorld 2 on real-world handcraft-making video tasks, where prior video-generation and latent-dynamics models struggle to operate reliably. VideoWorld 2 improves the task success rate by over 70% and produces coherent long-horizon execution videos. In robotics, we show that VideoWorld 2 acquires effective manipulation knowledge from the Open-X dataset, substantially improving task performance on CALVIN. This study demonstrates the potential of learning transferable world knowledge directly from raw videos; all code, data, and models will be open-sourced to support further research.
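The two-stage idea in the abstract (compact discrete latents for frame-to-frame dynamics, then an autoregressive model over the latent sequence) can be illustrated with a toy sketch. Everything here is hypothetical: the vector-quantized encoder, the codebook, and the first-order transition table (standing in for a transformer policy) are illustrative stand-ins, not the paper's actual dLDM.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_dynamics(frame_t, frame_t1, codebook):
    # Hypothetical dLDM-style encoder: represent the frame-to-frame change
    # as the nearest entry of a small discrete codebook (vector quantization),
    # so the latent captures compact task-related change, not appearance.
    delta = (frame_t1 - frame_t).ravel()
    dists = np.linalg.norm(codebook - delta, axis=1)
    return int(np.argmin(dists))

def next_code_probs(history, num_codes, transition):
    # Hypothetical autoregressive policy over latent codes: a first-order
    # transition table stands in for a transformer over the code sequence.
    if not history:
        return np.full(num_codes, 1.0 / num_codes)
    return transition[history[-1]]

# Toy data: 4 codebook entries over 2x2 frames (4-dim frame deltas).
codebook = rng.normal(size=(4, 4))
frames = rng.normal(size=(5, 2, 2))
codes = [encode_dynamics(frames[i], frames[i + 1], codebook)
         for i in range(4)]

# Random stochastic transition table; each row is a distribution.
transition = rng.dirichlet(np.ones(4), size=4)
probs = next_code_probs(codes, 4, transition)
print(len(codes), probs.shape)
```

The key design point the sketch mirrors: because appearance is handled elsewhere (by the pretrained diffusion model in the paper), the latent vocabulary can stay small, which makes the autoregressive sequence model tractable for long-horizon rollouts.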