Physical Object Understanding with a Physically Controllable World Model
Abstract
A central challenge in visual intelligence is learning the physical structure of scenes from raw videos: how regions form objects and the laws that govern their interactions. Solving these tasks requires world models capable of inferring distributional states of the world from partial observations -- capabilities that current architectures do not provide. We introduce a new class of probabilistic world models that support estimating the probability of any visual variable, such as appearance or dynamics, conditioned on any others. We show that these models can be trained efficiently with autoregressive sequence modeling, yielding world models from which rich object understanding emerges. First, we demonstrate that our model captures the physical laws governing how objects move by generating multiple plausible future states of the world through sequential inference. Then, by analyzing motion correlations across these futures, we extract coherent physical objects and articulated object subparts, achieving state-of-the-art results on SpelkeBench and DragAMove. Having discovered these objects, our world model can manipulate them in 3D, emerging as the strongest performer on 3DEditBench. Finally, we show that physical relationships between objects can be computed from the world model, enabling applications such as Visual Jenga.
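The object-extraction step described above -- grouping pixels whose motion is correlated across multiple sampled futures -- can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the flow-field shapes, the cosine-similarity affinity, and the seed-plus-threshold grouping are all choices made here for exposition.

```python
import numpy as np

def motion_affinity(flows):
    """Pairwise motion correlation across sampled futures.

    flows: (K, H, W, 2) array of K sampled future flow fields
           (per-pixel 2D motion). Returns an (H*W, H*W) affinity
    matrix: cosine similarity of each pixel's centered motion
    trajectory across the K samples.
    """
    K, H, W, _ = flows.shape
    # Per pixel, stack its motion across all K futures into one vector.
    v = flows.reshape(K, H * W, 2).transpose(1, 0, 2).reshape(H * W, 2 * K)
    v = v - v.mean(axis=1, keepdims=True)          # center each trajectory
    v /= np.linalg.norm(v, axis=1, keepdims=True) + 1e-8
    return v @ v.T                                  # cosine similarities

def segment_from_seed(flows, seed, thresh=0.9):
    """Binary mask of pixels whose motion correlates with a seed pixel."""
    H, W = flows.shape[1:3]
    aff = motion_affinity(flows)
    idx = seed[0] * W + seed[1]
    return (aff[idx] > thresh).reshape(H, W)
```

Pixels belonging to one rigid object move together in every sampled future, so their trajectories correlate near 1.0, while pixels on different objects decorrelate as the futures diverge; thresholding the seed row of the affinity matrix then recovers a coherent object mask.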