HiF-VLA: Hindsight, Insight, and Foresight through Motion Representation for Vision-Language-Action Models
Abstract
Vision-Language-Action (VLA) models have recently enabled robotic manipulation by grounding visual and linguistic cues into actions. However, most VLAs assume the Markov property, conditioning only on the current observation, and thus suffer from a temporal myopia that degrades long-horizon coherence. Existing attempts to incorporate history by stacking past frames are computationally expensive and redundant. We argue that motion provides a more compact and informative representation of temporal context, capturing inter-state dynamics while filtering out static noise. Building on this idea, we propose HiF-VLA (Hindsight, Insight, and Foresight for VLAs), a unified framework that leverages motion for bidirectional temporal reasoning. HiF-VLA encodes past dynamics through hindsight priors, anticipates future motion via foresight reasoning, and integrates both through a hindsight-modulated joint expert that enables “think-while-acting” control. Extensive experiments show that HiF-VLA raises performance from 94.0\% to 96.4\% on LIBERO-Long and from 4.10 to 4.35 on CALVIN ABC-D, surpassing strong baselines. HiF-VLA also achieves substantial improvements on real-world long-horizon manipulation tasks, demonstrating its effectiveness beyond simulation.