MindDriver: Introducing Progressive Multimodal Reasoning for Autonomous Driving
Abstract
Vision-Language Models (VLMs) exhibit strong reasoning capabilities, showing promise for end-to-end autonomous driving systems. Chain-of-Thought (CoT), the widely used reasoning strategy for VLMs, faces critical challenges in this setting. Existing textual CoT suffers from a large gap between the textual semantic space and the physical space of trajectories. Although recent approaches replace text with future images in the CoT process, they lack clear planning-oriented guidance to generate images with accurate scene evolution. To address these issues, we propose MindDriver, a progressive multimodal reasoning framework that enables VLMs to imitate human-like progressive thinking for autonomous driving. MindDriver proceeds through semantic understanding, semantic-to-physical-space imagination, and physical-space trajectory planning. To achieve aligned reasoning processes in MindDriver, we develop a feedback-guided automatic data annotation pipeline that generates aligned multimodal reasoning training data. Furthermore, we develop a progressive reinforcement fine-tuning method that optimizes the alignment through progressive, high-level reward-based learning. MindDriver demonstrates superior performance in both the nuScenes open-loop and Bench2Drive closed-loop evaluations. Our trained models and code will be released upon acceptance.