Learning to Focus and Crop Precisely: A Reinforcement Learning Framework with Information Gaps and Grounding Loss for MLLMs
Abstract
To enhance the perception and reasoning capabilities of multimodal large language models (MLLMs) in complex visual scenes, recent research has introduced agent-based workflows in which MLLMs autonomously crop images to analyze regions of interest for question answering. While existing training strategies based on supervised fine-tuning (SFT) and reinforcement learning (RL) have made significant progress, our empirical analysis reveals a key limitation: when random noise is added to the cropped images, models retain most of their performance, especially those trained with RL alone, indicating a heavy reliance on the global input and only a weak dependence on details within the cropped region. To address this issue, we propose a novel two-stage reinforcement learning framework that requires no trajectory supervision. In the first stage, we introduce an "Information Gap" mechanism that adjusts the granularity of the global image; this trains the model to answer questions by focusing on cropped key regions, driven by the information gain those regions provide. The second stage further improves cropping precision by incorporating a grounding loss that uses a small number of bounding-box annotations. Experiments show that our method significantly increases the model's attention to cropped regions, enabling it to achieve state-of-the-art performance on high-resolution visual question-answering benchmarks. Our method thus offers a more efficient approach for perceiving and reasoning about fine-grained details in MLLMs.