From Failure to Feedback: Group Revision Unlocks Hard Cases in Object-Level Grounding
Abstract
Finetuning Large Vision-Language Models with reinforcement learning has emerged as a promising approach to enhancing their capability in object-level grounding. However, existing methods, mainly based on GRPO, assign rewards at the response level. Such sparse rewards provide minimal learning signal when all candidate responses fail in challenging scenarios. In this work, we propose a group-revision optimisation paradigm that enhances learning on hard cases. It begins with a sampled initial response and generates a set of revised candidates to explore improved grounding outcomes. Inspired by reward shaping, we introduce a consolidation process that quantifies each candidate's improvement over the initial attempt and converts it into informative shaping signals. These signals are used both to refine the reward and to modulate the advantage, amplifying the influence of high-quality revisions. Our method achieves consistent gains over prior GRPO-based models on referring and reasoning segmentation, REC, and counting benchmarks. Code will be released.
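The abstract's consolidation idea can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual algorithm: it assumes a scalar reward per response, measures each revised candidate's improvement over the initial attempt, adds the improvement as a shaping bonus to the reward, and then scales the GRPO-style group-normalized advantage by the same signal. The function name, the `beta` weight, and the exact shaping form are all assumptions for illustration.

```python
import numpy as np

def revision_shaped_advantages(r_init, r_revised, beta=0.5):
    """Hypothetical sketch of the consolidation step: compare each
    revised candidate's reward to the initial attempt, convert the
    improvement into a shaping signal, and use it to (a) refine the
    reward and (b) modulate the group-normalized advantage.

    r_init    : scalar reward of the sampled initial response
    r_revised : rewards of the revised candidate responses
    beta      : assumed shaping weight (not specified in the abstract)
    """
    r_revised = np.asarray(r_revised, dtype=float)
    # Improvement of each revision over the initial attempt.
    delta = r_revised - r_init
    # Shaping signal: keep only genuine improvements.
    shaping = np.maximum(delta, 0.0)
    # Refined reward = raw reward + shaping bonus.
    r_shaped = r_revised + beta * shaping
    # GRPO-style group normalization of the refined rewards.
    adv = (r_shaped - r_shaped.mean()) / (r_shaped.std() + 1e-8)
    # Modulate the advantage so high-quality revisions are amplified.
    return adv * (1.0 + beta * shaping)

# Even when the initial attempt is weak (r_init = 0.2), the best
# revision receives the largest, positively amplified advantage.
adv = revision_shaped_advantages(0.2, [0.1, 0.5, 0.9])
```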