See Less, See Right: Bi-directional Perceptual Shaping For Multimodal Reasoning
Shuoshuo Zhang ⋅ Yizhen Zhang ⋅ Jingjing Fu ⋅ Lei Song ⋅ Jiang Bian ⋅ Yujiu Yang ⋅ Rui Wang
Abstract
Large vision–language models (VLMs) often benefit from intermediate visual cues, either injected via external tools or generated as latent visual tokens during reasoning. However, these mechanisms still overlook fine-grained visual evidence (e.g., polylines in charts), generalize poorly across domains, and incur high inference-time cost. In this paper, we propose $\textbf{Bi}$-directional $\textbf{P}$erceptual $\textbf{S}$haping ($\textbf{BiPS}$), which transforms question-conditioned masked views into bi-directional where-to-look signals that shape perception during training. BiPS first applies a KL-consistency constraint between the original image and an evidence-preserving view that keeps only question-relevant regions, encouraging coarse but complete coverage of the supporting pixels. It then applies a KL-separation constraint between the original image and an evidence-ablated view in which critical pixels are masked so the image no longer supports the original answer, discouraging text-only shortcuts (i.e., answering from the text alone) and enforcing fine-grained visual reliance. Across eight benchmarks, BiPS boosts Qwen2.5-VL-7B by 8.2\% on average and shows strong out-of-domain generalization to unseen datasets and image types.
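To make the two shaping terms concrete, below is a minimal PyTorch sketch of how the KL-consistency and KL-separation constraints described in the abstract might be computed over answer-token distributions. The function name `bips_losses`, the hinge margin, and the `batchmean` reduction are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def bips_losses(logits_orig, logits_preserve, logits_ablate, margin=1.0):
    """Sketch of the two BiPS shaping terms on answer-token logits.

    logits_orig:     model logits given the original image
    logits_preserve: logits given the evidence-preserving view
                     (only question-relevant regions kept)
    logits_ablate:   logits given the evidence-ablated view
                     (critical pixels masked out)
    All tensors: (batch, vocab). The margin value is an assumption.
    """
    log_p_orig = F.log_softmax(logits_orig, dim=-1)
    p_orig = log_p_orig.exp()

    # KL-consistency: the prediction on the evidence-preserving view
    # should match the original, encouraging coarse but complete
    # coverage of the supporting pixels.
    log_p_pres = F.log_softmax(logits_preserve, dim=-1)
    l_consist = F.kl_div(log_p_pres, p_orig, reduction="batchmean")

    # KL-separation: the prediction on the evidence-ablated view should
    # diverge from the original, penalizing text-only shortcuts. A hinge
    # pushes the KL above the (assumed) margin rather than to infinity.
    log_p_abl = F.log_softmax(logits_ablate, dim=-1)
    kl_abl = F.kl_div(log_p_abl, p_orig, reduction="batchmean")
    l_separate = torch.relu(margin - kl_abl)

    return l_consist, l_separate
```

In this reading, the consistency term pulls the masked-but-sufficient view toward the original prediction, while the separation term rewards the model for changing its answer once the critical evidence is removed; together they form the bi-directional where-to-look signal.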