

ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts

Mu Cai · Haotian Liu · Siva Mustikovela · Gregory P. Meyer · Yuning Chai · Dennis Park · Yong Jae Lee

Arch 4A-E Poster #317
[ Project Page ]
Thu 20 Jun 10:30 a.m. PDT — noon PDT


While existing large vision-language multimodal models focus on whole-image understanding, there is a prominent gap in achieving region-specific comprehension. Current approaches that use textual coordinates or spatial encodings often fail to provide a user-friendly interface for visual prompting. To address this challenge, we introduce a novel multimodal model capable of decoding arbitrary (free-form) visual prompts. This allows users to intuitively mark images and interact with the model using natural cues like a red bounding box or a pointed arrow. Our simple design directly overlays visual markers onto the RGB image, eliminating the need for complex region encodings, yet achieves state-of-the-art performance on region-understanding tasks such as Visual7W, PointQA, and the Visual Commonsense Reasoning benchmark. Furthermore, we present RegionBench, a comprehensive benchmark to assess the capability of models in understanding visual prompts across multiple dimensions, enabling future research in this domain. Code and demo will be released.
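The core design described above is simple: the visual prompt is rendered directly onto the RGB image rather than encoded separately. The following is a minimal sketch of that idea, assuming a PIL-based overlay of a red bounding box; the file name, question text, and query format are illustrative assumptions, not the authors' exact interface.

```python
# Minimal sketch: draw a free-form visual prompt (here, a red bounding box)
# directly onto the RGB image, then pass the annotated image together with a
# natural-language question to the multimodal model as usual.
from PIL import Image, ImageDraw


def overlay_box(image_path: str, box: tuple[int, int, int, int],
                color: str = "red", width: int = 4) -> Image.Image:
    """Return a copy of the image with a bounding box drawn on top."""
    image = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    draw.rectangle(box, outline=color, width=width)
    return image


# Example usage (hypothetical file and question):
# prompted = overlay_box("street.jpg", (120, 80, 340, 260))
# prompted.save("street_with_prompt.jpg")
# question = "What is the object inside the red bounding box doing?"
# The annotated image and the question are then fed to the model unchanged.
```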
