

Poster

PARC: A Quantitative Framework Uncovering the Symmetries within Vision Language Models

Jenny Schmalfuss · Nadine Chang · Vibashan VS · Maying Shen · Andrés Bruhn · Jose M. Alvarez


Abstract:

Vision language models (VLMs) respond to user-crafted text prompts and visual inputs, and are applied to numerous real-world problems. VLMs integrate visual modalities with large language models (LLMs), which are well known to be prompt-sensitive. Hence, it is crucial to determine whether VLMs inherit this instability under varying prompts. We therefore investigate which prompt variations VLMs are most sensitive to, and which VLMs are most agnostic to prompt variations. To this end, we introduce PARC (Prompt Analysis via Reliability and Calibration), a VLM prompt sensitivity analysis framework built on three pillars: (1) plausible prompt variations in both the language and vision domains, (2) a novel model reliability score with built-in guarantees, and (3) a calibration step that enables prompt variation analysis spanning datasets and prompts. Regarding prompt variations, experimental results from PARC show that VLMs mirror LLM language prompt sensitivity in the vision domain, and that the most destructive variations are those that change the expected answer. Regarding models, the most robust VLMs among the 22 evaluated come from the InternVL2 family. We further find indications that prompt sensitivity is linked more closely to training data than to model size. Code and datasets will be released.
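To make the notion of prompt sensitivity concrete, the sketch below measures how a model's accuracy varies across paraphrased prompts. It is a minimal illustration only: the function names, the toy model, and the max-minus-min spread metric are assumptions for exposition, not the actual PARC reliability score or calibration procedure described in the paper.

```python
# Hypothetical sketch of a prompt-sensitivity check across prompt variants.
# The toy model and the spread metric are illustrative assumptions;
# PARC's actual reliability and calibration machinery is not reproduced here.

def evaluate_prompt_variants(model, variants, samples):
    """Return accuracy per prompt variant over (image_id, label) samples."""
    accuracies = {}
    for prompt in variants:
        correct = sum(
            model(prompt, image_id) == label for image_id, label in samples
        )
        accuracies[prompt] = correct / len(samples)
    return accuracies

def sensitivity_spread(accuracies):
    """Max-minus-min accuracy: a crude proxy for prompt sensitivity."""
    vals = list(accuracies.values())
    return max(vals) - min(vals)

# Toy "VLM" that only answers correctly for one phrasing,
# mimicking a strongly prompt-sensitive model.
def toy_vlm(prompt, image_id):
    return "cat" if "animal" in prompt else "dog"

samples = [(0, "cat"), (1, "cat")]
variants = ["What animal is shown?", "What is in the picture?"]
acc = evaluate_prompt_variants(toy_vlm, variants, samples)
print(sensitivity_spread(acc))  # 1.0 for this maximally sensitive toy model
```

A prompt-robust model would yield a spread near zero across variants; the paper's framework additionally calibrates scores so that sensitivity can be compared across datasets and prompts.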
