Doubly Right Object Recognition: A Why Prompt for Visual Rationales

Chengzhi Mao · Revant Teotia · Amrutha Sundar · Sachit Menon · Junfeng Yang · Xin Wang · Carl Vondrick

West Building Exhibit Halls ABC 259


Many visual recognition models are evaluated only on their classification accuracy, a metric for which they obtain strong performance. In this paper, we investigate whether computer vision models can also provide correct rationales for their predictions. We propose a “doubly right” object recognition benchmark, where the metric requires the model to simultaneously produce both the right labels as well as the right rationales. We find that state-of-the-art visual models, such as CLIP, often provide incorrect rationales for their categorical predictions. However, by transferring the rationales from language models into visual representations through a tailored dataset, we show that we can learn a “why prompt,” which adapts large visual representations to produce correct rationales. Visualizations and empirical experiments show that our prompts significantly improve performance on doubly right object recognition, in addition to zero-shot transfer to unseen tasks and datasets.
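The "doubly right" metric described above can be sketched in a few lines: a prediction counts as correct only when both the predicted label and the predicted rationale match the ground truth. This is an illustrative sketch, not the paper's released evaluation code; the function and variable names here are assumptions.

```python
def doubly_right_accuracy(predictions, ground_truth):
    """Fraction of examples where both the label and the rationale are right.

    predictions / ground_truth: lists of (label, rationale) pairs.
    """
    hits = sum(
        1
        for (p_label, p_rat), (g_label, g_rat) in zip(predictions, ground_truth)
        if p_label == g_label and p_rat == g_rat
    )
    return hits / len(ground_truth)


# Hypothetical example: the second prediction gets the label right but the
# rationale wrong, so it does not count under the doubly-right metric.
preds = [("cat", "pointed ears"), ("dog", "floppy ears"), ("cat", "whiskers")]
truth = [("cat", "pointed ears"), ("dog", "wagging tail"), ("cat", "whiskers")]
print(doubly_right_accuracy(preds, truth))  # 2 of 3 doubly right
```

Under this metric, a model like CLIP that predicts the right class for the wrong reason is penalized, which is exactly the gap the learned "why prompt" is meant to close.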
