

Poster

See, Say, and Segment: Correcting False Premises with LMMs

Tsung-Han Wu · Giscard Biamby · David Chan · Lisa Dunlap · Ritwik Gupta · Xudong Wang · Trevor Darrell · Joseph Gonzalez


Abstract:

Current open-source Large Multimodal Models (LMMs) excel at tasks such as open-vocabulary language grounding and segmentation, but can suffer under false premises: queries that imply the existence of something that is not actually present in the image. We observe that existing methods that fine-tune an LMM to segment images significantly degrade its ability to reliably determine ("see") whether an object is present, a form of catastrophic forgetting. In this work, we propose cascading and/or joint training of LMMs to solve this task while avoiding catastrophic forgetting of previous skills. Our resulting model can "see" by detecting whether objects are present in an image, "say" by telling the user when they are not, proposing alternative queries or correcting semantic errors in the query, and finally "segment" by outputting the mask of the desired objects if they exist. We introduce a novel dataset and benchmark of false-premise queries built on existing RefCOCO(+/g) data (which we call FP-RefCOCO(+/g)), and demonstrate that our method not only detects false premises up to 55% better than existing approaches, but also achieves relative cIoU improvements of more than 31% over baselines under false-premise conditions and produces natural-language feedback judged helpful up to 67% of the time.
