Poster

ConText-CIR: Learning from Concepts in Text for Composed Image Retrieval

Eric Xing · Pranavi Kolouju · Robert Pless · Abby Stylianou · Nathan Jacobs


Abstract:

Composed image retrieval (CIR) is the task of retrieving a target image specified by a query image and a relative text that describes a semantic modification to the query image. Existing methods in CIR struggle to accurately represent the image and the text modification, resulting in subpar performance. To address this limitation, we introduce a CIR framework, ConText-CIR, trained with a Text Concept-Consistency loss that encourages the representations of noun phrases in the text modification to better attend to the relevant parts of the query image. To support training with this loss function, we also propose a synthetic data generation pipeline that creates training data from existing CIR datasets or unlabeled images. We show that these components together enable stronger performance on CIR tasks, setting a new state-of-the-art in composed image retrieval in both the supervised and zero-shot settings on the CIRR and CIRCO datasets. Source code, model checkpoints, and our new datasets will be made available upon publication.
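To make the idea of a concept-consistency objective concrete, here is a minimal illustrative sketch, not the authors' actual Text Concept-Consistency loss. It assumes a simplified setup in which each noun phrase has one embedding, image patches have embeddings in the same space, and a binary relevance mask marks which patches a phrase should attend to; the function names and the mask supervision are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def concept_consistency_loss(phrase_emb, patch_emb, relevance):
    """Hypothetical consistency loss: each noun-phrase embedding attends
    over image patches via dot-product attention, and the loss penalizes
    attention mass that falls on patches marked irrelevant to the phrase.

    phrase_emb: (P, d) noun-phrase embeddings
    patch_emb:  (N, d) image-patch embeddings
    relevance:  (P, N) binary mask, 1 where a patch is relevant to a phrase
    """
    attn = softmax(phrase_emb @ patch_emb.T, axis=-1)   # (P, N) attention maps
    mass_on_relevant = (attn * relevance).sum(axis=-1)  # per-phrase relevant mass
    return float(-np.log(mass_on_relevant + 1e-8).mean())

# Toy check: a phrase embedding aligned with its relevant patch should
# incur a lower loss than one aligned with an irrelevant patch.
rng = np.random.default_rng(0)
patch_emb = rng.normal(size=(4, 8))
relevance = np.zeros((1, 4))
relevance[0, 0] = 1.0  # only patch 0 is relevant to the phrase

aligned = concept_consistency_loss(patch_emb[0:1] * 3.0, patch_emb, relevance)
misaligned = concept_consistency_loss(patch_emb[2:3] * 3.0, patch_emb, relevance)
```

Under this sketch, training pushes each noun phrase's attention map toward the image regions it actually describes, which is the intuition the abstract states for the Text Concept-Consistency loss.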
