Poster
CALICO: Multi-Image Pixel-Grounded Object Comparison by Parts with Large Language Models
Kiet A. Nguyen · Adheesh Juvekar · Tianjiao Yu · Muntasir Wahed · Ismini Lourentzou
Abstract:
Recent advances in Large Vision-Language Models (LVLMs) have sparked significant progress in general-purpose vision tasks through visual instruction tuning. While some works have demonstrated the capability of LVLMs to generate segmentation masks that align phrases with natural language descriptions in a single image, they struggle with segmentation-grounded comparisons across multiple images, particularly at finer granularities such as object parts. In this paper, we introduce the new task of *part-focused semantic co-segmentation*, which seeks to identify and segment common and unique objects and parts across multiple images. To address this task, we present CALICO, the first LVLM that can segment and reason over multiple masks across images, enabling object comparison based on their constituent parts. CALICO features two novel components: a Correspondence Extraction Module, which captures semantically rich information to identify part-level correspondences between objects, and a Correspondence Adaptation Module, which embeds this information into the LLM and facilitates multi-image understanding in a parameter-efficient manner. To support training and evaluation, we curate MIXEDPARTS, a comprehensive multi-image segmentation dataset containing 2.4M samples across 44K images with diverse object and part categories. Experimental results show that CALICO, with only 0.3% of its parameters finetuned, achieves robust performance in part-focused semantic co-segmentation. Code, models, and data are available at anon.link.