Poster
UNICL-SAM: Uncertainty-Driven In-Context Segmentation with Part Prototype Discovery
Dianmo Sheng · Dongdong Chen · Zhentao Tan · Qiankun Liu · Qi Chu · Tao Gong · Bin Liu · Jing Han · Wenbin Tu · Shengwei Xu · Nenghai Yu
Recent in-context segmentation generalists have achieved notable success on a variety of image segmentation tasks using only a small number of labeled example images. However, real-world applications are challenging because support examples vary in quality, often suffering from diverse acquisition sources and inaccurate labeling. Extracting robust representations from such examples remains a central goal of in-context visual learning. In response, we propose UNICL-SAM, which better models the example distribution and extracts robust representations to support in-context segmentation. We incorporate an uncertainty probabilistic module to quantify each example's reliability during both training and testing. Leveraging this uncertainty estimation, we introduce an uncertainty-guided graph augmentation and feature refinement strategy that mitigates the impact of high-uncertainty regions and strengthens the learned representations. We then construct prototypes for each example by aggregating part-level information, producing reliable in-context instructions that capture fine-grained local semantics and complement traditional global pooling features. Experimental results demonstrate the effectiveness of the proposed framework and underscore its potential for real-world applications.
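To make the idea of uncertainty-weighted part prototypes concrete, the sketch below shows one plausible way to down-weight high-uncertainty pixels when pooling a support example's features and to discover part prototypes via a simple weighted k-means. All shapes, the `part_prototypes` function, and the clustering choice are illustrative assumptions; they are not the paper's actual UNICL-SAM modules.

```python
# Hypothetical sketch: uncertainty-weighted global pooling plus part-prototype
# discovery on a single support example. Not the authors' implementation.
import torch


def part_prototypes(feats, mask, uncertainty, num_parts=4, iters=10):
    """
    feats:       (C, H, W) support-image features
    mask:        (H, W) binary foreground mask of the labeled example
    uncertainty: (H, W) per-pixel uncertainty in [0, 1]; high values are down-weighted
    Returns a global prototype (C,) and part prototypes (num_parts, C).
    """
    C, H, W = feats.shape
    x = feats.reshape(C, -1).t()                             # (H*W, C) pixel features
    fg = mask.reshape(-1) > 0.5                              # foreground pixel selector
    x_fg = x[fg]                                             # (N, C) foreground features
    w = (1.0 - uncertainty.reshape(-1)[fg]).clamp(min=1e-6)  # reliability weights (N,)

    # Uncertainty-weighted global pooling (a complement to plain average pooling).
    global_proto = (w.unsqueeze(1) * x_fg).sum(0) / w.sum()

    # Simple weighted k-means over foreground pixels to discover part prototypes.
    idx = torch.randperm(x_fg.size(0))[:num_parts]
    centers = x_fg[idx].clone()                              # (num_parts, C)
    for _ in range(iters):
        dist = torch.cdist(x_fg, centers)                    # (N, num_parts)
        assign = dist.argmin(dim=1)                          # hard part assignment per pixel
        for k in range(num_parts):
            sel = assign == k
            if sel.any():
                wk = w[sel].unsqueeze(1)
                centers[k] = (wk * x_fg[sel]).sum(0) / wk.sum()
    return global_proto, centers


if __name__ == "__main__":
    feats = torch.randn(256, 64, 64)
    mask = (torch.rand(64, 64) > 0.7).float()
    unc = torch.rand(64, 64)
    g, parts = part_prototypes(feats, mask, unc)
    print(g.shape, parts.shape)  # torch.Size([256]) torch.Size([4, 256])
```

In this sketch, reliability weights `1 - uncertainty` shrink the contribution of noisy support pixels to both the global prototype and each part center, which mirrors the abstract's goal of mitigating high-uncertainty regions when building in-context instructions.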