

Addressing Background Context Bias in Few-Shot Segmentation through Iterative Modulation

Lanyun Zhu · Tianrun Chen · Jianxiong Yin · Simon See · Jun Liu

Arch 4A-E Poster #311
Wed 19 Jun 10:30 a.m. PDT — noon PDT

Abstract: Existing few-shot segmentation methods usually extract foreground prototypes from support images to guide query image segmentation. However, differing background contexts between support and query images can cause their foreground features to be misaligned. This phenomenon, known as background context bias, can hinder the effectiveness of support prototypes in guiding query image segmentation. In this work, we propose a novel framework with an iterative structure to address this problem. In each iteration, we first generate a query prediction based on a support foreground feature. Next, we extract background context from the query image to modulate the support foreground feature, thereby eliminating the foreground feature misalignment caused by the differing backgrounds. After that, we design a confidence-biased attention mechanism to suppress noise and cleanse the propagated information. By integrating these components through an iterative structure, we create a novel network that leverages the synergies between the different modules, improving their performance in a mutually reinforcing manner. Through these carefully designed components and structures, our network can effectively eliminate background context bias in few-shot segmentation, thus achieving outstanding performance. We conduct extensive experiments on the PASCAL-$5^{i}$ and COCO-$20^{i}$ datasets and achieve state-of-the-art (SOTA) results, which demonstrate the effectiveness of our approach.
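The iterative loop described in the abstract (predict on the query, extract query background context, modulate the support prototype, repeat) can be sketched as follows. This is a minimal NumPy illustration, not the paper's actual architecture: the functions, the additive modulation with weight `alpha`, and the zero-threshold background mask are all simplifying assumptions, and the confidence-biased attention module is omitted.

```python
import numpy as np

def masked_avg_pool(feat, mask):
    # feat: (C, H, W); mask: (H, W) with values in {0, 1}.
    # Returns a (C,) prototype averaged over the masked region.
    w = mask / (mask.sum() + 1e-6)
    return (feat * w).sum(axis=(1, 2))

def cosine_pred(feat, proto):
    # Per-pixel cosine similarity between features and a prototype.
    f = feat / (np.linalg.norm(feat, axis=0, keepdims=True) + 1e-6)
    p = proto / (np.linalg.norm(proto) + 1e-6)
    return np.einsum('chw,c->hw', f, p)  # (H, W) similarity map

def iterative_modulation(sup_feat, sup_mask, qry_feat, n_iter=3, alpha=0.5):
    # Initial support foreground prototype from the annotated support mask.
    proto = masked_avg_pool(sup_feat, sup_mask)
    for _ in range(n_iter):
        sim = cosine_pred(qry_feat, proto)           # query prediction
        bg_mask = (sim < 0.0).astype(float)          # predicted query background
        bg_ctx = masked_avg_pool(qry_feat, bg_mask)  # query background context
        proto = proto + alpha * bg_ctx               # modulate the prototype
    return cosine_pred(qry_feat, proto)              # final query prediction
```

In the paper the modulation and the noise-cleansing attention are learned modules trained end to end; here the fixed additive update merely illustrates how each iteration's query prediction feeds the next round of prototype modulation.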
