

In-Context Matting

He Guo · Zixuan Ye · Zhiguo Cao · Hao Lu

Arch 4A-E Poster #343
Highlight
Wed 19 Jun 10:30 a.m. PDT — noon PDT

Abstract: We introduce in-context matting, a novel task setting for image matting. Given a reference image of a certain foreground and guided priors such as points, scribbles, and masks, in-context matting enables automatic alpha estimation on a batch of target images of the same foreground category, without additional auxiliary input. This setting marries the good performance of auxiliary input-based matting with the ease of use of automatic matting, striking a favorable trade-off between customization and automation. To overcome the key challenge of accurate foreground matching, we introduce IconMatting, an in-context matting model built upon a pre-trained text-to-image diffusion model. Conditioned on inter- and intra-similarity matching, IconMatting can make full use of reference context to generate accurate target alpha mattes. To benchmark the task, we also introduce a novel testing dataset, ICM-57, covering 57 groups of real-world images. Quantitative and qualitative results on the ICM-57 testing set show that IconMatting rivals the accuracy of trimap-based matting while retaining an automation level akin to automatic matting. Code is available at
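To illustrate the inter-similarity matching idea mentioned in the abstract, the sketch below shows one generic way such matching is commonly done: each target-image location attends to reference-image locations with similar features, and the reference alpha values are transferred by the resulting attention weights. This is a minimal NumPy sketch under assumed inputs (flattened feature maps and a reference alpha), not the authors' actual IconMatting implementation; the function name, temperature, and shapes are all illustrative assumptions.

```python
import numpy as np

def inter_similarity_transfer(target_feats, ref_feats, ref_alpha, temp=0.1):
    """Transfer a reference alpha to target locations via feature similarity.

    target_feats: (Nt, C) flattened target feature map (e.g. diffusion features)
    ref_feats:    (Nr, C) flattened reference feature map
    ref_alpha:    (Nr,)   reference alpha values in [0, 1]
    Returns a coarse (Nt,) alpha guess; a convex combination of ref_alpha.
    """
    # L2-normalize so the dot product below is cosine similarity
    t = target_feats / (np.linalg.norm(target_feats, axis=1, keepdims=True) + 1e-8)
    r = ref_feats / (np.linalg.norm(ref_feats, axis=1, keepdims=True) + 1e-8)
    sim = t @ r.T                               # (Nt, Nr) inter-similarity matrix
    w = np.exp(sim / temp)                      # sharpen with a softmax temperature
    w /= w.sum(axis=1, keepdims=True)           # softmax over reference locations
    return w @ ref_alpha                        # similarity-weighted alpha transfer
```

Because each output is a softmax-weighted average, the transferred values always stay within the range of the reference alpha; a real system would refine this coarse estimate (e.g. with intra-image similarity and a matting decoder).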
