

Poster

EFormer: Enhanced Transformer towards Semantic-Contour Features of Foreground for Portraits Matting

Zitao Wang · Qiguang Miao · Yue Xi · Peipei Zhao


Abstract:

The portrait matting task aims to extract an alpha matte with complete semantics and finely detailed contours. Compared with CNN-based approaches, transformers with self-attention modules have a stronger capacity to capture long-range dependencies and the low-frequency semantic information of a portrait. However, recent research shows that the self-attention mechanism struggles to model high-frequency contour information and capture fine contour details, which can bias the predicted portrait contours. To address this issue, we propose EFormer, which enhances the model's attention to both low-frequency semantic and high-frequency contour features. For the high-frequency contours, our research demonstrates that a cross-attention module between features of different resolutions can guide the model to allocate attention appropriately to these contour regions. Supported by this, we can successfully extract the high-frequency detail information around the portrait's contours, which self-attention previously ignored. Building on the cross-attention module, we construct a semantic and contour detector (SCD) to accurately capture both low-frequency semantic and high-frequency contour features. We further design a contour-edge extraction branch and a semantic extraction branch to extract refined high-frequency contour features and complete low-frequency semantic information, respectively. Finally, we fuse the two kinds of features and apply a segmentation head to generate the predicted portrait matte. Experiments on the VideoMatte240K (JPEG SD format) and Adobe Image Matting (AIM) datasets demonstrate that EFormer outperforms previous portrait matting methods.
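The abstract does not spell out how the cross-attention between resolutions is wired. As a rough illustration only, the sketch below shows one plausible reading: queries from higher-resolution (contour-detail) features attending to keys and values from lower-resolution (semantic) features. All module names, dimensions, and the residual wiring here are assumptions for clarity, not EFormer's actual implementation.

```python
# Minimal sketch of cross-attention between feature maps of two resolutions.
# Names, shapes, and wiring are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn


class CrossResolutionAttention(nn.Module):
    """Queries from high-resolution features attend to low-resolution features."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)

    def forward(self, hi_res: torch.Tensor, lo_res: torch.Tensor) -> torch.Tensor:
        # hi_res: (B, C, H, W) fine contour features; lo_res: (B, C, h, w) semantic features.
        B, C, H, W = hi_res.shape
        q = self.norm_q(hi_res.flatten(2).transpose(1, 2))    # (B, H*W, C)
        kv = self.norm_kv(lo_res.flatten(2).transpose(1, 2))  # (B, h*w, C)
        out, _ = self.attn(q, kv, kv)                          # cross-attention
        # Residual connection preserves the original high-frequency detail.
        out = out + hi_res.flatten(2).transpose(1, 2)
        return out.transpose(1, 2).reshape(B, C, H, W)


if __name__ == "__main__":
    block = CrossResolutionAttention(dim=64)
    hi = torch.randn(2, 64, 32, 32)   # higher-resolution contour features
    lo = torch.randn(2, 64, 16, 16)   # lower-resolution semantic features
    print(block(hi, lo).shape)        # torch.Size([2, 64, 32, 32])
```

In this reading, routing queries from the fine-resolution stream lets attention weights concentrate on contour regions that plain self-attention within a single resolution tends to smooth over.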
