

Poster

DFormerv2: Geometry Self-Attention for RGBD Semantic Segmentation

Bo-Wen Yin · Jiao-Long Cao · Ming-Ming Cheng · Qibin Hou


Abstract:

Recent advances in scene understanding benefit greatly from depth maps because of their 3D geometry information, especially in challenging conditions such as low light and overexposure. Existing approaches encode depth maps alongside RGB images and fuse the resulting features to enable more robust predictions. Considering that depth can be regarded as a geometric supplement to RGB images, a straightforward question arises: do we really need to explicitly encode depth information with neural networks, as is done for RGB images? Motivated by this question, we investigate a new way to learn RGBD feature representations and present DFormerv2, a strong RGBD encoder that explicitly uses depth maps as geometry priors rather than encoding depth information with neural networks. Our goal is to leverage a memory token as the query to extract geometry clues from the depth and spatial distances among all image patch tokens; these clues are then used as geometry priors to allocate attention weights in self-attention. Extensive experiments demonstrate that DFormerv2 exhibits exceptional performance on various RGBD semantic segmentation benchmarks.
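
As a rough illustration of the idea described above, the sketch below shows one way pairwise depth and spatial distances between patch tokens could bias self-attention weights (PyTorch). It is a minimal sketch under our own assumptions, not the authors' implementation: the module name GeometryAttention, the learnable per-head mixing weights, the simple additive prior, and the omission of the memory-token query are all illustrative choices.

    # Minimal sketch (not the authors' code) of self-attention biased by a
    # depth/spatial geometry prior. Module name, prior formula, and mixing
    # weights are illustrative assumptions.
    import torch
    import torch.nn as nn

    class GeometryAttention(nn.Module):
        def __init__(self, dim, num_heads=8):
            super().__init__()
            self.num_heads = num_heads
            self.head_dim = dim // num_heads
            self.scale = self.head_dim ** -0.5
            self.qkv = nn.Linear(dim, dim * 3)
            self.proj = nn.Linear(dim, dim)
            # Learnable per-head weights mixing the depth and spatial priors.
            self.depth_w = nn.Parameter(torch.ones(num_heads))
            self.space_w = nn.Parameter(torch.ones(num_heads))

        def forward(self, x, depth, coords):
            # x:      (B, N, C)  RGB patch tokens
            # depth:  (B, N)     per-patch depth values (e.g., pooled depth map)
            # coords: (N, 2)     normalized 2D patch-center coordinates
            B, N, C = x.shape
            qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
            q, k, v = qkv.permute(2, 0, 3, 1, 4)           # each (B, H, N, d)

            attn = (q @ k.transpose(-2, -1)) * self.scale   # (B, H, N, N)

            # Geometry prior: pairwise depth differences and spatial distances.
            d_dist = (depth[:, :, None] - depth[:, None, :]).abs()   # (B, N, N)
            s_dist = torch.cdist(coords, coords)                     # (N, N)
            prior = (self.depth_w.view(1, -1, 1, 1) * d_dist[:, None]
                     + self.space_w.view(1, -1, 1, 1) * s_dist[None, None])
            # Subtracting the prior gives geometrically closer patches larger weights.
            attn = (attn - prior).softmax(dim=-1)

            out = (attn @ v).transpose(1, 2).reshape(B, N, C)
            return self.proj(out)

In this reading, the depth map is never passed through its own encoder; it only shapes the attention distribution of the RGB tokens, which is the "geometry prior" role the abstract describes.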
