

Poster

Comprehensive and Delicate: An Efficient Transformer for Image Restoration

Haiyu Zhao · Yuanbiao Gou · Boyun Li · Dezhong Peng · Jiancheng Lv · Xi Peng

West Building Exhibit Halls ABC 169

Abstract:

Vision Transformers have shown promising performance in image restoration, but they usually resort to window- or channel-based attention to avoid intensive computation. Although such designs achieve strong results, they run counter to a key strength of Transformers by capturing local rather than global dependency among pixels. In this paper, we propose a novel efficient image restoration Transformer that first captures superpixel-wise global dependency and then transfers it to each pixel. This coarse-to-fine paradigm is implemented through two neural blocks, i.e., the condensed attention neural block (CA) and the dual adaptive neural block (DA). In brief, CA performs feature aggregation, attention computation, and feature recovery to efficiently capture global dependency at the superpixel level. To endow each pixel with global dependency, DA adopts a novel dual-way structure that adaptively encapsulates the globality from superpixels into pixels. Thanks to these two neural blocks, our method achieves comparable performance while requiring only ~6% of the FLOPs of SwinIR.
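Below is a minimal PyTorch sketch of the coarse-to-fine paradigm described above: aggregate pixels into superpixel tokens, compute global attention among the tokens, recover the result to the pixel grid, then adaptively fuse it into the pixel features. This is not the authors' code; the pooling size, the use of average pooling for aggregation, nn.MultiheadAttention for the superpixel attention, nearest-neighbor upsampling for recovery, and the sigmoid-gated fusion are all assumptions introduced for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CondensedAttention(nn.Module):
    """Global attention at superpixel resolution (hypothetical sketch of CA)."""
    def __init__(self, dim, num_heads=4, pool=8):
        super().__init__()
        self.pool = pool  # assumed superpixel size: pool x pool pixels per token
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):  # x: (B, C, H, W)
        B, C, H, W = x.shape
        # Feature aggregation: condense pixels into superpixel tokens.
        s = F.adaptive_avg_pool2d(x, (H // self.pool, W // self.pool))
        tokens = s.flatten(2).transpose(1, 2)  # (B, N, C) with N = (H/p)*(W/p)
        # Attention computation: global dependency among all superpixels.
        tokens, _ = self.attn(*(self.norm(tokens),) * 3)
        # Feature recovery: broadcast superpixel context back onto the pixel grid.
        s = tokens.transpose(1, 2).reshape(B, C, H // self.pool, W // self.pool)
        return F.interpolate(s, size=(H, W), mode="nearest")

class DualAdaptive(nn.Module):
    """Adaptively injects superpixel globality into pixels (hypothetical sketch of DA)."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Conv2d(2 * dim, dim, kernel_size=1)

    def forward(self, x, g):  # x: pixel features, g: recovered global features
        # Learned per-pixel gate decides how much global context each pixel absorbs.
        gate = torch.sigmoid(self.gate(torch.cat([x, g], dim=1)))
        return x + gate * g

# Usage: one coarse-to-fine pass over a feature map.
feats = torch.randn(1, 64, 64, 64)
ca, da = CondensedAttention(64), DualAdaptive(64)
out = da(feats, ca(feats))
print(out.shape)  # torch.Size([1, 64, 64, 64])

The efficiency claim follows from this structure: attention cost is quadratic in token count, so attending over (H/p)*(W/p) superpixels instead of H*W pixels cuts the attention FLOPs by roughly p^4 while the recovery and fusion steps remain linear in the number of pixels.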
