

Poster

ADD: A General Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution

Zeyu Mi · Yu-Bin Yang


Abstract: Data augmentation (DA) stands out as a powerful technique for enhancing the generalization capabilities of deep neural networks across diverse tasks. However, in low-level vision tasks, DA remains rudimentary (i.e., vanilla DA) and faces a critical bottleneck due to information loss. In this paper, we introduce a novel Calibrated Attribution Map (CAM) to generate saliency masks, together with two saliency-based DA methods, ADD and ADD+, designed to address this issue. CAM leverages integrated gradients and incorporates two key innovations: a global feature detector and calibrated integrated gradients. Based on CAM and the proposed methods, we highlight two key insights for low-level vision tasks: (1) increasing pixel diversity, as in vanilla DA, can improve performance, and (2) focusing on salient features while minimizing the impact of irrelevant pixels, as in saliency-based DA, enhances model performance more effectively. Additionally, we propose two guiding principles for designing saliency-based DA: coarse-grained partitioning and diverse augmentation strategies. Extensive experiments demonstrate that our method is compatible with various SR tasks and networks and delivers consistent, significant performance improvements.
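The abstract states that CAM builds on integrated gradients to produce saliency masks. The specific CAM components (the global feature detector and the calibration of the integrated gradients) are not detailed here, so the following is only a generic sketch of plain integrated gradients with a Riemann-sum approximation, applied to a toy differentiable function standing in for the SR model's scalar output; the function, threshold, and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Approximate IG_i(x) = (x_i - b_i) * integral_0^1 dF/dx_i(b + a*(x-b)) da
    with a midpoint Riemann sum. grad_fn returns the gradient of a
    scalar-valued model output w.r.t. its input."""
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints of [0, 1]
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_fn(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Toy stand-in for the model output: F(x) = sum(x**2), so grad F = 2x.
grad_fn = lambda x: 2.0 * x

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros_like(x)          # common IG choice: all-zeros baseline
attr = integrated_gradients(grad_fn, x, baseline)

# Completeness property: attributions sum to F(x) - F(baseline).
assert np.isclose(attr.sum(), (x**2).sum())

# A simple (hypothetical) way to turn attributions into a binary
# saliency mask, e.g. for partitioning pixels during augmentation:
mask = np.abs(attr) > np.abs(attr).mean()
```

For this quadratic toy function the midpoint rule is exact, so `attr` equals `x**2` elementwise; in practice `grad_fn` would be a backward pass through the SR network and the mask granularity would follow the paper's coarse-grained partitioning principle.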
