

Robust Image Denoising through Adversarial Frequency Mixup

Donghun Ryou · Inju Ha · Hyewon Yoo · Dongwan Kim · Bohyung Han

Arch 4A-E Poster #250
Wed 19 Jun 10:30 a.m. PDT — noon PDT


Image denoising approaches based on deep neural networks often overfit to the specific noise distributions present in their training data. This problem persists in existing real-world denoising networks, which are trained on a limited spectrum of real noise distributions and therefore show poor robustness to out-of-distribution real noise types. To alleviate this issue, we develop a novel training framework called Adversarial Frequency Mixup (AFM). AFM leverages mixup in the frequency domain to generate noisy images with distinctive and challenging noise characteristics, all the while preserving the properties of authentic real-world noise. Incorporating these noisy images into the training pipeline then enhances the denoising network's robustness to variations in noise distributions. Extensive experiments and analyses conducted on a wide range of real noise benchmarks demonstrate that denoising networks trained with our proposed framework exhibit significant improvements in robustness to unseen noise distributions. Code is available at
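The core idea of frequency-domain mixup can be sketched as follows. This is an illustrative toy, not the paper's implementation: in AFM the mixup coefficients are produced adversarially, whereas here a hypothetical per-frequency mask `alpha` is simply given, and the noisy/clean pair and image sizes are made up for the example.

```python
import numpy as np

def frequency_mixup(noisy, clean, alpha):
    """Blend the 2D spectra of a noisy image and its clean counterpart
    with per-frequency coefficients alpha in [0, 1].

    Sketch only: the paper's AFM obtains the coefficients adversarially;
    here they are supplied by the caller. Taking the real part after the
    inverse FFT discards any small imaginary residue introduced by an
    asymmetric mask.
    """
    F_noisy = np.fft.fft2(noisy)
    F_clean = np.fft.fft2(clean)
    F_mix = alpha * F_noisy + (1.0 - alpha) * F_clean
    return np.fft.ifft2(F_mix).real

# Toy usage: a flat grayscale image corrupted by Gaussian noise.
rng = np.random.default_rng(0)
clean = np.full((32, 32), 0.5)
noisy = clean + 0.1 * rng.standard_normal((32, 32))
alpha = rng.uniform(0.0, 1.0, size=(32, 32))  # hypothetical mixup mask
mixed = frequency_mixup(noisy, clean, alpha)
```

Mixing in the frequency domain (rather than pixel space) lets the mask reshape the noise's spectral profile, producing noise statistics that differ from the training distribution while staying anchored to real noise.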
