

Poster

Learning Hazing to Dehazing: Towards Realistic Haze Generation for Real-World Image Dehazing

Ruiyi Wang · Yushuo Zheng · Zicheng Zhang · Chunyi Li · Shuaicheng Liu · Guangtao Zhai · Xiaohong Liu


Abstract:

Existing real-world image dehazing methods typically fine-tune pre-trained models or adapt their inference procedures, and thus rely heavily on the quality of the pre-training data. Although generative diffusion models show promise for restoring heavily degraded content, their use in dehazing remains limited by lengthy sampling schedules and fidelity constraints. To address these challenges, we propose a two-stage hazing-to-dehazing pipeline that integrates the Realistic Haze Generation Framework (HazeGen) and the Diffusion-based Dehazing Framework (DiffDehaze). Specifically, HazeGen exploits the rich generative prior of real-world hazy images embedded in a pre-trained text-to-image diffusion model and leverages IRControlNet for conditional generation. To further improve haze authenticity and generation diversity, HazeGen employs hybrid training and blended sampling to produce high-quality training data for DiffDehaze. To harness generative capacity while retaining efficiency, DiffDehaze adopts the Accelerated Fidelity-Preserving Sampling strategy (AccSamp). At its core, a Patch-based Statistical Alignment Operation (AlignOp) quickly produces a faithful dehazing estimate within a few sampling steps, which both shortens the sampling schedule and enables haze density-aware fidelity guidance. Extensive visual comparisons and quantitative evaluations demonstrate that our approach outperforms existing methods in dehazing performance and visual quality. The code will be made publicly available.
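The abstract does not spell out how AlignOp works, so the following is only a minimal sketch of one plausible reading: a patch-based statistical alignment that renormalizes each patch of a rough dehazed estimate to match the mean and standard deviation of the corresponding patch in a reference image. The function name, patch size, and the choice of reference are all assumptions for illustration, not the authors' exact operation.

```python
import torch


def align_patch_statistics(estimate: torch.Tensor,
                           reference: torch.Tensor,
                           patch: int = 16,
                           eps: float = 1e-6) -> torch.Tensor:
    """Match per-patch mean/std of `estimate` to those of `reference`.

    Both tensors are (B, C, H, W); H and W are assumed divisible by
    `patch`. This is an illustrative patch-wise statistical alignment,
    not the paper's exact AlignOp.
    """
    b, c, h, w = estimate.shape
    # Split into non-overlapping patches: (B, C, H/p, W/p, p, p).
    est = estimate.view(b, c, h // patch, patch, w // patch, patch)
    est = est.permute(0, 1, 2, 4, 3, 5)
    ref = reference.view(b, c, h // patch, patch, w // patch, patch)
    ref = ref.permute(0, 1, 2, 4, 3, 5)

    # First- and second-order statistics of each patch.
    est_mu = est.mean(dim=(-2, -1), keepdim=True)
    est_sd = est.std(dim=(-2, -1), keepdim=True)
    ref_mu = ref.mean(dim=(-2, -1), keepdim=True)
    ref_sd = ref.std(dim=(-2, -1), keepdim=True)

    # Whiten with the estimate's statistics, re-color with the reference's.
    aligned = (est - est_mu) / (est_sd + eps) * ref_sd + ref_mu

    # Fold the patches back into an image of shape (B, C, H, W).
    return aligned.permute(0, 1, 2, 4, 3, 5).reshape(b, c, h, w)


# Hypothetical usage: align an early diffusion prediction to the input.
coarse = torch.rand(1, 3, 256, 256)  # stand-in for a few-step estimate
hazy = torch.rand(1, 3, 256, 256)    # stand-in for the observed image
faithful = align_patch_statistics(coarse, hazy)
```

Because the alignment is a closed-form per-patch normalization, it adds negligible cost on top of the few diffusion steps it complements, which is consistent with the efficiency claim for AccSamp.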
