From Events to Clarity: The Event-Guided Diffusion Framework for Dehazing
Ling Wang ⋅ Yunfan Lu ⋅ Wenzong Ma ⋅ Huizai Yao ⋅ Pengteng Li ⋅ Hui Xiong
Abstract
Clear imaging under hazy conditions is a critical task. Prior-based and learning-based methods have improved results; however, they operate on RGB frames, whose limited dynamic range leaves dehazing ill-posed and can erase structural and illumination details. To address this, we use event cameras for dehazing for the \textbf{first time}. Event cameras offer a much higher dynamic range ($120$\,dB vs. $60$\,dB) and microsecond latency, making them well suited to hazy scenes. In practice, transferring HDR cues from events to frames is difficult because real paired data are scarce. To tackle this, we propose an event-guided diffusion model that exploits the strong generative priors of diffusion models to reconstruct clear images from hazy inputs by transferring HDR information from events. Specifically, we design an event-guided module that maps sparse HDR event features, \textit{e.g.}, edges and corners, into the diffusion latent space. This conditioning provides precise structural guidance during generation, improves visual realism, and reduces semantic drift. For real-world evaluation, we collect a drone dataset under heavy haze (AQI = 341) with synchronized RGB and event sensors. Experiments on two benchmarks and our dataset achieve state-of-the-art results.
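To make the conditioning idea concrete, below is a minimal PyTorch sketch of an event-guided module that encodes an event voxel grid and injects it into a diffusion latent. This is an illustration only, not the authors' implementation: the module name, the voxel-grid input format, the 1/8-resolution latent shape, and the additive fusion with a zero-initialized projection are all assumptions.

```python
# Minimal sketch (assumed design, not the paper's code): encode an event
# voxel grid into the diffusion latent space and add it as structural guidance.
import torch
import torch.nn as nn


class EventGuidedModule(nn.Module):
    """Map sparse HDR event features (e.g., edges, corners) into the
    diffusion latent space and fuse them with the hazy-image latent."""

    def __init__(self, event_bins: int = 5, latent_dim: int = 4):
        super().__init__()
        # Encode an event voxel grid (B, event_bins, H, W) down to
        # latent_dim channels at 1/8 resolution, matching a typical
        # latent-diffusion latent.
        self.encoder = nn.Sequential(
            nn.Conv2d(event_bins, 32, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(64, latent_dim, 3, stride=2, padding=1),
        )
        # Zero-initialized projection so the guidance starts as a no-op
        # and does not disturb the pretrained diffusion prior early on.
        self.proj = nn.Conv2d(latent_dim, latent_dim, 1)
        nn.init.zeros_(self.proj.weight)
        nn.init.zeros_(self.proj.bias)

    def forward(self, latent: torch.Tensor, events: torch.Tensor) -> torch.Tensor:
        # latent: (B, latent_dim, H/8, W/8) noisy latent of the hazy image.
        # events: (B, event_bins, H, W) event voxel grid.
        guidance = self.proj(self.encoder(events))
        return latent + guidance  # additive structural guidance


if __name__ == "__main__":
    module = EventGuidedModule()
    latent = torch.randn(1, 4, 32, 32)
    events = torch.randn(1, 5, 256, 256)
    print(module(latent, events).shape)  # torch.Size([1, 4, 32, 32])
```

The zero-initialized projection mirrors a common practice in conditioned diffusion models (e.g., ControlNet-style adapters), letting training gradually blend event structure into generation; the actual fusion mechanism in the paper may differ.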