Poster
Diffusion-based Event Generation for High-Quality Image Deblurring
Xinan Xie · Qing Zhang · Wei-Shi Zheng
While event-based deblurring methods have demonstrated impressive results, they are impractical for consumer photos captured by cell phones and digital cameras, which are not equipped with event sensors. To address this issue, we propose a novel deblurring framework, Event Generation Deblurring (EGDeblurring), which effectively deblurs an image by using a diffusion model to generate event guidance that describes the motion information. Specifically, we design a Motion Prior Generation Diffusion Model (MPG-Diff) and a Feature Extractor (FE) that produce prior information beneficial for the deblurring task, rather than generating the raw event representation. To effectively fuse the motion prior with the blurry image and produce high-quality results, we propose a Regression Deblurring Network (RDNet) embedded with a Dual-Branch Fusion Block (DBFB) that incorporates a multi-branch cross-modal attention mechanism. Experiments on multiple datasets demonstrate that our method outperforms state-of-the-art methods.
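The abstract does not include implementation details of the DBFB. As a rough illustration of the cross-modal attention idea it describes, the following is a minimal single-head sketch in which blurry-image features query the generated motion-prior features; all function names, shapes, and the residual-fusion choice are our assumptions, not the authors' actual design:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(img_feat, prior_feat):
    """Hypothetical single-head cross-modal attention (not the authors' DBFB).

    img_feat:   (N, d) tokens from the blurry-image branch
    prior_feat: (M, d) tokens from the generated motion prior
    """
    d = img_feat.shape[-1]
    # Queries come from the image; keys/values come from the motion prior,
    # so each image token gathers relevant motion information.
    q, k, v = img_feat, prior_feat, prior_feat
    attn = softmax(q @ k.T / np.sqrt(d), axis=-1)  # (N, M) attention weights
    return img_feat + attn @ v  # residual fusion of prior into image features

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 32))    # toy image tokens
prior = rng.standard_normal((8, 32))   # toy motion-prior tokens
fused = cross_modal_attention(img, prior)
print(fused.shape)  # (16, 32): same token layout as the image branch
```

In a real multi-branch variant, several such attention heads (or branches with different projections) would run in parallel and be merged, but the single-head form above captures the core fusion mechanism.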