Event-Based Motion Deblurring Using Task-Oriented 3D Gaussian Event Representations
Abstract
Event-based motion deblurring has attracted increasing attention because the high temporal resolution of event cameras provides motion cues unavailable to RGB sensors, enabling stronger deblurring. In real-world scenes, motion blur is often complex and nonlinear, with different regions exhibiting diverse speeds and directions. However, most existing approaches rely on handcrafted event representations that overlook this spatiotemporal motion heterogeneity, resulting in suboptimal deblurring performance. To address this issue, we design a learnable 3D Gaussian event representation module that adaptively selects key spatiotemporal coordinates beneficial for deblurring, guided by the distributions of the blurred image and the event density, and aggregates the event stream with a 3D Gaussian weighting kernel, thereby extracting local motion features sensitive to motion direction and velocity. In addition, to fully exploit the motion information aggregated in this representation, we employ a two-stage fusion strategy: the first stage uses the local motion features to enhance detail restoration, and the second applies a bidirectional attention fusion module that leverages one-dimensional Gaussian-weighted event frames for global position correction, precisely aligning the overall structure. Extensive experiments on synthetic and real-world datasets validate the effectiveness of our approach, which substantially outperforms state-of-the-art methods.
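To make the core idea of the representation concrete, the following is a minimal sketch of aggregating an event stream with a 3D Gaussian weighting kernel around learnable spatiotemporal centers. It is not the authors' implementation; the module name `Gaussian3DEventRepr` and parameters `centers` and `log_sigmas` are illustrative assumptions, and the adaptive coordinate selection driven by the blurred image and event density is omitted.

```python
# Minimal sketch (not the authors' code): weight each event (x, y, t, polarity)
# by a 3D Gaussian centered at a learnable spatiotemporal coordinate, then
# scatter the weighted contributions into per-kernel spatial feature maps.
import torch
import torch.nn as nn


class Gaussian3DEventRepr(nn.Module):
    def __init__(self, num_kernels: int, height: int, width: int):
        super().__init__()
        # Learnable spatiotemporal centers (x, y, t), normalized to [0, 1].
        self.centers = nn.Parameter(torch.rand(num_kernels, 3))
        # Learnable per-kernel, per-axis bandwidths (log-scale for positivity).
        self.log_sigmas = nn.Parameter(torch.zeros(num_kernels, 3))
        self.height, self.width = height, width

    def forward(self, events: torch.Tensor) -> torch.Tensor:
        # events: (N, 4) rows of (x, y, t, polarity), with x/y/t in [0, 1].
        coords = events[:, :3]                          # (N, 3)
        polarity = events[:, 3]                         # (N,)
        sigmas = self.log_sigmas.exp()                  # (K, 3)
        # Normalized distance of every event to every Gaussian center.
        diff = (coords[None, :, :] - self.centers[:, None, :]) / sigmas[:, None, :]
        weights = torch.exp(-0.5 * (diff ** 2).sum(dim=-1))  # (K, N)
        # Scatter polarity-weighted contributions into K spatial feature maps.
        px = (events[:, 0] * (self.width - 1)).long().clamp(0, self.width - 1)
        py = (events[:, 1] * (self.height - 1)).long().clamp(0, self.height - 1)
        flat = py * self.width + px                     # (N,) flat pixel indices
        feat = torch.zeros(weights.shape[0], self.height * self.width,
                           device=events.device)
        feat.index_add_(1, flat, weights * polarity[None, :])
        return feat.view(-1, self.height, self.width)   # (K, H, W) motion features


# Example usage with random events:
events = torch.rand(1000, 4)
events[:, 3] = torch.where(events[:, 3] > 0.5, 1.0, -1.0)  # polarities in {+1, -1}
features = Gaussian3DEventRepr(num_kernels=8, height=64, width=64)(events)
```

Because each kernel's center and per-axis bandwidths are learnable, a kernel can stretch along a particular spatiotemporal direction, which is what makes the resulting local features sensitive to motion direction and velocity.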