OMoBlur: An Object Motion Blur Dataset and Benchmark for Real-World Local Motion Deblurring
Abstract
Object motion blur in otherwise static scenes is spatially heterogeneous: it differs from conventional global deblurring problems yet occurs frequently in real handheld capture. Existing datasets either rely on costly beam-splitting capture, which leaves residual misalignment, or employ synthetic blur that fails to model the continuous photon-integration process during exposure. To overcome these limitations, we introduce OMoBlur, a physically grounded dataset that emulates realistic exposure integration via programmable sensor control, ensuring close alignment between synthetic and real blur distributions. OMoBlur provides 20,000 blur–sharp–mask triplets covering diverse object motion types. Leveraging this dataset, we further propose OMDNet, an object-motion-aware deblurring network that integrates a Motion–Appearance Extract Block, a Flow-Guided Gate Predictor, and an Adaptive Gated Fusion mechanism. This design enables the network to selectively restore blurred regions while preserving static backgrounds, without requiring pixel-accurate mask annotations. Extensive experiments demonstrate that OMoBlur’s physically faithful acquisition and large-scale diversity substantially improve generalization to real-world motion blur, establishing OMoBlur and OMDNet as a robust benchmark and practical solution for local motion deblurring. The dataset and code will be publicly released.