F^2HDR: Two-Stage HDR Video Reconstruction via Flow Adapter and Physical Motion Modeling
Huanjing Yue ⋅ Dawei Li ⋅ Shaoxiong Tu ⋅ Jingyu Yang
Abstract
Reconstructing High Dynamic Range (HDR) videos from sequences of alternating-exposure Low Dynamic Range (LDR) frames remains highly challenging, especially in dynamic scenes, where cross-exposure inconsistencies and complex motion make inter-frame alignment difficult and lead to ghosting and detail loss. Existing methods often suffer from inaccurate alignment, suboptimal feature aggregation, and degraded reconstruction quality in motion-dominated regions. To address these challenges, we propose $\text{F}^2\text{HDR}$, a two-stage HDR video reconstruction framework that robustly perceives inter-frame motion and restores fine details in complex dynamic scenarios. The proposed framework integrates a flow adapter that adapts generic optical flow for robust cross-exposure alignment, a motion mask that identifies salient motion regions to guide ghosting and noise suppression, and a motion-aware refinement network that aggregates complementary information for coherent detail reconstruction. Extensive experiments demonstrate that $\text{F}^2\text{HDR}$ achieves state-of-the-art performance on real-world HDR video benchmarks, producing ghost-free and high-fidelity results under large motion and exposure variations. Code will be released upon acceptance.
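To make the abstract's two-stage structure concrete, the following is a minimal numpy sketch of a generic alternating-exposure pipeline of this kind: exposure normalization, flow-based alignment of a neighboring frame to the reference, a residual-based motion mask, and mask-guided fusion. All function names, the nearest-neighbor warping, and the masked-average "refinement" are illustrative assumptions; they stand in for, and do not reproduce, the paper's flow adapter and motion-aware refinement network.

```python
import numpy as np

def exposure_normalize(ldr, exposure_time, gamma=2.2):
    # Map an LDR frame into the linear radiance domain so that frames
    # captured with different exposures become directly comparable
    # (a standard step in alternating-exposure HDR pipelines).
    return np.power(ldr, gamma) / exposure_time

def warp_with_flow(frame, flow):
    # Backward-warp a grayscale frame with a dense flow field using
    # nearest-neighbor sampling; a real system would use bilinear
    # sampling and a learned, exposure-adapted flow.
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    return frame[src_y, src_x]

def motion_mask(ref, aligned, threshold=0.1):
    # Salient-motion mask: large residuals after alignment flag
    # regions that are prone to ghosting if fused naively.
    return (np.abs(ref - aligned) > threshold).astype(np.float32)

def reconstruct_hdr(ref_ldr, nbr_ldr, flow, ref_exp, nbr_exp):
    # Stage 1: align the neighboring exposure to the reference frame.
    ref_lin = exposure_normalize(ref_ldr, ref_exp)
    nbr_lin = exposure_normalize(nbr_ldr, nbr_exp)
    aligned = warp_with_flow(nbr_lin, flow)
    # Stage 2 (placeholder): mask-guided fusion. The paper's
    # motion-aware refinement network is replaced here by a simple
    # rule: trust only the reference in motion regions, average
    # elsewhere.
    mask = motion_mask(ref_lin, aligned)
    fused = mask * ref_lin + (1.0 - mask) * 0.5 * (ref_lin + aligned)
    return fused
```

With zero flow, equal exposures, and identical frames, the mask is empty and the output reduces to the normalized reference frame, which is the expected no-motion behavior of any such fusion rule.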