RawMetaDiff: Unlocking Extreme Darkness from Dual-Exposure RAW with Meta-Guided Diffusion
Abstract
Extreme low-light Raw image restoration remains challenging due to overwhelming noise and severe detail loss. In this paper, we exploit the potential of the dual-exposure setting for this severely ill-posed problem. Existing methods suffer from unreliable cross-exposure alignment, resulting in degraded detail recovery and compromised color fidelity. To address these challenges, we propose RawMetaDiff, a novel generative diffusion framework that restores a high-fidelity Raw image from a short-exposure input, conditioned on a potentially misaligned long-exposure reference under the guidance of Raw metadata. At its core, we propose two complementary mechanisms: Meta-Assistant Color Transfer (MACT) enforces color consistency by aligning global color statistics along the channel dimension, while Meta-Normed Cross Attention (MNCA) leverages Raw metadata to establish robust cross-exposure spatial correspondences and inject shadow details. To support robust diffusion training, we first collect a 1K real-world dual-exposure Raw dataset, namely DERaw, and then design a realistic degradation model to synthesize data that closely approximates real-world conditions. Extensive experiments on both synthetic and real-world datasets demonstrate that RawMetaDiff significantly outperforms existing methods, establishing an effective new solution for extreme low-light Raw image restoration from a generative perspective.
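For intuition, a minimal sketch of what aligning global color statistics along the channel dimension can look like is given below. This is an AdaIN-style illustration under our own assumptions, not the paper's actual MACT module: the function name, the 4-channel RGGB packing, and the use of per-channel mean/std as the "global color statistics" are all hypothetical choices for the example.

```python
import torch

def channelwise_color_transfer(x, ref, eps=1e-6):
    """Align per-channel color statistics of a short-exposure tensor `x`
    to those of a long-exposure reference `ref` (both N x C x H x W).

    Statistics are pooled over the spatial dimensions, so the transfer
    depends only on global per-channel color, not on pixel alignment,
    making it robust to spatial misalignment between the two exposures.
    """
    # Per-channel mean/std over spatial dimensions.
    mu_x = x.mean(dim=(2, 3), keepdim=True)
    std_x = x.std(dim=(2, 3), keepdim=True)
    mu_r = ref.mean(dim=(2, 3), keepdim=True)
    std_r = ref.std(dim=(2, 3), keepdim=True)

    # Whiten `x` channel-wise, then re-color with the reference statistics.
    return (x - mu_x) / (std_x + eps) * std_r + mu_r


# Usage: transfer the color statistics of a long-exposure Raw tensor onto
# a noisy short-exposure one, both packed as 4-channel RGGB (assumed layout).
short = torch.rand(1, 4, 256, 256)
long_ref = torch.rand(1, 4, 256, 256)
out = channelwise_color_transfer(short, long_ref)
```

Because the statistics are global, this kind of transfer moves color information across exposures without requiring the dense spatial correspondence that, in the paper, is instead handled by the cross-attention mechanism (MNCA).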