Efficient Real-Time Raw-to-Raw Denoising for Extreme Low-Light Ultra HD Video on Mobile Devices
Charantej Reddy Pochimireddy ⋅ Subhasmita Sahoo ⋅ Apoorva Verma ⋅ Palavalli Shyam ⋅ Swapnil Malviya ⋅ Sarvesh Sarvesh ⋅ Raj Narayana Gadde
Abstract
Recent advancements in deep neural networks (DNNs) have significantly improved the visual quality of camera captures under low-light ($<$10 lx) conditions, yet visual quality in extreme low-light ($<$1 lx) remains inadequate. Existing DNN models are computationally intensive and suffer from long processing times, making them impractical for real-time enhancement of high-resolution video. Consequently, Ultra HD (UHD) videos (4K/8K) captured in extreme low-light environments exhibit elevated noise and diminished detail. Developing DNN-based solutions for UHD video enhancement faces challenges including paired dataset creation, temporal consistency, and efficient deployment under strict latency ($<$33 ms) and power ($<$250 mA for 30 fps video) constraints. We present a \textit{comprehensive methodology} for developing a real-time raw-to-raw denoising solution for UHD video in extreme low-light, designed for seamless integration into existing ISP pipelines. Unlike ISP-replacement approaches, our solution enhances commercial camera stacks across sensor platforms. Our framework comprises: (1) a diverse dataset creation methodology; (2) a low-complexity model architecture optimized for mobile compute elements; (3) efficient training and post-training optimizations (reparameterization, restructuring, quantization) to meet latency constraints while ensuring high-quality output. The result is a power-efficient real-time raw-to-raw video denoiser that improves extreme low-light video quality while preserving downstream ISP behavior.
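The abstract names reparameterization among the post-training optimizations but does not specify its form. As a hedged illustration only (the paper's actual scheme may differ), the sketch below shows one common variant, RepVGG-style branch fusion, in which a 3$\times$3 convolution, a 1$\times$1 convolution, and an identity shortcut used during training are algebraically merged into a single 3$\times$3 kernel for inference, cutting latency without changing the output. All function and variable names here are illustrative, not from the paper.

```python
import numpy as np

def fuse_branches(w3, w1):
    """Merge a 3x3 conv, a 1x1 conv, and an identity branch into one
    3x3 kernel (RepVGG-style reparameterization; biases omitted).
    w3: (out_ch, in_ch, 3, 3); w1: (out_ch, in_ch, 1, 1).
    The identity branch requires out_ch == in_ch."""
    # Embed the 1x1 kernel at the center of a zero 3x3 kernel.
    w1_pad = np.zeros_like(w3)
    w1_pad[:, :, 1, 1] = w1[:, :, 0, 0]
    # Express the identity shortcut as a centered 3x3 kernel.
    w_id = np.zeros_like(w3)
    for c in range(w3.shape[1]):
        w_id[c, c, 1, 1] = 1.0
    # Convolution is linear, so the branch sum collapses to one kernel.
    return w3 + w1_pad + w_id

def conv2d(x, w):
    """Naive stride-1, zero-pad-1 convolution for verification.
    x: (in_ch, H, W); w: (out_ch, in_ch, 3, 3)."""
    _, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((w.shape[0], H, W))
    for o in range(w.shape[0]):
        for i in range(H):
            for j in range(W):
                out[o, i, j] = np.sum(w[o] * xp[:, i:i + 3, j:j + 3])
    return out
```

After fusion, the inference graph runs a single convolution per block, which is the property that makes such optimizations attractive under the $<$33 ms latency budget the abstract cites.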