L2DGS: Low-Light Dynamic Gaussian Splatting
Ashish Kumar ⋅ A. N. Rajagopalan
Abstract
Synthesizing novel spatiotemporal views of dynamic scenes is inherently challenging due to both object and camera motion, as well as the sparsity of observations. Recent advances in Neural Radiance Fields (NeRFs) and Gaussian Splatting (GS) have enabled 4D dynamic scene reconstruction, but predominantly from well-lit images or videos. Some works address the problem of reconstructing a well-lit scene from low-light inputs, but these are limited to static scenes. Moreover, prior methods primarily emphasize improving illumination while overlooking the underlying scene characteristics. Reconstructing well-lit dynamic scenes from inputs captured under low-light conditions is particularly challenging due to shadows, occlusions, and disocclusions caused by object motion, which make the problem highly ambiguous and ill-posed. We propose $L^{2}DGS$ (Low-Light Dynamic Gaussian Splatting), a self-supervised 4D GS framework for directly reconstructing well-lit dynamic scenes from low-light inputs. The proposed method decomposes each scene into two complementary components: illumination, which varies across both view and time, and reflectance, which remains invariant to these factors. To achieve this, we introduce several key innovations. First, the proposed Occlusion-Disocclusion Network (OCD-Net) models time-varying intensity across frames. Next, we propose Brightness Attenuation Features (BAFs), which, when complemented by the BAF Enhancement Network (BAFE-Net), enable a geometry- and photometry-aware transformation between well-lit and low-light scenes for self-supervision. Together, these components allow $L^{2}DGS$ to maximize signal strength and suppress the noise inherent in low-light inputs, leading to enhanced spatial fidelity and temporal consistency under challenging illumination conditions. Our method operates on standard sRGB inputs without requiring camera metadata (e.g., exposure settings), ensuring compatibility with consumer-grade imaging devices. We evaluate $L^{2}DGS$ on both simulated and real-world Low-Light Dynamic Video ($L^{2}DyV$) datasets, demonstrating superior qualitative and quantitative performance.
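As a rough formalization of the decomposition described above, in our own notation (the paper's exact symbols and formulation may differ): writing $I_t^v$ for the image observed at time $t$ from viewpoint $v$, a Retinex-style factorization reads
$$I_t^v(\mathbf{x}) \;=\; R(\mathbf{x}) \odot L_t^v(\mathbf{x}),$$
where the reflectance $R$ is invariant to view and time, the illumination $L_t^v$ absorbs all view- and time-dependent intensity variation (including the occlusion- and disocclusion-induced changes that OCD-Net models), and $\odot$ denotes element-wise multiplication.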