Poster
LITA-GS: Illumination-Agnostic Novel View Synthesis via Reference-Free 3D Gaussian Splatting and Physical Priors
Han Zhou · Wei Dong · Jun Chen
Directly employing 3D Gaussian Splatting (3DGS) on images captured under adverse illumination struggles to produce high-quality, normally-exposed representations because: (1) the limited Structure-from-Motion (SfM) points estimated in adverse illumination scenarios fail to capture sufficient scene detail; (2) without ground-truth references, the severe information loss, heavy noise, and color distortion pose substantial challenges for 3DGS to produce high-quality results; and (3) combining existing exposure-correction methods with 3DGS cannot achieve satisfactory performance, since their per-image enhancement leads to illumination inconsistency across views. To address these issues, we propose LITA-GS, a novel illumination-agnostic novel view synthesis method based on reference-free 3DGS and physical priors. First, we introduce an illumination-invariant physical prior extraction pipeline. Second, building on the extracted robust spatial structure prior, we develop a lighting-agnostic structure rendering strategy that facilitates the optimization of scene structure and object appearance. Moreover, a progressive denoising module is introduced to effectively suppress the noise within the light-invariant representation. We train LITA-GS with an unsupervised strategy, and extensive experiments demonstrate that it surpasses the state-of-the-art (SOTA) NeRF-based method by 1.7 dB in PSNR and 0.09 in SSIM while offering faster inference and shorter training time. The code will be released upon acceptance.
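The abstract does not specify how the illumination-invariant physical prior is computed. As a minimal sketch of one common choice, the snippet below uses max-channel normalization under a Retinex-style assumption (image = reflectance x scalar illumination), which cancels per-pixel exposure and yields a structure map that is stable across lighting conditions; the function name and epsilon are illustrative, not the authors' implementation.

```python
import numpy as np

def illumination_invariant_prior(img: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Extract a simple illumination-invariant structure map from an RGB image.

    Assumption (not from the paper): a Retinex-style model I = R * L with a
    per-pixel scalar illumination L. Dividing each pixel by its maximum
    channel cancels L and leaves a chromaticity/structure map that is
    approximately stable across exposures.

    img: float array in [0, 1] with shape (H, W, 3).
    """
    lum = img.max(axis=-1, keepdims=True)   # per-pixel illumination proxy
    return img / (lum + eps)                # cancel the shared scale factor

# A dark and a bright capture of the same scene yield (nearly) the same
# prior, so it can supervise structure without well-exposed references.
rng = np.random.default_rng(0)
reflectance = rng.random((4, 4, 3))
dark, bright = 0.1 * reflectance, 0.9 * reflectance
assert np.allclose(
    illumination_invariant_prior(dark),
    illumination_invariant_prior(bright),
    atol=1e-4,
)
```

In a pipeline like the one described, such a prior would be rendered alongside the 3DGS appearance so that geometry is optimized against a lighting-agnostic target, with the denoising stage handling the noise this normalization amplifies in dark regions.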