Learning Latent Proxies for Controllable Single-Image Relighting
Abstract
Single-image relighting is highly under-constrained: small illumination changes can produce large, nonlinear variations in shading, shadows, and specularities, while geometry and materials remain unobserved. Existing diffusion-based approaches either rely on intrinsic- or G-buffer–based pipelines that require dense and fragile supervision, or operate purely in latent space without physical grounding, making fine-grained control of lighting direction, intensity, and color unreliable. We observe that full intrinsic decomposition is unnecessary for accurate relighting. Instead, sparse but physically meaningful cues, indicating where illumination should change and how materials should respond, are sufficient to guide a diffusion model. Based on this insight, we introduce LightCtrl, which integrates minimal physical priors at two levels: a few-shot latent proxy encoder that extracts compact material–geometry cues from limited PBR supervision, and a lighting-aware mask that identifies illumination-sensitive regions and steers the denoiser toward shading-relevant pixels. To compensate for scarce PBR data, we refine the proxy branch with a DPO-based objective that aligns predicted cues with perceptually preferred relighting behavior. We further present ScaLight, a large-scale object-level dataset with systematically varied illumination and complete camera–light metadata, enabling physically consistent and controllable training. Across object- and scene-level benchmarks, our method achieves photometrically faithful relighting with accurate continuous control, surpassing prior diffusion- and intrinsic-based baselines, with gains of up to +2.4 dB PSNR and 35% lower RMSE under controlled lighting shifts.
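For reference, the preference-based refinement mentioned above can be read against the standard DPO objective; the sketch below uses illustrative notation (conditioning variable c for the input image and predicted proxy cues, preferred/dispreferred relightings y^+ and y^-, and a frozen reference model pi_ref) and is not necessarily the paper's exact formulation:

\[
\mathcal{L}_{\mathrm{DPO}}(\theta) = -\,\mathbb{E}_{(c,\,y^{+},\,y^{-})}\!\left[\log \sigma\!\left(\beta\left(\log\frac{\pi_{\theta}(y^{+}\mid c)}{\pi_{\mathrm{ref}}(y^{+}\mid c)} - \log\frac{\pi_{\theta}(y^{-}\mid c)}{\pi_{\mathrm{ref}}(y^{-}\mid c)}\right)\right)\right]
\]

Here \sigma is the logistic function and \beta controls how strongly the refined proxy branch is pulled toward perceptually preferred relighting behavior relative to the reference model.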