Poster

SynthLight: Portrait Relighting with Diffusion Model by Learning to Re-render Synthetic Faces

Sumit Chaturvedi · Mengwei Ren · Yannick Hold-Geoffroy · Jingyuan Liu · Julie Dorsey · Zhixin Shu


Abstract:

We introduce SynthLight, a diffusion model for portrait relighting. Our approach frames image relighting as a re-rendering problem, where pixels must be transformed in response to changes in environmental lighting conditions. We synthesize a dataset that simulates this lighting-conditioned transformation by rendering 3D head assets under varying lighting with a physically-based rendering engine. We propose two training and inference strategies to bridge the gap between the synthetic and real image domains: (1) multi-task training that takes advantage of real human portraits without lighting labels; (2) an inference-time diffusion sampling procedure based on classifier-free guidance that leverages the input portrait to better preserve details. Our method generalizes to diverse real photographs and produces realistic illumination effects, such as specular highlights and cast shadows, while preserving subject identity. Our quantitative experiments on Light Stage testing data demonstrate results comparable to state-of-the-art relighting methods trained with Light Stage data. Our qualitative results on in-the-wild images show high-quality illumination effects for portraits that have not previously been achieved with traditional supervision.
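To illustrate the classifier-free-guidance-style sampling described in the abstract, here is a minimal sketch in which the guidance contrasts a branch conditioned on both the input portrait and the target lighting against an "anchor" branch that keeps the portrait but drops the lighting, steering the output toward the new illumination while staying anchored to the subject. The model signature, the `null_lighting` placeholder, and the guidance weight `w` are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def guided_eps(model, x_t, t, portrait, lighting, null_lighting, w=2.0):
    """Classifier-free-guidance-style noise prediction (illustrative sketch).

    The "anchor" branch keeps the input portrait but drops the lighting
    condition, so guidance pushes toward the target lighting while the
    portrait anchor helps preserve subject details. The denoiser interface
    assumed here is hypothetical, not the paper's actual API.
    """
    # Conditional branch: input portrait + target lighting.
    eps_cond = model(x_t, t, portrait, lighting)
    # Anchor branch: portrait only, lighting condition nulled out.
    eps_anchor = model(x_t, t, portrait, null_lighting)
    # Standard CFG combination with guidance weight w.
    return eps_anchor + w * (eps_cond - eps_anchor)

# Toy usage with a stand-in denoiser; real use would call the trained UNet
# inside a diffusion sampling loop at each timestep t.
if __name__ == "__main__":
    dummy = lambda x, t, p, l: x + p + l  # placeholder denoiser
    x_t = torch.randn(1, 3, 64, 64)
    portrait = torch.randn(1, 3, 64, 64)
    lighting = torch.randn(1, 3, 64, 64)
    null_lighting = torch.zeros_like(lighting)
    eps = guided_eps(dummy, x_t, 0, portrait, lighting, null_lighting)
    print(eps.shape)  # torch.Size([1, 3, 64, 64])
```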
