

Poster

Neural Fields Meet Explicit Geometric Representations for Inverse Rendering of Urban Scenes

Zian Wang · Tianchang Shen · Jun Gao · Shengyu Huang · Jacob Munkberg · Jon Hasselgren · Zan Gojcic · Wenzheng Chen · Sanja Fidler

West Building Exhibit Halls ABC 014

Abstract:

Reconstruction and intrinsic decomposition of scenes from captured imagery would enable many applications such as relighting and virtual object insertion. Recent NeRF-based methods achieve impressive fidelity of 3D reconstruction but bake the lighting and shadows into the radiance field, while mesh-based methods that facilitate intrinsic decomposition through differentiable rendering have not yet scaled to the complexity and size of outdoor scenes. We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth. Specifically, we use a neural field to model the primary rays, and an explicit mesh (reconstructed from the underlying neural field) to model the secondary rays that produce higher-order lighting effects such as cast shadows. By faithfully disentangling complex geometry and materials from lighting effects, our method enables photorealistic relighting with specular and shadow effects on several outdoor datasets. Moreover, it supports physics-based scene manipulations such as virtual object insertion with ray-traced shadow casting.
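The hybrid design described above (a neural field queried along primary rays, and an explicit mesh extracted from that field used for secondary shadow rays) can be illustrated with a minimal sketch. Everything below is a hypothetical toy, not the authors' implementation: `toy_field` stands in for the learned neural field, the single triangle stands in for the extracted mesh, and the sun radiance and Lambertian shading are drastic simplifications of the paper's HDR lighting and material model.

```python
import numpy as np


def toy_field(x):
    """Stand-in for the learned neural field: (density, albedo) at points x.
    Toy version: a soft unit sphere at the origin with constant gray albedo."""
    d = np.linalg.norm(x, axis=-1)
    sigma = 20.0 * np.clip(1.0 - d, 0.0, None)            # density rises toward the center
    albedo = np.full(x.shape[:-1] + (3,), 0.6)
    return sigma, albedo


def ray_hits_mesh(origin, direction, tris, eps=1e-8):
    """Moller-Trumbore test: does the (shadow) ray hit any triangle of the mesh?"""
    for v0, v1, v2 in tris:
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(direction, e2)
        det = np.dot(e1, p)
        if abs(det) < eps:
            continue
        inv = 1.0 / det
        s = origin - v0
        u = np.dot(s, p) * inv
        if u < 0.0 or u > 1.0:
            continue
        q = np.cross(s, e1)
        v = np.dot(direction, q) * inv
        if v < 0.0 or u + v > 1.0:
            continue
        if np.dot(e2, q) * inv > eps:
            return True
    return False


def render_primary_ray(origin, direction, sun_dir, occluder_tris, n_samples=128, far=6.0):
    """Primary ray: volume-render the field to get a surface point and albedo.
    Secondary ray: trace toward the light against the explicit mesh for visibility."""
    t = np.linspace(0.05, far, n_samples)
    pts = origin + t[:, None] * direction
    sigma, albedo = toy_field(pts)
    delta = np.diff(t, prepend=t[0])
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
    weights = trans * alpha                                # standard NeRF-style weights

    depth = (weights * t).sum()                            # expected termination depth
    surf_albedo = (weights[:, None] * albedo).sum(0)
    surface_pt = origin + depth * direction
    normal = surface_pt / (np.linalg.norm(surface_pt) + 1e-9)  # exact for the toy sphere

    # Secondary (shadow) ray cast against the explicit occluder mesh.
    visible = not ray_hits_mesh(surface_pt + 1e-3 * sun_dir, sun_dir, occluder_tris)

    sun_radiance = np.array([3.0, 2.9, 2.7])               # stand-in for the HDR sky/sun
    shading = max(np.dot(normal, sun_dir), 0.0)            # toy Lambertian term
    return surf_albedo * sun_radiance * shading * (1.0 if visible else 0.1)


if __name__ == "__main__":
    # One large triangle acting as the mesh extracted from the field; it blocks the sun.
    wall = [(np.array([2.0, -3.0, -3.0]), np.array([2.0, 3.0, -3.0]), np.array([2.0, 0.0, 5.0]))]
    cam_origin = np.array([0.0, 0.0, 3.0])
    cam_dir = np.array([0.0, 0.0, -1.0])
    sun_dir = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
    print(render_primary_ray(cam_origin, cam_dir, sun_dir, wall))  # shadowed result
```

The point of the split is efficiency: the neural field is only evaluated densely along the one primary ray per pixel, while the many secondary rays needed for shadows and other higher-order effects reduce to cheap ray-mesh intersection tests.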
