

Poster

Generative Densification: Learning to Densify Gaussians for High-Fidelity Generalizable 3D Reconstruction

Seungtae Nam · Xiangyu Sun · Gyeongjin Kang · Younggeun Lee · Seungjun Oh · Eunbyung Park


Abstract:

Generalized feed-forward Gaussian models have shown remarkable progress in sparse-view 3D reconstruction, leveraging prior knowledge learned from large multi-view datasets. However, these models often struggle to represent high-frequency details due to the limited number of generated Gaussians. While the densification strategy used in per-scene 3D Gaussian splatting (3D-GS) optimization can be extended and applied to feed-forward models, it may not be ideally suited for generalized settings. In this paper, we present Generative Densification, an efficient and generalizable densification strategy that selectively generates fine Gaussians for high-fidelity 3D reconstruction. Unlike the 3D-GS densification strategy, we densify the feature representations produced by the feed-forward models rather than the raw Gaussians, making use of the prior knowledge embedded in the features for enhanced generalization. Experimental results demonstrate the effectiveness of our approach, achieving state-of-the-art rendering quality in both object-level and scene-level reconstruction, with noticeable improvements in representing fine details.
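The abstract gives no implementation details, so the following is only a minimal, illustrative sketch of the core idea as stated: score the per-Gaussian features produced by a feed-forward model, select the top fraction, split each selected feature into child features, and decode the children into additional fine Gaussians. All names and hyperparameters here (FeatureDensifier, k_ratio, n_children, the 14-dimensional Gaussian parameterization) are hypothetical stand-ins, not taken from the paper.

```python
import torch
import torch.nn as nn


class FeatureDensifier(nn.Module):
    """Hypothetical sketch: densify per-Gaussian *features* (not raw
    Gaussians), then decode the new features into fine Gaussians."""

    def __init__(self, feat_dim: int, n_children: int = 2, gaussian_dim: int = 14):
        super().__init__()
        # Scores how much each coarse feature would benefit from densification.
        self.scorer = nn.Linear(feat_dim, 1)
        # Splits one coarse feature into `n_children` child features.
        self.splitter = nn.Linear(feat_dim, n_children * feat_dim)
        # Decodes a child feature into Gaussian parameters
        # (e.g., position offset, scale, rotation, opacity, color).
        self.decoder = nn.Linear(feat_dim, gaussian_dim)
        self.n_children = n_children

    def forward(self, feats: torch.Tensor, k_ratio: float = 0.25) -> torch.Tensor:
        # feats: (N, feat_dim) features from the feed-forward Gaussian model.
        n, d = feats.shape
        k = max(1, int(n * k_ratio))
        # Select the top-k features most in need of fine detail.
        scores = self.scorer(feats).squeeze(-1)           # (N,)
        top_idx = scores.topk(k).indices                  # (k,)
        selected = feats[top_idx]                         # (k, d)
        # Densify in feature space: each selected feature spawns children.
        children = self.splitter(selected).view(k * self.n_children, d)
        # Decode children into parameters of additional fine Gaussians,
        # to be rendered alongside the coarse ones.
        return self.decoder(children)                     # (k*n_children, gaussian_dim)


if __name__ == "__main__":
    densifier = FeatureDensifier(feat_dim=64)
    coarse_feats = torch.randn(1024, 64)   # stand-in for feed-forward model features
    fine = densifier(coarse_feats)
    print(fine.shape)                      # torch.Size([512, 14])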
