

Poster

Generative Sparse-View Gaussian Splatting

Hanyang Kong · Xingyi Yang · Xinchao Wang


Abstract:

Novel view synthesis from limited observations remains a significant challenge: under-sampled regions lack information, which often produces noticeable artifacts. We introduce Generative Sparse-view Gaussian Splatting (GS-GS), a general pipeline designed to enhance the rendering quality of 3D/4D Gaussian Splatting (GS) when training views are sparse. Our method generates unseen views with generative models, specifically leveraging pre-trained image diffusion models to iteratively refine view consistency and hallucinate additional images at pseudo views. Unlike purely generative methods, which often fail to maintain view consistency, our approach explicitly enforces semantic correspondences during the generation of unseen views, enhancing geometric consistency and thereby improving 3D/4D scene reconstruction. Extensive evaluations on various 3D/4D datasets, including Blender, LLFF, Mip-NeRF360, and Neural 3D Video, demonstrate that GS-GS outperforms existing state-of-the-art methods in rendering quality without sacrificing efficiency.
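The abstract describes an iterative refine-and-retrain loop: fit Gaussians to the sparse views, render pseudo views, refine those renders with a pre-trained diffusion model, and fold the refined images back into training. The sketch below is a minimal, hypothetical rendering of that loop, not the authors' implementation; every helper (train_gaussians, render_view, diffusion_refine, sample_pseudo_poses) is a stand-in stub so the control flow can be read and run in isolation.

```python
"""Hypothetical sketch of an iterative sparse-view GS pipeline in the spirit
of the abstract. All helpers are dummy stand-ins, not the paper's API."""


def train_gaussians(images, poses):
    """Fit a 3D Gaussian Splatting model to posed images (stand-in)."""
    return {"images": list(images), "poses": list(poses)}


def render_view(model, pose):
    """Render the current model from a camera pose (stand-in frame)."""
    return f"render@{pose}"


def diffusion_refine(frame):
    """Refine a possibly artifact-laden render with a pre-trained image
    diffusion model, conditioned to keep semantic correspondence with the
    real training views (stand-in: identity)."""
    return frame


def sample_pseudo_poses(poses, n=4):
    """Choose pseudo-view poses between the sparse training cameras
    (stand-in: midpoints of consecutive poses)."""
    return [(a + b) / 2 for a, b in zip(poses, poses[1:])][:n]


def gsgs_loop(images, poses, rounds=3):
    """Alternate between hallucinating pseudo views and retraining."""
    model = train_gaussians(images, poses)
    for _ in range(rounds):
        pseudo = sample_pseudo_poses(poses)
        hallucinated = [diffusion_refine(render_view(model, p)) for p in pseudo]
        # Augment the training set with the refined pseudo views and retrain.
        images, poses = images + hallucinated, poses + pseudo
        model = train_gaussians(images, poses)
    return model


if __name__ == "__main__":
    # Two sparse training views stand in for a real capture.
    gsgs_loop(images=["img0", "img1"], poses=[0.0, 1.0])
```

The key design point the abstract emphasizes is the refinement step: because the diffusion model is applied to renders of the current reconstruction rather than sampled unconditionally, each round can enforce consistency with the observed views instead of hallucinating freely.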
