

Poster

Bundle Sampling: Revisiting Plenoptic Sampling Theory for Efficient Generalizable Neural Radiance Field

Li Fang · Hao Zhu · Longlong Chen · Fei Hu · Long Ye · Zhan Ma


Abstract: Recent advancements in generalizable novel view synthesis have achieved impressive quality through interpolation between nearby views. However, rendering high-resolution images remains computationally intensive due to the need for dense sampling of all rays. Observing the piecewise smooth nature of natural scenes, we find that sampling all rays is redundant for novel view synthesis. Inspired by plenoptic sampling theory, we propose a bundle sampling strategy. By grouping adjacent rays into a bundle and sampling them collectively, a shared representation is generated for decoding all rays within the bundle. For regions with high-frequency content, such as edges and depth discontinuities, more depth samples are allocated to capture finer details. To further optimize efficiency, we introduce a depth-guided adaptive sampling strategy, which dynamically allocates samples based on depth confidence, concentrating more samples in complex regions and reducing them in smoother areas. This dual approach significantly accelerates rendering. Applied to ENeRF, our method achieves up to a 1.27 dB PSNR improvement and a 47% increase in FPS on the DTU dataset. Extensive experiments on synthetic and real-world datasets demonstrate state-of-the-art rendering quality and up to 2× faster rendering compared to existing generalizable methods. Code and trained models will be released upon acceptance.
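
The abstract describes rays being grouped into bundles that share one set of depth samples and one representation, which is then decoded per ray. Below is a minimal PyTorch sketch of that idea under stated assumptions: the toy MLPs, the 2×2 bundle layout, the offset-conditioned decoder, and all tensor shapes are illustrative stand-ins, not the authors' released implementation, and the depth-guided adaptive allocation is only indicated in a comment.

```python
# Hypothetical sketch of the bundle-sampling idea from the abstract.
# Module names, shapes, and the toy MLPs are assumptions for illustration only.
import torch
import torch.nn as nn


class BundleRenderer(nn.Module):
    """Render an H x W image by sampling one shared set of depth points per
    ray bundle (a patch of adjacent rays) instead of sampling every ray."""

    def __init__(self, bundle=2, feat_dim=32):
        super().__init__()
        self.bundle = bundle
        # Stand-in for the network producing a shared feature per (bundle, depth sample).
        self.shared_net = nn.Sequential(
            nn.Linear(3, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim)
        )
        # Per-ray decoder: shared feature + intra-bundle ray offset -> (rgb, density).
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim + 2, feat_dim), nn.ReLU(), nn.Linear(feat_dim, 4)
        )

    def forward(self, rays_o, rays_d, n_samples, near=2.0, far=6.0):
        # rays_o, rays_d: (H, W, 3). Group the image into b x b bundles.
        # A depth-guided adaptive variant would pick n_samples per bundle from a
        # depth-confidence map (more samples near edges/discontinuities).
        b = self.bundle
        H, W, _ = rays_d.shape
        d = rays_d.reshape(H // b, b, W // b, b, 3).permute(0, 2, 1, 3, 4)
        o = rays_o.reshape(H // b, b, W // b, b, 3).permute(0, 2, 1, 3, 4)
        nb = (H // b) * (W // b)
        d = d.reshape(nb, b * b, 3)                       # (bundles, rays per bundle, 3)
        o = o.reshape(nb, b * b, 3)

        # One shared depth sampling per bundle, along the bundle's mean ray.
        t = torch.linspace(near, far, n_samples)          # (S,)
        center_d = d.mean(dim=1, keepdim=True)            # (bundles, 1, 3)
        center_o = o.mean(dim=1, keepdim=True)
        pts = center_o[:, :, None] + t[None, None, :, None] * center_d[:, :, None]
        feat = self.shared_net(pts.squeeze(1))            # (bundles, S, F)

        # Decode every ray in the bundle from the same shared features,
        # conditioned on its 2D offset inside the bundle.
        yy, xx = torch.meshgrid(torch.arange(b), torch.arange(b), indexing="ij")
        offs = torch.stack([yy, xx], -1).float().reshape(1, b * b, 1, 2)
        offs = offs.expand(nb, -1, n_samples, -1)
        feat = feat[:, None].expand(-1, b * b, -1, -1)
        rgb_sigma = self.decoder(torch.cat([feat, offs], dim=-1))
        rgb, sigma = rgb_sigma[..., :3].sigmoid(), rgb_sigma[..., 3].relu()

        # Standard volume rendering along the shared samples.
        delta = (far - near) / n_samples
        alpha = 1.0 - torch.exp(-sigma * delta)
        trans = torch.cumprod(torch.cat(
            [torch.ones_like(alpha[..., :1]), 1.0 - alpha + 1e-10], -1), -1)[..., :-1]
        weights = alpha * trans                           # (bundles, rays, S)
        color = (weights[..., None] * rgb).sum(dim=-2)    # (bundles, rays, 3)
        return (color.reshape(H // b, W // b, b, b, 3)
                     .permute(0, 2, 1, 3, 4).reshape(H, W, 3))


# Toy usage: render an 8x8 image with 2x2 bundles and 16 shared depth samples,
# so the feature network runs on 16 bundles rather than 64 individual rays.
H = W = 8
renderer = BundleRenderer(bundle=2)
rays_o = torch.zeros(H, W, 3)
rays_d = nn.functional.normalize(torch.randn(H, W, 3), dim=-1)
img = renderer(rays_o, rays_d, n_samples=16)              # (8, 8, 3)
```

The efficiency gain in this sketch comes from amortizing the sampling and feature computation over the b*b rays of each bundle; only the lightweight decoder runs per ray.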
