

Poster

NVComposer: Boosting Generative Novel View Synthesis with Multiple Sparse and Unposed Images

Lingen Li · Zhaoyang Zhang · Yaowei Li · Jiale Xu · Wenbo Hu · Xiaoyu Li · Weihao Cheng · Jinwei Gu · Tianfan Xue · Ying Shan


Abstract:

Recent advancements in generative models have significantly improved novel view synthesis (NVS) from multi-view data. However, existing methods still depend on external multi-view alignment processes, such as explicit pose estimation or pre-reconstruction, a dependence that limits their flexibility and accessibility, especially when alignment is unstable due to insufficient overlap or occlusions between views.

In this paper, we propose NVComposer, a novel approach that eliminates the need for explicit external alignment. NVComposer enables the generative model to implicitly infer spatial and geometric relationships between multiple conditional views by introducing two key components: 1) an image-pose dual-stream diffusion model that simultaneously generates target novel views and condition camera poses, and 2) a geometry-aware feature alignment module that distills geometric priors from pretrained dense stereo models during training.

Extensive experiments demonstrate that NVComposer achieves state-of-the-art performance in generative multi-view NVS tasks, removing the reliance on external alignment and thus improving model accessibility. Our approach shows substantial improvements in synthesis quality as the number of unposed input views increases, highlighting its potential for more flexible and accessible generative NVS systems.
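To make the second component more concrete, the sketch below illustrates one plausible form of a geometry-aware feature alignment module: intermediate features from the diffusion denoiser are projected and regressed toward features from a frozen pretrained dense stereo model during training. This is a minimal PyTorch sketch under our own assumptions; the class name, feature dimensions, and the cosine-style alignment loss are hypothetical and are not taken from the paper's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GeometryAwareFeatureAlignment(nn.Module):
    """Illustrative distillation head (assumed design): projects intermediate
    diffusion features and aligns them with frozen dense-stereo features."""

    def __init__(self, diffusion_dim: int, stereo_dim: int):
        super().__init__()
        # Hypothetical linear projection from the diffusion feature space
        # into the dense-stereo (teacher) feature space.
        self.proj = nn.Linear(diffusion_dim, stereo_dim)

    def forward(self, diffusion_feats: torch.Tensor,
                stereo_feats: torch.Tensor) -> torch.Tensor:
        # diffusion_feats: (B, N, diffusion_dim) tokens from the denoiser
        # stereo_feats:    (B, N, stereo_dim) features from a frozen,
        #                  pretrained dense stereo model (teacher)
        pred = self.proj(diffusion_feats)
        # Cosine-style alignment loss; the actual training objective may differ.
        return 1.0 - F.cosine_similarity(pred, stereo_feats, dim=-1).mean()


# Usage sketch: the alignment loss would be added to the diffusion loss
# during training and dropped at inference time.
if __name__ == "__main__":
    align = GeometryAwareFeatureAlignment(diffusion_dim=1024, stereo_dim=768)
    diff_feats = torch.randn(2, 256, 1024)   # placeholder denoiser features
    stereo_feats = torch.randn(2, 256, 768)  # placeholder teacher features
    loss_align = align(diff_feats, stereo_feats)
    print(float(loss_align))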
