
MVD-Fusion: Single-view 3D via Depth-consistent Multi-view Generation

Hanzhe Hu · Zhizhuo Zhou · Varun Jampani · Shubham Tulsiani

Arch 4A-E Poster #6
Thu 20 Jun 10:30 a.m. PDT — noon PDT


We present MVD-Fusion: a method for single-view 3D inference via generative modeling of multi-view-consistent RGB-D images. While recent methods pursuing 3D inference advocate learning novel-view generative models, these generations are not 3D-consistent and require a distillation process to produce a 3D output. We instead cast the task of 3D inference as directly generating mutually consistent multiple views, and we build on the insight that additionally inferring depth can provide a mechanism for enforcing this consistency. Specifically, we train a denoising diffusion model to generate multi-view RGB-D images given a single RGB input image, and leverage the (intermediate noisy) depth estimates to obtain reprojection-based conditioning that enables multi-view consistency. We train our system on a large-scale synthetic dataset and evaluate our approach on both synthetic and real-world data. We demonstrate that our approach yields more accurate synthesis than recent state-of-the-art methods, including distillation-based 3D inference and prior multi-view generation methods. We also evaluate the geometry induced by our multi-view depth prediction and find that it yields a more accurate representation than other direct 3D inference approaches.
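The reprojection-based conditioning described above rests on standard multi-view geometry: pixels of one view are unprojected with their (possibly noisy) depth estimates and projected into another view, so that features can be gathered across views at geometrically corresponding locations. The following numpy sketch illustrates only that geometric step; the function name, the pinhole-camera assumptions, and all shapes are illustrative and not taken from the paper's implementation.

```python
import numpy as np

def reproject_depth(depth, K, R, t):
    """Unproject every pixel of a source view using its depth map, then
    project the resulting 3D points into a target view related by the
    rotation R and translation t (both mapping source to target frame).
    K is the shared 3x3 pinhole intrinsics matrix.
    Returns target-view pixel coordinates of shape (H, W, 2)."""
    H, W = depth.shape
    # Build the pixel grid in homogeneous coordinates: [u, v, 1].
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    # Back-project: X = depth * K^{-1} [u, v, 1]^T (source camera frame).
    rays = pix @ np.linalg.inv(K).T
    pts = rays * depth[..., None]
    # Transform the points into the target camera frame and project.
    pts_tgt = pts @ R.T + t
    proj = pts_tgt @ K.T
    # Perspective divide; clip the depth to avoid division by zero.
    return proj[..., :2] / np.clip(proj[..., 2:3], 1e-6, None)
```

With an identity pose (R = I, t = 0) and unit depth, each pixel maps back onto itself, which is a quick sanity check that the unproject/project round trip is consistent. In the paper's setting, coordinates like these would index into feature maps of the other generated views, tying the denoising of each view to the geometry implied by the others.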
