

ConsistNet: Enforcing 3D Consistency for Multi-view Images Diffusion

Jiayu Yang · Ziang Cheng · Yunfei Duan · Pan Ji · Hongdong Li

Arch 4A-E Poster #222
Wed 19 Jun 5 p.m. PDT — 6:30 p.m. PDT


Given a single image of a 3D object, this paper proposes a novel method (named ConsistNet) that generates multiple images of the same object, as if they were captured from different viewpoints, while effectively enforcing 3D (multi-view) consistency among the generated images. Central to our method is a lightweight multi-view consistency block that enables information exchange across multiple single-view diffusion processes based on the underlying multi-view geometry principles. ConsistNet is an extension of the standard latent diffusion model and consists of two sub-modules: (a) a view aggregation module that unprojects multi-view features into global 3D volumes and infers consistency, and (b) a ray aggregation module that samples and aggregates 3D-consistent features back into each view to enforce consistency. Our approach departs from previous methods in multi-view image generation in that it can be easily dropped into pre-trained LDMs without requiring explicit pixel correspondences or depth prediction. Experiments show that our method effectively learns 3D consistency over a frozen Zero123-XL backbone and can generate 16 surrounding views of an object within 11 seconds on a single A100 GPU.
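The view-aggregation/ray-aggregation idea described above can be caricatured in a few lines of NumPy. This is only an illustrative toy sketch, not the authors' implementation: the pinhole-style projection matrices, nearest-neighbour pixel gather, and mean fusion are all simplifying assumptions standing in for the paper's learned aggregation.

```python
import numpy as np

def unproject_to_volume(feats, proj, grid):
    """Gather per-view 2D features at each voxel's projected pixel.

    feats: (V, H, W, C) per-view feature maps
    proj:  (V, 3, 4) toy camera projection matrices (an assumption)
    grid:  (N, 3) voxel center coordinates
    Returns (V, N, C): each voxel's feature as seen by each view.
    """
    V, H, W, C = feats.shape
    N = grid.shape[0]
    homog = np.concatenate([grid, np.ones((N, 1))], axis=1)  # (N, 4)
    out = np.zeros((V, N, C))
    for v in range(V):
        uvw = homog @ proj[v].T                 # project voxels into view v
        uv = uvw[:, :2] / uvw[:, 2:3]           # perspective divide
        u = np.clip(np.round(uv[:, 0]).astype(int), 0, W - 1)
        r = np.clip(np.round(uv[:, 1]).astype(int), 0, H - 1)
        out[v] = feats[v, r, u]                 # nearest-neighbour gather
    return out

def fuse_views(view_vox):
    """Fuse per-view voxel features into one shared volume.

    Mean pooling stands in for the paper's learned consistency
    inference; in this toy setting, handing the fused per-voxel
    feature back to each view plays the role of ray aggregation.
    """
    return view_vox.mean(axis=0)                # (N, C)
```

A minimal usage: with two views that agree everywhere (constant feature maps), the fused volume reproduces that shared feature, i.e. the views are already "3D consistent" under this toy fusion.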
