

Slice3D: Multi-Slice Occlusion-Revealing Single View 3D Reconstruction

Yizhi Wang · Wallace Lira · Wenqi Wang · Ali Mahdavi Amiri · Hao Zhang

Arch 4A-E Poster #23
[ Project Page ]
Thu 20 Jun 10:30 a.m. PDT — noon PDT


We introduce multi-slice reasoning, a new notion for single-view 3D reconstruction which challenges the prevailing belief that multi-view synthesis is the most natural conduit between single-view and 3D. Our key observation is that object slicing is more advantageous than altering views to reveal occluded structures: slicing can peel through any occluders without obstruction, and in the limit, i.e., with infinitely many slices, it is guaranteed to unveil all hidden object parts. We realize our idea by developing Slice3D, a novel method for single-view 3D reconstruction which first predicts multi-slice images from a single RGB image and then integrates the slices into a 3D model using a coordinate-based transformer network for signed distance prediction. The slice images can be either regressed or generated, both through a U-Net based network. For the former, we inject a learnable slice indicator code to designate each decoded image to a spatial slice location, while the slice generator is a denoising diffusion model operating on the entirety of slice images stacked along the input channels. We conduct extensive evaluation against state-of-the-art alternatives to demonstrate the superiority of our method, especially in recovering complex and severely occluded shape structures amid ambiguities. All Slice3D results were produced by networks trained on a single Nvidia A40 GPU, with an inference time of less than 20 seconds.
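The core intuition, that slices are strictly occlusion-revealing whereas a single view only captures a silhouette, can be illustrated with a toy voxel example. The sketch below is hypothetical and not the paper's code: it builds a solid block with a hidden internal cavity, shows that a single projection along the view axis cannot distinguish the cavity, and that axis-aligned slices trivially partition the volume so every voxel, occluded or not, lands in exactly one slice.

```python
import numpy as np

# Toy 16^3 occupancy grid: a solid block containing a hidden hollow core.
N = 16
occ = np.zeros((N, N, N), dtype=bool)
occ[2:14, 2:14, 2:14] = True      # outer solid block
occ[6:10, 6:10, 6:10] = False     # internal cavity, invisible from outside

# A single view along z yields only the silhouette (projection); the cavity
# leaves no trace because occupied voxels exist in front of and behind it.
silhouette = occ.any(axis=2)
solid = np.zeros_like(occ)
solid[2:14, 2:14, 2:14] = True    # same block without the cavity
print(np.array_equal(silhouette, solid.any(axis=2)))  # True: views coincide

# Slicing along z partitions the volume: the 4 slabs together cover every
# voxel, so the cavity is fully exposed in the interior slices.
slices = np.array_split(occ, 4, axis=2)
recovered = np.concatenate(slices, axis=2)
print(np.array_equal(recovered, occ))                 # True: nothing hidden
```

With finitely many slices the method interleaves slabs rather than enumerating voxels, but the same covering property motivates why slice prediction can recover structures that multi-view synthesis leaves occluded.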
