Workshop

ScanNet++ Novel View Synthesis and 3D Semantic Understanding Challenge

Angela Dai · Yueh-Cheng Liu · Chandan Yeshwanth · Ben Mildenhall · Peter Kontschieder · Matthias Nießner

Thu 12 Jun, 6:50 a.m. PDT

Keywords: 3D Scene Understanding

Recent advances in generative modeling and semantic understanding have spurred significant interest in the synthesis and understanding of 3D scenes. Application areas such as augmented and virtual reality, computational photography, interior design, and autonomous mobile robotics all require a deep understanding of 3D scene spaces. The ScanNet++ workshop offers the first benchmark challenge for novel view synthesis in large-scale 3D scenes, together with high-fidelity, large-vocabulary 3D semantic scene understanding, in a setting where very complete, high-fidelity ground-truth scene data is available. This is enabled by the new ScanNet++ dataset, which provides 1mm-resolution laser-scan geometry, high-quality DSLR image capture, and dense semantic annotations over more than 1,000 class categories. Existing view synthesis benchmarks rely on data captured along a single continuous trajectory, making it impossible to evaluate novel views that lie outside the original capture trajectory. In contrast, our novel view synthesis challenge uses test images intentionally captured outside the training image trajectory, enabling comprehensive evaluation of state-of-the-art methods in new, challenging scenarios.
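To make the evaluation setup concrete, the following is a minimal Python sketch of how rendered predictions might be scored against held-out test views with PSNR. It is not the official ScanNet++ benchmark code; the directory layout, matching filenames, and PNG format are assumptions for illustration only.

    # Illustrative only: scoring novel-view renderings against held-out
    # ground-truth images with PSNR. Not the official evaluation script.
    from pathlib import Path

    import numpy as np
    from PIL import Image


    def psnr(pred: np.ndarray, gt: np.ndarray) -> float:
        """Peak signal-to-noise ratio between two images scaled to [0, 1]."""
        mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")
        return 10.0 * np.log10(1.0 / mse)


    def evaluate_scene(pred_dir: Path, gt_dir: Path) -> float:
        """Average PSNR over all held-out test views of one scene."""
        scores = []
        for gt_path in sorted(gt_dir.glob("*.png")):
            pred_path = pred_dir / gt_path.name  # assumed matching filenames
            pred = np.asarray(Image.open(pred_path)) / 255.0
            gt = np.asarray(Image.open(gt_path)) / 255.0
            scores.append(psnr(pred, gt))
        return float(np.mean(scores))

Because the test views are deliberately captured off the training trajectory, a metric like this probes extrapolation rather than interpolation along the capture path; benchmarks of this kind typically also report perceptual metrics such as SSIM and LPIPS alongside PSNR.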
