

Poster

EdgeDiff: Edge-aware Diffusion Network for Building Reconstruction from Point Clouds

Yujun Liu · Ruisheng Wang · Shangfeng Huang · GuoRong Cai


Abstract:

Building reconstruction is a challenging problem at the intersection of computer vision, photogrammetry, and computer graphics. The 3D wireframe is a compelling representation for building modeling owing to its compact structure. Existing wireframe reconstruction methods based on vertex detection and edge regression have achieved promising results. In this paper, we develop an Edge-aware Diffusion network, dubbed EdgeDiff. As a novel paradigm for wireframe reconstruction, EdgeDiff generates wireframe models from noise using a conditional diffusion model. During training, the ground-truth wireframes are first formulated as a set of parameterized edges and then diffused into a random noise distribution; EdgeDiff learns to reverse this noising process and thereby recover the wireframe structure. During inference, EdgeDiff iteratively refines the generated edge distribution using the denoising diffusion implicit model (DDIM), enabling flexible single- or multi-step denoising and dynamic adaptation to buildings of varying complexity. Additionally, given the unique structure of wireframes, we introduce an edge attention module that extracts point-wise attention from point features and uses it as auxiliary information to facilitate the learning of edge cues, guiding the network toward improved edge awareness. To the best of our knowledge, EdgeDiff is the first method to apply a diffusion model to building wireframe reconstruction. Extensive experiments on the real-world Building3D dataset demonstrate that our approach achieves state-of-the-art performance.
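To make the training formulation concrete, below is a minimal sketch of the forward (noising) process the abstract describes, assuming each wireframe edge is parameterized as a 6-D vector (two 3-D endpoints) and padded to a fixed number of edges per building. The linear beta schedule, the step count, and all names are illustrative assumptions, not details from the paper.

```python
import torch

T = 1000                                   # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)      # linear noise schedule (assumed)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def q_sample(edges, t, noise=None):
    """Diffuse ground-truth edges to step t: x_t = sqrt(a_bar)*x_0 + sqrt(1-a_bar)*eps."""
    if noise is None:
        noise = torch.randn_like(edges)
    a_bar = alphas_cumprod[t].view(-1, 1, 1)           # (B, 1, 1) for broadcasting
    return a_bar.sqrt() * edges + (1.0 - a_bar).sqrt() * noise

# Example: a batch of 2 buildings, each padded to 64 edges of 6 parameters.
x0 = torch.randn(2, 64, 6)                 # stand-in for normalized GT edges
t = torch.randint(0, T, (2,))
x_t = q_sample(x0, t)                      # noisy edges fed to the denoiser
```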
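The flexible single- or multi-step denoising at inference follows from DDIM's deterministic update, sketched below: starting from pure noise, the edge set is refined over a freely chosen number of steps. The denoiser's signature (predicting clean edges from noisy edges, a timestep, and point-cloud conditioning features) and the schedule are assumptions for illustration, not the paper's actual interface.

```python
import torch

T = 1000
alphas_cumprod = torch.cumprod(1.0 - torch.linspace(1e-4, 0.02, T), dim=0)

@torch.no_grad()
def ddim_sample(denoiser, cond_feats, shape, num_steps=4):
    times = torch.linspace(T - 1, 0, num_steps + 1).long()  # coarse time grid
    x = torch.randn(shape)                                  # start from noise
    for i in range(num_steps):
        t, t_next = times[i], times[i + 1]
        a_bar, a_next = alphas_cumprod[t], alphas_cumprod[t_next]
        x0_pred = denoiser(x, t.expand(shape[0]), cond_feats)  # predict clean edges
        eps = (x - a_bar.sqrt() * x0_pred) / (1.0 - a_bar).sqrt()
        # Deterministic DDIM update (eta = 0); fewer steps for simpler buildings.
        x = a_next.sqrt() * x0_pred + (1.0 - a_next).sqrt() * eps
    return x

# Usage with a stand-in denoiser; a real model would condition on point features.
dummy = lambda x, t, c: torch.zeros_like(x)
edges = ddim_sample(dummy, None, (2, 64, 6), num_steps=8)
```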
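One plausible reading of the edge attention module is cross-attention in which per-point attention weights, computed from point features, pool auxiliary edge-aware context for each edge query. The layer sizes and wiring below are guesses, not the paper's architecture.

```python
import torch
import torch.nn as nn

class EdgeAttention(nn.Module):
    """Illustrative edge attention: point-wise attention pooled into edge queries."""

    def __init__(self, point_dim=128, edge_dim=128):
        super().__init__()
        self.score = nn.Linear(point_dim, edge_dim)   # point-wise attention logits
        self.value = nn.Linear(point_dim, edge_dim)

    def forward(self, point_feats, edge_queries):
        # point_feats: (B, N, point_dim); edge_queries: (B, E, edge_dim)
        logits = torch.einsum('bnd,bed->ben', self.score(point_feats), edge_queries)
        attn = logits.softmax(dim=-1)                 # attention over the N points
        ctx = torch.einsum('ben,bnd->bed', attn, self.value(point_feats))
        return edge_queries + ctx                     # edge-aware queries

feats = torch.randn(2, 2048, 128)   # point features from a backbone
queries = torch.randn(2, 64, 128)   # one query per candidate edge
out = EdgeAttention()(feats, queries)
```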
