

Poster

Structure-Aware Sparse-View X-ray 3D Reconstruction

Yuanhao Cai · Jiahao Wang · Alan L. Yuille · Zongwei Zhou · Angtian Wang


Abstract: X-ray, known for its ability to reveal the internal structures of objects, is expected to provide richer information for 3D reconstruction than visible light. Yet, existing neural radiance fields (NeRF) algorithms overlook this important property of X-rays, limiting their ability to capture the structural contents of imaged objects. In this paper, we propose a framework, Structure-Aware X-ray Neural Radiodensity Fields (SAX-NeRF), for sparse-view X-ray 3D reconstruction. Firstly, we design a Line Segment-based Transformer (Lineformer) as the backbone of SAX-NeRF. Lineformer captures the internal structures of objects in 3D space by modeling the dependencies within each line segment of an X-ray. Secondly, we present a Masked Local-Global (MLG) ray sampling strategy to extract contextual and geometric information in 2D projection. In addition, we collect a larger-scale dataset, X3D, covering a wider range of X-ray applications. Experiments on X3D show that SAX-NeRF surpasses previous NeRF-based methods by $\bf 12.56~dB$ and $\bf 2.49~dB$ on novel view synthesis and CT reconstruction, respectively. Code, models, and data will be released.
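To make the Lineformer idea in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' released code) of self-attention restricted to line segments of points sampled along an X-ray: the points on each ray are partitioned into fixed-length segments, and attention is computed only within each segment. The class name, feature dimension, and segment length are illustrative assumptions.

```python
import torch
import torch.nn as nn


class LineSegmentAttention(nn.Module):
    """Self-attention restricted to fixed-length segments of points along a ray.

    Hypothetical sketch of the intra-segment dependency modeling described
    in the abstract; names and hyper-parameters are assumptions.
    """

    def __init__(self, dim: int = 64, segment_len: int = 32, num_heads: int = 4):
        super().__init__()
        self.segment_len = segment_len
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_rays, num_points, dim) features of points sampled along each ray.
        r, n, d = x.shape
        s = self.segment_len
        assert n % s == 0, "number of sampled points must be divisible by segment length"
        # Treat every segment as an independent attention window.
        seg = x.reshape(r * n // s, s, d)
        out, _ = self.attn(seg, seg, seg)  # dependencies modeled within each segment only
        return out.reshape(r, n, d)


if __name__ == "__main__":
    feats = torch.randn(8, 128, 64)             # 8 rays, 128 sample points each
    print(LineSegmentAttention()(feats).shape)  # torch.Size([8, 128, 64])
```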
