LNEM: Lunar Neural Elevation Model
Abstract
High-resolution and high-precision digital elevation models (DEMs) of the lunar surface are essential for landing site selection and lunar geological research. However, traditional stereo matching techniques provide only a limited representation of 3D scenes, struggling with textureless regions and extreme lighting variations. Furthermore, recent lunar neural rendering methods are ill-suited for 3D reconstruction due to their reliance on simple pinhole approximations of pushbroom sensors. These challenges are compounded by inconsistencies introduced during satellite image processing, including geometric misalignment, distributional bias, and labor-intensive handcrafted operations. To address these issues, we introduce the Lunar Neural Elevation Model (LNEM), a volumetric reconstruction method that explicitly incorporates the pushbroom imaging model. A core component of our approach is the Lunar Studio dataset, processed using Rigorous Sensor Models (RSMs) to ensure geometric consistency across multi-orbit Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) and Korea Pathfinder Lunar Orbiter (KPLO) Lunar Terrain Imager (LUTI) images. LNEM integrates this RSM-based pushbroom camera formulation with learned shadow modeling, enabling physically grounded volumetric rendering under challenging lunar illumination. Extensive experiments demonstrate that LNEM achieves geometrically consistent reconstruction and cross-sensor generalization under diverse viewing and lighting conditions, providing a scalable and physically informed alternative to conventional DEM pipelines. To facilitate reproducibility and future lunar research, we release the multi-orbit Lunar Studio dataset and the LNEM reconstruction pipeline.