ManifoldNeuS: Manifold-aware View Optimizability for Pose-Free Neural Surface Reconstruction
Abstract
Jointly optimizing camera poses and object geometry from unposed images is a challenging task in neural surface reconstruction. Existing methods often suffer from pose drift and geometric distortion, which stem from an easy-view bias: uniform view optimization favors easy-to-optimize views with abundant texture and good overlap, which dominate gradient updates, while hard-to-optimize views with weak texture or limited overlap, though critical for geometric completeness, are progressively marginalized. To address this, we propose ManifoldNeuS, a novel framework that explicitly models and leverages per-view optimizability to guide pose-free neural surface reconstruction. Specifically, we introduce the manifold-aware view optimizability score (MaVOS), which jointly assesses immediate fitness (how easily a view can be optimized now) and long-term coverage gain (how much a view contributes to geometric coverage) over the view-coherent manifold. Building on MaVOS, we further devise a reconstruction pipeline that uses per-view optimizability as a state control signal to steer the joint optimization through three key components: dynamic view scheduling, gated positional encoding, and anti-score loss weighting. Experiments on benchmark datasets demonstrate that ManifoldNeuS outperforms existing methods in both pose estimation accuracy and reconstruction quality, achieving robust joint optimization without known camera poses.
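To make the score-driven control concrete, the following is a minimal NumPy sketch of how a per-view optimizability score could drive two of the three components named above: dynamic view scheduling and anti-score loss weighting. Everything here is an illustrative assumption, not the paper's implementation: the function names, the min-max normalization, the blending weight alpha, the softmax sampler with temperature, and the inverse-score weighting are all hypothetical, and the manifold-aware score computation and gated positional encoding are omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(seed=0)


def mavos_score(fitness, coverage_gain, alpha=0.5):
    """Blend immediate fitness with long-term coverage gain.

    `alpha` and the min-max normalization are illustrative choices;
    the paper computes this score over a view-coherent manifold,
    which this sketch does not model.
    """
    f = (fitness - fitness.min()) / (np.ptp(fitness) + 1e-8)
    g = (coverage_gain - coverage_gain.min()) / (np.ptp(coverage_gain) + 1e-8)
    return alpha * f + (1.0 - alpha) * g


def schedule_views(scores, batch_size, temperature=0.5):
    """Dynamic view scheduling (hypothetical): sample a batch of view
    indices with probability softmax(score / temperature), so easier
    views lead early while harder views are never starved of updates."""
    logits = scores / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(scores), size=batch_size, replace=False, p=probs)


def anti_score_weights(scores, eps=1e-3):
    """Anti-score loss weighting (hypothetical): weight each view's loss
    inversely to its score, up-weighting hard views so they are not
    marginalized by the easy-view bias."""
    w = 1.0 / (scores + eps)
    return w / w.mean()  # normalize so the average weight stays at 1


# Toy run with 8 views and random per-view statistics.
fitness = rng.random(8)   # e.g. per-view photometric fit (assumed proxy)
coverage = rng.random(8)  # e.g. per-view novel-surface coverage (assumed proxy)
scores = mavos_score(fitness, coverage)
batch = schedule_views(scores, batch_size=4)
weights = anti_score_weights(scores)
print("scheduled views:", batch)
print("loss weights:", weights.round(2))
```

The design intuition under these assumptions is that scheduling and weighting pull in opposite directions on purpose: sampling follows the score so optimization stays stable, while the loss weighting runs against it so gradient magnitude is rebalanced toward the hard views that the abstract identifies as critical for geometric completeness.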