

Revisiting Global Translation Estimation with Feature Tracks

Peilin Tao · Hainan Cui · Mengqi Rong · Shuhan Shen

Arch 4A-E Poster #110
Fri 21 Jun 10:30 a.m. PDT — noon PDT


Global translation estimation is a highly challenging step in the global structure from motion (SfM) pipeline. Many existing methods depend solely on relative translations, leading to inaccuracies in low-parallax scenes and degradation under collinear camera motion. While recent approaches aim to address these issues by incorporating feature tracks into their objective functions, they are often sensitive to outliers. In this paper, we first revisit global translation estimation methods with feature tracks and categorize them into explicit and implicit methods. Then, we highlight the superiority of the objective function based on the cross-product distance metric and propose a novel explicit global translation estimation framework that integrates both relative translations and feature tracks as input. To enhance the accuracy of input observations, we re-estimate relative translations with the coplanarity constraint of the epipolar plane and propose a simple yet effective strategy to select reliable feature tracks. Finally, the effectiveness of our approach is demonstrated through experiments on urban image sequences and unordered Internet images, showcasing its superior accuracy and robustness compared to many state-of-the-art techniques.
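The cross-product distance mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; the variable names (`center`, `bearing`, `point`) are illustrative. The idea: for a camera center c and a unit bearing vector v toward a track point X, the perpendicular distance from X to the viewing ray is ||v × (X − c)||, which is linear in the unknown camera centers and point positions and so is convenient as a residual in translation averaging.

```python
import numpy as np

def cross_product_distance(point, center, bearing):
    """Perpendicular distance from a 3D point to the ray
    through `center` along the unit direction `bearing`:
    d = || bearing x (point - center) ||."""
    return np.linalg.norm(np.cross(bearing, point - center))

# Example: a camera at the origin looking down +Z.
c = np.array([0.0, 0.0, 0.0])
v = np.array([0.0, 0.0, 1.0])

# A point directly on the ray has zero distance;
# a point offset 1 unit sideways has distance 1.
on_ray = cross_product_distance(np.array([0.0, 0.0, 5.0]), c, v)
off_ray = cross_product_distance(np.array([1.0, 0.0, 5.0]), c, v)
```

Summing such residuals over all (camera, track-point) observations yields an objective over camera centers and track points; robustness to outliers then depends on how observations are weighted and which tracks are selected.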
