

Poster

Scene-agnostic Pose Regression for Visual Localization

Junwei Zheng · Ruiping Liu · Yufan Chen · Zhenfang Chen · Kailun Yang · Jiaming Zhang · Rainer Stiefelhagen


Abstract:

Absolute Pose Regression (APR) predicts 6D camera poses but cannot adapt to unknown environments without retraining, while Relative Pose Regression (RPR) generalizes better yet requires a large image retrieval database. To address this dilemma, we introduce a new task, Scene-agnostic Pose Regression (SPR), which achieves accurate pose regression in a flexible way while eliminating the need for retraining or databases. To benchmark SPR, we create a large-scale dataset, 360SPR, with over 200K photorealistic panoramas, 3.6M pinhole images, and camera poses in 270 scenes at 3 different sensor heights. Furthermore, an SPR-Mamba model is proposed to address SPR in a dual-branch manner. While the local branch focuses on the poses between consecutive frames, the global branch regresses the pose between the query and the origin frame. Extensive experiments and studies demonstrate the effectiveness of our SPR task, dataset, and methods. In unknown 360SPR scenes, our method outperforms APR (27.45m/47.01°) and RPR (11.92m/21.27°), reducing the error to 3.85m/3.97°. The dataset and code will be made publicly available.
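To make the dual-branch idea concrete, the sketch below shows one way such a regressor could be structured: a shared frame encoder feeds a local branch that predicts relative poses between consecutive frames and a global branch that predicts the query pose with respect to the origin frame. This is a minimal illustrative sketch, not the authors' SPR-Mamba: the CNN encoder, the GRU standing in for Mamba-style state-space blocks, and the 6-value translation/axis-angle pose parameterization are all assumptions.

```python
# Minimal sketch (not the authors' SPR-Mamba): a dual-branch pose regressor
# mirroring the local/global split described in the abstract.
# Assumptions: plain CNN encoder; a GRU stands in for Mamba blocks;
# each pose is 3D translation + axis-angle rotation (6 values).
import torch
import torch.nn as nn


class DualBranchPoseRegressor(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Shared frame encoder: image -> feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Local branch: relative pose between each pair of consecutive frames.
        self.local_head = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 6),
        )
        # Global branch: aggregate the whole sequence (GRU as a stand-in for
        # a state-space/Mamba block) and regress the query-to-origin pose.
        self.temporal = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.global_head = nn.Linear(feat_dim, 6)

    def forward(self, frames: torch.Tensor):
        # frames: (B, T, 3, H, W); frame 0 is the origin, frame T-1 the query.
        b, t, c, h, w = frames.shape
        feats = self.encoder(frames.view(b * t, c, h, w)).view(b, t, -1)

        # Local branch: one relative pose per consecutive pair (T-1 poses).
        pairs = torch.cat([feats[:, :-1], feats[:, 1:]], dim=-1)
        local_poses = self.local_head(pairs)          # (B, T-1, 6)

        # Global branch: query pose with respect to the origin frame.
        _, hidden = self.temporal(feats)
        global_pose = self.global_head(hidden[-1])    # (B, 6)
        return local_poses, global_pose


if __name__ == "__main__":
    model = DualBranchPoseRegressor()
    clip = torch.randn(2, 8, 3, 128, 128)             # 2 sequences of 8 frames
    local, global_pose = model(clip)
    print(local.shape, global_pose.shape)             # (2, 7, 6) (2, 6)
```

In this layout the local branch supervises short-range motion between adjacent frames, while the global branch directly outputs the quantity the SPR task evaluates (the query pose in the coordinate frame of the sequence origin); how the two predictions are combined or weighted at training time is left open here.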
