

Learning to Produce Semi-dense Correspondences for Visual Localization

Khang Truong Giang · Soohwan Song · Sungho Jo

Arch 4A-E Poster #208
Fri 21 Jun 10:30 a.m. PDT — noon PDT
Oral presentation: Orals 5B 3D from multiview and sensors
Fri 21 Jun 9 a.m. PDT — 10:30 a.m. PDT


This study addresses the challenge of performing visual localization in demanding conditions such as night-time scenarios, adverse weather, and seasonal changes. While many prior studies have focused on improving image matching performance to facilitate reliable dense keypoint matching between images, existing methods rely heavily on feature points predefined on a reconstructed 3D model. Consequently, they tend to overlook unobserved keypoints during the matching process. As a result, dense keypoint matches are not fully exploited, leading to a notable reduction in accuracy, particularly in noisy scenes. To tackle this issue, we propose a novel localization method that extracts reliable semi-dense 2D-3D matching points from dense keypoint matches. This approach regresses semi-dense 2D keypoints into 3D scene coordinates using a point inference network. The network utilizes both geometric and visual cues to infer 3D coordinates for unobserved keypoints from the observed ones. The abundance of matching information significantly enhances the accuracy of camera pose estimation, even in scenarios involving noisy or sparse 3D models. Comprehensive evaluations demonstrate that the proposed method outperforms other methods in challenging scenes and achieves competitive results in large-scale visual localization benchmarks. The code will be available at
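As a rough illustration of the final step that the semi-dense 2D-3D matches feed into, the sketch below recovers a camera projection matrix from synthetic 2D-3D correspondences via classic Direct Linear Transform (DLT) resection. This is not the paper's point inference network or its pose solver; all names and numbers here are illustrative, and a real pipeline would typically use a robust PnP solver (e.g. RANSAC-based) on the extracted matches.

```python
import numpy as np

def dlt_resection(pts3d, pts2d):
    """Estimate a 3x4 projection matrix P from 2D-3D matches via DLT.

    pts3d: (N, 3) scene coordinates, pts2d: (N, 2) image coordinates, N >= 6.
    """
    X = np.hstack([pts3d, np.ones((len(pts3d), 1))])  # homogeneous (N, 4)
    rows = []
    for Xi, (u, v) in zip(X, pts2d):
        # each correspondence contributes two linear constraints on P
        rows.append(np.concatenate([Xi, np.zeros(4), -u * Xi]))
        rows.append(np.concatenate([np.zeros(4), Xi, -v * Xi]))
    # the solution is the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)

def project(P, pts3d):
    """Project 3D points with P and dehomogenize to pixel coordinates."""
    X = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    x = X @ P.T
    return x[:, :2] / x[:, 2:3]

# Synthetic check: project points with a known camera, then recover it.
rng = np.random.default_rng(0)
pts3d = rng.uniform(-1.0, 1.0, (20, 3)) + np.array([0.0, 0.0, 5.0])
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])               # illustrative intrinsics
P_true = K @ np.hstack([np.eye(3), np.array([[0.1], [0.2], [0.3]])])
pts2d = project(P_true, pts3d)

P_est = dlt_resection(pts3d, pts2d)
err = np.abs(project(P_est, pts3d) - pts2d).max()  # max reprojection error
```

With exact (noise-free) correspondences the reprojection error is near machine precision; the abstract's point is that having many reliable 2D-3D matches makes this estimation step far more robust when the correspondences are noisy or the 3D model is sparse.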
