

Poster

FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching

Zimin Xia · Alex Alahi


Abstract:

We propose a novel fine-grained cross-view localization method that estimates the 3 Degrees of Freedom pose of a ground-level image in an aerial image of the surroundings by matching fine-grained features between the two images. The pose is estimated by aligning a point plane generated from the ground image with a point plane sampled from the aerial image. To generate the ground points, we first map ground image features to a 3D point cloud. Our method then learns to select features along the height dimension to pool the 3D points to a Bird's-Eye-View (BEV) plane. This selection enables us to trace which feature in the ground image contributes to the BEV representation. Next, we sample a set of sparse matches from computed point correspondences between the two point planes and compute their relative pose using Procrustes alignment. Compared to the previous state-of-the-art, our method reduces the mean localization error by 42% on the VIGOR dataset. Qualitative results show that our method learns semantically consistent matches across ground and aerial views through weakly supervised learning from the ground truth camera pose.
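The final pose-estimation step described above, computing a 3-DoF pose from sparse BEV point correspondences via Procrustes alignment, can be illustrated with a short sketch. The code below is not the authors' implementation; it is a minimal weighted 2D Procrustes (Kabsch) solver, assuming matched ground and aerial BEV points and optional match confidences as inputs, with hypothetical names chosen for illustration.

```python
import numpy as np

def procrustes_2d(ground_pts, aerial_pts, weights=None):
    """Estimate a 3-DoF pose (2D rotation + translation) aligning ground BEV
    points to their matched aerial BEV points via weighted Procrustes/Kabsch.

    ground_pts, aerial_pts: (N, 2) arrays of matched BEV coordinates.
    weights: optional (N,) match confidences.
    Returns (R, t) with R a 2x2 rotation and t a 2-vector such that
    aerial_pts[i] is approximately R @ ground_pts[i] + t.
    """
    if weights is None:
        weights = np.ones(len(ground_pts))
    w = weights / weights.sum()

    # Weighted centroids of both point sets
    mu_g = (w[:, None] * ground_pts).sum(axis=0)
    mu_a = (w[:, None] * aerial_pts).sum(axis=0)

    # Weighted cross-covariance of the centered point sets
    Gc = ground_pts - mu_g
    Ac = aerial_pts - mu_a
    H = (w[:, None] * Gc).T @ Ac

    # SVD-based rotation, with a correction to exclude reflections
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T

    # Translation follows from the aligned centroids
    t = mu_a - R @ mu_g
    return R, t
```

In a differentiable pipeline such a solver is typically applied to soft correspondences, with the match scores serving as the weights, so that the pose loss can back-propagate to the feature-matching stage.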
