OmniGlue: Generalizable Feature Matching with Foundation Model Guidance

Hanwen Jiang · Arjun Karpur · Bingyi Cao · Qixing Huang · André Araujo

Arch 4A-E Poster #32
Fri 21 Jun 10:30 a.m. PDT — noon PDT

Abstract: The image matching field has witnessed a continuous emergence of novel learnable feature matching techniques, with ever-improving performance on conventional benchmarks. However, our investigation shows that despite these gains, their potential for real-world applications is restricted by limited generalization to novel image domains. In this paper, we introduce OmniGlue, the first learnable image matcher designed with generalization as a core principle. OmniGlue leverages broad knowledge from a vision foundation model to guide the feature matching process, boosting generalization to domains not seen at training time. Additionally, we propose a novel keypoint position-guided attention mechanism that disentangles spatial and appearance information, leading to enhanced matching descriptors. We perform comprehensive experiments on a suite of 6 datasets with varied image domains, including scene-level, object-centric, and aerial images. OmniGlue's novel components lead to relative gains on unseen domains of 18.8% with respect to a directly comparable reference model, while also outperforming the recent LightGlue method by 10.1% relative. Code and model will be released.
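The disentangling idea behind the position-guided attention can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, shapes, and the specific split (attention weights computed purely from keypoint positional encodings, values carrying only appearance descriptors) are illustrative assumptions about how spatial and appearance information could be kept separate.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def position_guided_attention(desc, pos_enc):
    """Toy position-guided self-attention (illustrative, not OmniGlue's code).

    Queries and keys come only from keypoint positional encodings,
    while values come only from appearance descriptors, so the
    attention pattern is driven by geometry and the aggregated
    features remain purely appearance-based.

    desc:    (N, d) appearance descriptors for N keypoints
    pos_enc: (N, d) positional encodings for the same keypoints
    returns: (N, d) refined descriptors
    """
    q = pos_enc                                  # spatial queries
    k = pos_enc                                  # spatial keys
    v = desc                                     # appearance values
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]), axis=-1)
    return attn @ v
```

In a standard transformer layer, queries, keys, and values would all be projections of the same fused feature, so position and appearance mix; the sketch above shows one way to keep the two streams separate.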
