Poster
CoMatcher: Multi-View Collaborative Feature Matching
Jintao Zhang · Zimin Xia · Mingyue Dong · Shuhan Shen · Linwei Yue · Xianwei Zheng
This paper proposes a multi-view collaborative matching strategy to address the problem of sparse and fragmented tracks in Structure-from-Motion. We observe that two-view matching paradigms, when applied to image-set matching, often produce unreliable correspondences if the independently selected image pairs exhibit weak visual connection, heavy occlusion, or drastic viewpoint changes. This is because significant information is lost during 3D-to-2D projection, so two images alone provide only a limited perception of the holistic 3D scene. Accordingly, we propose a multi-view collaborative matching network (CoMatcher) that (i) leverages complementary context cues from different views to form a holistic understanding of the 3D scene and (ii) exploits multi-view consistency constraints to infer a globally optimal solution. Extensive experiments on various challenging scenarios demonstrate the superiority of our multi-view collaborative matching strategy over the mainstream two-view matching paradigm.
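To make the multi-view consistency idea concrete, here is a minimal sketch of a classic cycle-consistency filter over three views. This is an illustrative, generic check, not the CoMatcher network itself: the function names and the dict-based match representation (keypoint id to keypoint id) are assumptions for the example.

```python
def cycle_consistent(ab: dict, bc: dict, ac: dict) -> dict:
    """Keep A->B matches whose A->B->C composition agrees with A->C.

    ab, bc, ac map keypoint ids between view pairs (A,B), (B,C), (A,C).
    A match i -> j in A->B survives only if following B->C from j lands
    on the same keypoint in C that A->C assigns to i.
    """
    return {
        i: j
        for i, j in ab.items()
        if j in bc and ac.get(i) == bc[j]
    }

# Toy example: match 2 -> 12 violates the A->B->C vs. A->C cycle.
ab = {0: 10, 1: 11, 2: 12}
bc = {10: 20, 11: 21, 12: 99}
ac = {0: 20, 1: 21, 2: 22}
print(cycle_consistent(ab, bc, ac))  # {0: 10, 1: 11}
```

Filters of this kind reject correspondences that any single image pair cannot catch, which is the intuition behind enforcing consistency jointly across views rather than matching pairs independently.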