

Poster

Dual Focus-Attention Transformer for Robust Point Cloud Registration

Kexue Fu · Mingzhi Yuan · Changwei Wang · Weiguang Pang · Jing Chi · Manning Wang · Longxiang Gao


Abstract:

Recently, coarse-to-fine methods for point cloud registration have achieved great success, but few works deeply explore the impact of feature interaction at both the coarse and fine scales. By visualizing attention scores and correspondences, we find that existing methods fail to achieve effective feature aggregation at the two scales during feature interaction. To tackle this issue, we propose a Dual Focus-Attention Transformer framework, which focuses only on points relevant to the current point during feature interaction, avoiding interactions with irrelevant points. At the coarse scale, we design a superpoint focus-attention transformer guided by sparse keypoints selected from the neighborhoods of superpoints. At the fine scale, we perform feature interaction only between point sets that belong to the same superpoint. Experiments show that our method achieves state-of-the-art performance on three standard benchmarks. The code and pre-trained models will be available on GitHub.
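To make the "focus-attention" idea concrete, the sketch below shows one possible way to restrict attention so that each superpoint query aggregates features only from a small set of relevant keypoints, rather than from all points. This is a minimal illustration under assumed shapes and a hypothetical focus-selection rule (nearest keypoints in feature space), not the authors' implementation; the function and variable names are placeholders.

```python
# Minimal sketch of attention restricted to a per-query "focus" set.
# Not the paper's code: focus_idx selection here is a hypothetical stand-in
# for the keypoint guidance described in the abstract.
import torch
import torch.nn.functional as F

def focus_attention(queries, keys, values, focus_idx):
    """
    queries:   (N, C) superpoint features
    keys:      (M, C) keypoint features
    values:    (M, C) keypoint features to aggregate
    focus_idx: (N, K) indices of the K keys each query may attend to
    """
    gathered_k = keys[focus_idx]     # (N, K, C)
    gathered_v = values[focus_idx]   # (N, K, C)
    scale = queries.shape[-1] ** 0.5
    scores = torch.einsum('nc,nkc->nk', queries, gathered_k) / scale  # (N, K)
    weights = F.softmax(scores, dim=-1)
    return torch.einsum('nk,nkc->nc', weights, gathered_v)            # (N, C)

if __name__ == "__main__":
    # Toy usage: 64 superpoints, 256 keypoints, 16 focus points per superpoint.
    N, M, K, C = 64, 256, 16, 32
    q, k, v = torch.randn(N, C), torch.randn(M, C), torch.randn(M, C)
    # Hypothetical focus selection: K nearest keypoints in feature space.
    focus_idx = torch.cdist(q, k).topk(K, largest=False).indices      # (N, K)
    out = focus_attention(q, k, v, focus_idx)
    print(out.shape)  # torch.Size([64, 32])
```

The same masking principle would apply at the fine scale, where attention is limited to points assigned to the same superpoint.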
