

TransLoc4D: Transformer-based 4D Radar Place Recognition

Guohao Peng · Heshan Li · Yangyang Zhao · Jun Zhang · Zhenyu Wu · Pengyu Zheng · Danwei Wang

Arch 4A-E Poster #290
Thu 20 Jun 5 p.m. PDT — 6:30 p.m. PDT


Place recognition is crucial to localization and mapping for unmanned vehicles. Recent years have witnessed numerous explorations in the field, mostly employing 2D cameras and 3D LiDARs. Despite their admirable performance, these sensors may struggle in adverse weather such as rain and fog. Fortunately, 4D millimeter-wave Radar emerges as a promising alternative: its longer wavelength makes it virtually immune to interference from the tiny particles of fog and rain. In this work, we therefore propose TransLoc4D, a novel 4D Radar place recognition model based on sparse convolution and Transformer structures. Specifically, a MinkLoc4D backbone is first proposed to leverage the geometric, intensity, and velocity information in 4D Radar scans. Whereas mainstream 3D LiDAR solutions capture only the geometric structure of point clouds, MinkLoc4D exploits the intensity and velocity properties of 4D Radar scans and demonstrates their effectiveness. After feature extraction, a Transformer layer is introduced to enhance local features, where linear self-attention captures the long-range dependencies of the point cloud, alleviating its sparsity and noise. To validate TransLoc4D, we construct two datasets and set up benchmarks for 4D Radar place recognition. Experiments show that TransLoc4D is feasible and can robustly handle dynamic and adverse environments.
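The abstract attributes the feature-enhancement step to linear self-attention, whose key property is that applying softmax feature maps to the queries and keys lets the key-value summary be computed once, reducing the cost from O(N²d) to O(Nd²) for N points. The following is a minimal NumPy sketch of that generic mechanism, not the authors' implementation; all function and weight names here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def linear_self_attention(x, wq, wk, wv):
    """Generic linear self-attention over per-point features.

    x: (N, d) point features; wq, wk, wv: (d, d) projection matrices
    (hypothetical names, for illustration only).
    """
    q = softmax(x @ wq, axis=-1)   # feature map over channels
    k = softmax(x @ wk, axis=0)    # feature map over points
    v = x @ wv
    context = k.T @ v              # (d, d) global summary, computed once
    return x + q @ context         # residual enhancement of local features

# Toy usage: 1024 radar points with 64-dim features.
rng = np.random.default_rng(0)
N, d = 1024, 64
x = rng.standard_normal((N, d))
wq, wk, wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
out = linear_self_attention(x, wq, wk, wv)
print(out.shape)  # (1024, 64)
```

Because the (d, d) summary aggregates all points, each output feature mixes in global context, which is one way long-range dependencies can compensate for the sparsity and noise of radar point clouds.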
