Poster

FASTer: Focal Token Acquiring-and-Scaling Transformer for Long-term 3D Object Detection

Chenxu Dang · Pei An · Xinmin Zhang · ZaiPeng Duan · Xuzhong Hu · Jie Ma


Abstract:

Recent top-performing temporal 3D detectors based on LiDAR have increasingly adopted region-based paradigms: they first generate coarse proposals, then encode and fuse regional features. However, indiscriminate sampling and fusion overlook the varying contributions of individual points and incur complexity that grows exponentially with the number of input frames. Moreover, simple result-level concatenation limits global information extraction. In this paper, we propose a Focal Token Acquiring-and-Scaling Transformer (FASTer), which dynamically selects focal tokens and condenses token sequences in a lightweight manner. To emphasize the contribution of individual tokens, we propose a simple but effective Adaptive Scaling mechanism that captures geometric context while sifting out focal points. Adaptively storing and processing only the focal points of historical frames dramatically reduces the overall complexity, yielding more compact and information-dense temporal sequences. Furthermore, we propose a novel grouped hierarchical fusion strategy that progressively performs sequence scaling and intra-group fusion to facilitate the exchange of global spatial and temporal information. Experiments on the Waymo Open Dataset demonstrate that FASTer significantly outperforms other state-of-the-art detectors in both performance and efficiency, while also exhibiting improved flexibility and robustness. The code is available at https://github.com/.
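The core acquiring-and-scaling idea described in the abstract can be illustrated with a minimal sketch: score each token's importance, gate (scale) its features by that score, and retain only the top-k "focal" tokens for the historical sequence. This is not the paper's implementation; the linear scorer, sigmoid gating, and top-k selection here are placeholder assumptions standing in for the learned Adaptive Scaling mechanism.

```python
import numpy as np

def adaptive_scale_and_select(tokens, w, k):
    """Toy focal-token acquiring-and-scaling step (illustrative only).

    tokens: (N, D) array of per-point token features
    w:      (D,) scoring weights (stand-in for a learned scorer)
    k:      number of focal tokens to keep for the temporal sequence
    Returns the scaled features of the k most focal tokens and their indices.
    """
    # Per-token importance in (0, 1) via a sigmoid over a linear score.
    scores = 1.0 / (1.0 + np.exp(-(tokens @ w)))
    # Adaptive scaling: modulate each token's features by its importance.
    scaled = tokens * scores[:, None]
    # Keep only the k highest-scoring (focal) tokens, discarding the rest.
    focal_idx = np.argsort(-scores)[:k]
    return scaled[focal_idx], focal_idx
```

Storing only these k tokens per historical frame is what keeps the temporal sequence compact as the number of input frames grows, instead of retaining every sampled point.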
