Poster
Trajectory-Mamba: An Efficient Attention-Mamba Forecasting Model Based on Selective SSM
Yizhou Huang · Yihua Cheng · Kezhi Wang
Motion prediction is crucial for autonomous driving systems, as it enables accurate forecasting of future vehicle trajectories from historical motion data. This paper introduces Trajectory Mamba (Tamba), a novel, efficient trajectory prediction framework built on the selective state-space model (SSM). Conventional attention-based models incur computational costs that grow quadratically with the number of targets, hindering their application in highly dynamic environments. To address this, Tamba uses the SSM module to redesign the self-attention mechanism in the encoder-decoder architecture, achieving linear time complexity. To counter the potential loss of prediction accuracy caused by modifying the attention mechanism, we propose a joint polyline encoding strategy that better captures the associations between static and dynamic contexts, ultimately improving prediction accuracy. In addition, to strike a better balance between prediction accuracy and inference speed, we adopt a decoder structure that differs entirely from the encoder: through cross-state-space attention, all target agents share the scene context, allowing the SSM to interact with the shared scene representation during decoding and thus infer distinct trajectories over the prediction horizon. Our model achieves state-of-the-art (SOTA) inference speed and parameter efficiency on both the Argoverse 1 and Argoverse 2 datasets, with a fourfold reduction in FLOPs compared to existing methods and over 40% fewer parameters, while surpassing the vast majority of previous SOTA results. These findings validate the effectiveness of Trajectory Mamba in trajectory prediction tasks.
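The abstract's central claim is that replacing quadratic self-attention with a selective SSM yields linear time complexity in sequence length. As a rough illustration only (this is not the authors' implementation; all function names, projection matrices, and shapes below are assumptions), a Mamba-style selective scan with input-dependent parameters can be sketched as:

```python
import numpy as np

def selective_ssm_scan(x, W_delta, W_B, W_C, A):
    """Minimal selective-SSM (Mamba-style) recurrence sketch.

    x: (T, D) input sequence.
    A: (D, N) continuous-time state matrix (negative entries for stability).
    W_delta (D, D), W_B (D, N), W_C (D, N): projections that make the SSM
    parameters input-dependent ("selective").
    The loop runs once over T, so cost is O(T), unlike O(T^2) attention.
    """
    T, D = x.shape
    N = A.shape[1]
    h = np.zeros((D, N))                 # hidden state, one row per channel
    ys = np.empty((T, D))
    for t in range(T):
        delta = np.log1p(np.exp(x[t] @ W_delta))  # softplus step size, (D,)
        B = x[t] @ W_B                             # input projection, (N,)
        C = x[t] @ W_C                             # output projection, (N,)
        # zero-order-hold discretization of the continuous A
        Abar = np.exp(delta[:, None] * A)          # (D, N)
        h = Abar * h + delta[:, None] * (x[t][:, None] * B[None, :])
        ys[t] = h @ C                              # read out, (D,)
    return ys
```

Because each step depends only on the previous hidden state, the scan's memory footprint is constant in sequence length, which is the property that makes the approach attractive for scenes with many agents.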