Recover to Predict: Progressive Retrospective Learning for Variable-Length Trajectory Prediction
Hao Zhou ⋅ Lu Qi ⋅ Xiangtai Li ⋅ Jie Zhang ⋅ Yi Liu ⋅ Xu Yang ⋅ Mingyu Fan ⋅ Fei Luo
Abstract
Trajectory prediction is critical for autonomous driving, enabling safe and efficient planning in dense, dynamic traffic. Most existing methods optimize prediction accuracy under fixed-length observations. However, real-world driving often yields variable-length, incomplete observations, posing a challenge to these methods. A common strategy is to directly map features from incomplete observations to those from complete ones. This one-shot mapping, however, struggles to learn accurate representations for short trajectories due to the significant information gap. To address this issue, we propose a $\textbf{P}$rogressive $\textbf{R}$etrospective $\textbf{F}$ramework (PRF), which gradually aligns features from incomplete observations with those from complete ones via a cascade of retrospective units. Each unit consists of a Retrospective Distillation Module (RDM) and a Retrospective Prediction Module (RPM): RDM distills features, and RPM recovers previous timesteps from the distilled features. Moreover, we propose a Rolling-Start Training Strategy (RSTS) that improves data efficiency during PRF training. PRF is plug-and-play with existing methods. Extensive experiments on the Argoverse 2 and Argoverse 1 datasets demonstrate the effectiveness of PRF. Code will be released.
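The cascade described above can be illustrated with a minimal toy sketch. All names (`rdm`, `rpm`, the toy mean-pooled feature, and the linear maps) are hypothetical placeholders, not the paper's actual learned modules; the sketch only shows the control flow of a retrospective cascade that prepends one recovered timestep per unit until the observation reaches full length:

```python
import numpy as np

rng = np.random.default_rng(0)

def rdm(feat, W):
    # Retrospective Distillation Module (toy stand-in): a nonlinear map
    # meant to align incomplete-observation features with complete ones.
    return np.tanh(feat @ W)

def rpm(distilled, V):
    # Retrospective Prediction Module (toy stand-in): predicts one
    # earlier timestep from the distilled features.
    return distilled @ V

def progressive_retrospective(traj, full_len, W, V):
    """Cascade of retrospective units: each unit distills features and
    prepends one recovered timestep until the trajectory has full_len
    steps. Hypothetical sketch; the real PRF operates on learned
    features inside a trajectory-prediction backbone."""
    traj = traj.copy()
    while traj.shape[0] < full_len:
        feat = traj.mean(axis=0)        # toy feature: mean over observed steps
        distilled = rdm(feat, W)
        recovered = rpm(distilled, V)   # one recovered earlier timestep
        traj = np.vstack([recovered, traj])
    return traj

d = 2                                   # 2-D positions
W = rng.standard_normal((d, d)) * 0.1
V = rng.standard_normal((d, d)) * 0.1
short = rng.standard_normal((3, d))     # incomplete observation: 3 steps
full = progressive_retrospective(short, full_len=5, W=W, V=V)
print(full.shape)                       # (5, 2): two timesteps recovered
```

The progressive structure is the point: rather than mapping a 3-step observation to a 5-step representation in one shot, each unit closes the gap by a single timestep, so later units operate on a longer, easier-to-align input.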