

ARTrackV2: Prompting Autoregressive Tracker Where to Look and How to Describe

Yifan Bai · Zeyang Zhao · Yihong Gong · Xing Wei

Arch 4A-E Poster #429
Thu 20 Jun 5 p.m. PDT — 6:30 p.m. PDT

Abstract: We present ARTrackV2, which integrates two pivotal aspects of tracking: determining where to look (localization) and how to describe (appearance analysis) the target object across video frames. Building on the foundation of its predecessor, ARTrackV2 extends the concept by introducing a unified generative framework to "read out" the object's trajectory and "retell" its appearance in an autoregressive manner. This approach fosters a time-continuous methodology that models the joint evolution of motion and visual features, guided by previous estimates. Furthermore, ARTrackV2 stands out for its efficiency and simplicity, obviating the less efficient intra-frame autoregression and the hand-tuned parameters for appearance updates. Despite its simplicity, ARTrackV2 achieves state-of-the-art performance on prevailing benchmark datasets while demonstrating remarkable efficiency improvements. In particular, ARTrackV2 achieves an AO score of 79.5% on GOT-10k and an AUC of 86.1% on TrackingNet while being 3.6× faster than ARTrack. The code will be released.
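The abstract describes a frame-level autoregressive loop: at each frame the tracker conditions on its previous box estimates (the trajectory it "reads out") and a running appearance description (which it "retells"), with no hand-tuned appearance-update schedule. A minimal, purely illustrative sketch of such a loop is below; all names and the placeholder "model" arithmetic are assumptions for illustration, not the authors' actual architecture or API.

```python
# Hypothetical sketch of a frame-level autoregressive tracking loop in the
# spirit described by the abstract. The "model" here is a toy placeholder;
# in the real method a generative decoder jointly produces the next box and
# the updated appearance tokens.
from dataclasses import dataclass, field
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x, y, w, h)


@dataclass
class AutoregressiveTrackerSketch:
    trajectory: List[Box] = field(default_factory=list)   # past box estimates
    appearance: List[float] = field(default_factory=list)  # target descriptor

    def init(self, first_box: Box, template_feat: List[float]) -> None:
        """Initialize from the first-frame annotation and template features."""
        self.trajectory = [first_box]
        self.appearance = list(template_feat)

    def step(self, frame_feat: List[float]) -> Box:
        """One autoregressive step: predict the next box and update the
        appearance description, conditioned on previous estimates."""
        prev = self.trajectory[-1]
        # Toy stand-in for the decoder: drift the box by a scalar derived
        # from the current frame's features.
        shift = sum(frame_feat) / max(len(frame_feat), 1)
        new_box = (prev[0] + shift, prev[1] + shift, prev[2], prev[3])
        # Jointly "retell" the appearance; the blend weight is illustrative,
        # whereas the paper's point is that no hand-tuned rule is needed.
        self.appearance = [0.5 * a + 0.5 * f
                           for a, f in zip(self.appearance, frame_feat)]
        self.trajectory.append(new_box)
        return new_box
```

For example, after initializing with a box and then calling `step` once per incoming frame, `trajectory` accumulates the full track while `appearance` evolves alongside it, mirroring the joint motion-and-appearance evolution the abstract describes.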
