An Efficient Token Compression Framework for Visual Object Tracking
Abstract
Refining visual representations by eliminating feature-level redundancy is crucial for simultaneously improving the performance and reducing the computational cost of visual tracking models. To enhance performance, many contemporary Transformer-based trackers leverage a larger number of historical template frames to capture richer spatio-temporal cues. However, this strategy produces a massive number of input visual tokens, which creates two critical issues: it imposes a quadratic computational burden and can also degrade the tracker's overall performance. To address these issues, we propose a compress-then-interact tracking framework, ETCTrack, that learns to efficiently compress the tokens of historical template frames into a robust target representation, moving beyond handcrafted rules. Our method first employs an Adaptive Token Compressor to dynamically construct compact yet highly discriminative template tokens by filtering out redundant visual tokens. These refined tokens are then processed by our Hierarchical Interaction Encoder for deep, adaptive interaction with the search features. This fusion proceeds through a cascade of collaborative stages, each executing a structured process of template enrichment via search context, unified feature learning, and search-feature refinement to ensure precise target localization. Experiments on seven benchmarks demonstrate that our method significantly outperforms current state-of-the-art trackers.
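The core operation the abstract describes, reducing a large set of template tokens to a compact, discriminative subset, can be illustrated with a minimal sketch. The abstract does not specify how the Adaptive Token Compressor scores tokens (it is learned), so the scoring function below (a simple L2-norm saliency proxy) and the `keep_ratio` parameter are illustrative assumptions, not the paper's method:

```python
import numpy as np

def compress_tokens(tokens, keep_ratio=0.25):
    """Hypothetical stand-in for a learned token compressor:
    score each template token and keep only the top-k, discarding
    the rest as redundant. In the actual framework the scores
    would be produced by a learned module, not a fixed norm."""
    n, d = tokens.shape
    k = max(1, int(n * keep_ratio))
    # Proxy saliency score: L2 norm of each token embedding.
    scores = np.linalg.norm(tokens, axis=1)
    keep = np.argsort(scores)[-k:]   # indices of the k highest-scoring tokens
    return tokens[np.sort(keep)]     # keep tokens in their original order

# Example: 256 template tokens of dim 64 compressed to 64 tokens.
tokens = np.random.default_rng(0).normal(size=(256, 64))
compact = compress_tokens(tokens, keep_ratio=0.25)
print(compact.shape)  # (64, 64)
```

Because subsequent attention over the search region scales quadratically with sequence length, shrinking the template token count before interaction is what yields the computational savings claimed above.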