SSM-Aware Token-Efficient VMamba via Adaptive Patch Pruning and Merging for Person Re-Identification
Abstract
Person re-identification (Re-ID) requires a balance between discriminative capability and computational efficiency for real-world deployment. However, even visual state space models (SSMs) such as VMamba, despite their linear complexity, suffer from redundant computation due to dense token processing. We propose SSM-aware Token-Efficient VMamba (TE-VMamba), which integrates adaptive patch pruning and merging modules to reduce redundant tokens while preserving identity-discriminative cues. A layer-adaptive pruning strategy removes low-importance tokens in shallow layers to enhance efficiency, whereas a depth-aware merging strategy consolidates semantically similar tokens in deeper layers to improve representation compactness. Learnable layer-wise thresholds dynamically balance accuracy and computational cost across the network. On the Market-1501 benchmark, TE-VMamba reduces FLOPs by over 60\% while maintaining competitive accuracy. These results highlight the potential of structured token reduction in state-space models for efficient and accurate person re-identification.
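The two token-reduction operations described above (importance-based pruning in shallow layers, similarity-based merging in deeper layers) can be sketched as follows. This is a minimal NumPy illustration under assumptions of ours, not the paper's implementation: the function names, the use of cosine similarity, the greedy adjacent-pair merging rule, and the fixed `keep_ratio` / `sim_threshold` values are all hypothetical stand-ins; in the actual model the layer-wise thresholds would be learnable parameters and the importance scores would come from the SSM itself.

```python
import numpy as np

def prune_tokens(tokens, scores, keep_ratio):
    # Shallow-layer pruning (illustrative): keep only the top-k tokens
    # ranked by an importance score, preserving their original order.
    k = max(1, int(len(tokens) * keep_ratio))
    top_idx = np.argsort(scores)[::-1][:k]
    return tokens[np.sort(top_idx)]

def merge_tokens(tokens, sim_threshold):
    # Deep-layer merging (illustrative): greedily average adjacent tokens
    # whose cosine similarity exceeds the (here fixed, in the model
    # learnable) threshold, consolidating semantically similar tokens.
    merged = [tokens[0].astype(float)]
    for t in tokens[1:]:
        prev = merged[-1]
        sim = np.dot(prev, t) / (np.linalg.norm(prev) * np.linalg.norm(t) + 1e-8)
        if sim > sim_threshold:
            merged[-1] = (prev + t) / 2.0  # fuse similar tokens into one
        else:
            merged.append(t.astype(float))
    return np.stack(merged)
```

Applied layer by layer, pruning shrinks the token sequence early (where many background patches carry little identity information), while merging compacts the surviving, semantically redundant tokens later, which is where the FLOPs reduction comes from.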