

HumMUSS: Human Motion Understanding using State Space Models

Arnab Mondal · Stefano Alletto · Denis Tome

Arch 4A-E Poster #207
Wed 19 Jun 10:30 a.m. PDT — noon PDT


Understanding human motion from video is essential for a range of applications, including pose estimation, mesh recovery, and action recognition. While state-of-the-art methods predominantly rely on transformer-based architectures, these approaches have limitations in practical scenarios: transformers are slow when sequentially predicting on a continuous stream of frames in real time, and they do not generalize to new frame rates. In light of these constraints, we propose a novel attention-free spatiotemporal model for human motion understanding, building on recent advancements in state space models. Our model not only matches the performance of transformer-based models on various motion understanding tasks but also brings added benefits, such as adaptability to different video frame rates and faster training on longer sequences of keypoints. Moreover, the proposed model supports both offline and real-time applications. For real-time sequential prediction, our model is both memory efficient and several times faster than transformer-based approaches while maintaining their high accuracy.
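To make the frame-rate and streaming claims concrete, here is a minimal illustrative sketch (not the paper's actual architecture) of a discretized linear state space model. The continuous-time parameters (A, B, C) are fixed, and the discretization step `dt` is tied to the frame rate, so the same model can be re-discretized for a new frame rate; recurrent inference also costs a constant amount of work per frame, which is why streaming prediction is cheap. All function names and matrices below are assumptions chosen for this toy example.

```python
import numpy as np

def discretize(A, B, dt):
    """Bilinear (Tustin) discretization: continuous (A, B) -> discrete (Ad, Bd).

    `dt` is the sampling interval, i.e. 1 / frame_rate for video input.
    """
    I = np.eye(A.shape[0])
    inv = np.linalg.inv(I - (dt / 2) * A)
    Ad = inv @ (I + (dt / 2) * A)
    Bd = inv @ (dt * B)
    return Ad, Bd

def ssm_scan(A, B, C, u, dt):
    """Run the discretized SSM recurrently over a sequence of inputs `u`.

    Each step costs O(state_dim^2), independent of sequence length, so
    per-frame latency stays constant when streaming.
    """
    Ad, Bd = discretize(A, B, dt)
    x = np.zeros(A.shape[0])
    ys = []
    for u_t in u:
        x = Ad @ x + Bd @ u_t   # state update for one frame
        ys.append(C @ x)        # per-frame output (e.g. pose features)
    return np.array(ys)

# Toy 1-D system: x'(t) = -x(t) + u(t), y(t) = x(t).
A = np.array([[-1.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])

# One second of constant input, sampled at two different frame rates.
y_30fps = ssm_scan(A, B, C, np.ones((30, 1)), dt=1 / 30)
y_60fps = ssm_scan(A, B, C, np.ones((60, 1)), dt=1 / 60)
```

Because both runs discretize the same continuous-time system, the outputs after one second of simulated time nearly coincide (both approximate 1 - e^{-1} ≈ 0.632), illustrating how a continuous-time parameterization adapts to a change in frame rate without retraining.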
