

SynSP: Synergy of Smoothness and Precision in Pose Sequences Refinement

Tao Wang · Lei Jin · Zheng Wang · Jianshu Li · Liang Li · Fang Zhao · Yu Cheng · Li Yuan · Li ZHOU · Junliang Xing · Jian Zhao

Arch 4A-E Poster #160
Wed 19 Jun 10:30 a.m. PDT — noon PDT


Predicting human pose sequences via existing pose estimators often incurs various estimation errors. Motion refinement methods aim to optimize the predicted human pose sequences from pose estimators while ensuring minimal computational overhead and latency. Prior investigations have primarily concentrated on striking a balance between the two objectives, i.e., smoothness and precision, while optimizing the predicted pose sequences. However, we observe that the tension between these two objectives can provide additional quality cues about the predicted pose sequences. These cues, in turn, can aid the network in optimizing lower-quality poses. To leverage this quality information, we propose a motion refinement network, termed SynSP, to achieve a Synergy of Smoothness and Precision in sequence refinement tasks. Moreover, SynSP can also refine multi-view pose sequences of the same person simultaneously, fixing inaccuracies in predicted poses through heightened attention to similar poses from other views, thereby amplifying the resulting quality cues and overall performance. Compared with previous methods, SynSP benefits from both pose quality and multi-view information with a much shorter input sequence length, achieving state-of-the-art results on four challenging datasets involving 2D, 3D, and SMPL pose representations in both single-view and multi-view scenes. We will release our source codes, pretrained models, and online demos to facilitate further research.
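To make the smoothness-vs-precision tension concrete, here is a minimal NumPy sketch (not the paper's method; all function names are ours) of one way such a quality cue could be derived: frames where a temporal smoother disagrees strongly with the raw prediction are likely noisy and deserve more correction.

```python
import numpy as np

def smooth(seq, k=3):
    """Moving-average temporal filter over a (T, J, D) pose sequence.

    Edge-pads the sequence so the output keeps length T. This is a
    deliberately simple smoother used only for illustration.
    """
    t, j, d = seq.shape
    kernel = np.ones(k) / k
    padded = np.pad(seq, ((k // 2, k - 1 - k // 2), (0, 0), (0, 0)), mode="edge")
    flat = padded.reshape(padded.shape[0], j * d)
    out = np.stack(
        [np.convolve(flat[:, c], kernel, mode="valid") for c in range(j * d)],
        axis=-1,
    )
    return out.reshape(t, j, d)

def quality_cue(seq, k=3):
    """Per-frame disagreement between the raw sequence and its smoothed
    version, averaged over joints. Large values flag frames where a pure
    smoothness objective would pull hardest against the raw prediction,
    i.e., likely low-quality frames."""
    diff = seq - smooth(seq, k)
    return np.linalg.norm(diff, axis=-1).mean(axis=-1)  # shape (T,)

# Toy usage: a clean linear trajectory with a jitter spike at frame 5.
T, J, D = 10, 2, 3
clean = np.linspace(0.0, 1.0, T)[:, None, None] * np.ones((T, J, D))
noisy = clean.copy()
noisy[5] += 1.0  # simulated estimator glitch
cue = quality_cue(noisy)  # cue peaks at the glitched frame
```

A learned refiner could use such a cue to attend more to low-quality frames instead of smoothing all frames uniformly, which is the trade-off the abstract describes.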
