Poster
LAL: Enhancing 3D Human Motion Prediction with Latency-aware Auxiliary Learning
Xiaoning Sun · Dong Wei · Huaijiang Sun · Shengxiang Hu
Making accurate predictions of human motion from historical observations is a crucial technology for robots that collaborate with humans. Existing human motion prediction methods are all built on the ideal assumption that robots can react instantaneously, ignoring the time delay introduced by data processing and analysis and by planning the future reaction -- jointly known as "response latency". Consequently, predictions made within this latency period become meaningless for practical use, as that part of the time has already passed and the corresponding real motions have already occurred before the robot delivers its reaction. In this paper, we argue that this seemingly meaningless prediction period can nevertheless be leveraged to significantly enhance prediction accuracy. We propose LAL, a Latency-aware Auxiliary Learning framework, which shifts the existing "instantaneous reaction" convention to a new motion prediction paradigm with both latency compatibility and latency utility. The framework consists of two branches handling different tasks: the primary branch learns to directly predict the valid target (excluding the beginning latency period) from the observation, while the auxiliary branch learns the same target from a reformed observation that additionally incorporates the latency data. A direct and effective form of auxiliary feature sharing is enforced by our tailored consistency loss, which gradually integrates latency insights from the auxiliary branch into the primary prediction branch. An alignment method based on estimated feature statistics is presented as an optional step for refining the primary branch. Experiments show that LAL achieves significant improvements in prediction accuracy without additional time cost during testing.
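The sketch below illustrates the two-branch training idea described in the abstract: a primary branch that sees only the observation, an auxiliary branch that also sees the latency-period frames, and a consistency loss that pulls the primary feature toward the latency-informed auxiliary feature. All module names, tensor shapes, sequence lengths, and the loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Minimal PyTorch sketch of latency-aware auxiliary learning (assumed design, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

T_OBS, T_LAT, T_PRED, J = 50, 10, 25, 66   # observation / latency / valid-prediction frames, joint dims (assumed)

class MotionPredictor(nn.Module):
    """Simple GRU encoder + linear decoder mapping an input sequence to T_PRED future frames."""
    def __init__(self, in_dim=J, hid=256, out_len=T_PRED):
        super().__init__()
        self.encoder = nn.GRU(in_dim, hid, batch_first=True)
        self.decoder = nn.Linear(hid, out_len * in_dim)
        self.out_len = out_len

    def forward(self, x):                      # x: (B, T, J)
        _, h = self.encoder(x)                 # h: (1, B, hid)
        feat = h[-1]                           # feature shared via the consistency loss
        pred = self.decoder(feat).view(-1, self.out_len, x.size(-1))
        return pred, feat

primary = MotionPredictor()                    # sees only the observation
auxiliary = MotionPredictor()                  # sees observation + latency-period frames

def training_step(obs, latency, target):
    """obs: (B, T_OBS, J); latency: (B, T_LAT, J); target: (B, T_PRED, J) valid frames after latency."""
    pred_p, feat_p = primary(obs)
    pred_a, feat_a = auxiliary(torch.cat([obs, latency], dim=1))
    loss_pred = F.l1_loss(pred_p, target) + F.l1_loss(pred_a, target)
    # Consistency loss: push the primary feature toward the latency-informed auxiliary feature
    # (auxiliary feature detached so knowledge flows one way into the primary branch).
    loss_cons = F.mse_loss(feat_p, feat_a.detach())
    return loss_pred + 0.1 * loss_cons         # 0.1 is an assumed weighting
```

At test time only the primary branch is run on the observation, which is consistent with the claim that no additional time is consumed during testing.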