

Poster

Reconstructing Humans with a Biomechanically Accurate Skeleton

Yan Xia · Xiaowei Zhou · Etienne Vouga · Qixing Huang · Georgios Pavlakos


Abstract:

In this paper, we introduce a method for reconstructing humans in 3D from a single image using a biomechanically accurate skeleton model. To achieve this, we train a transformer that takes an image as input and estimates the parameters of the model. Due to the lack of training data for this task, we build a pipeline to generate pseudo ground-truth data and implement a training procedure that iteratively refines these pseudo labels for improved accuracy. Compared to state-of-the-art methods in 3D human pose estimation, our model achieves competitive performance on standard benchmarks, while significantly outperforming them in settings with extreme 3D poses and viewpoints. This result highlights the benefits of using a biomechanical skeleton with realistic degrees of freedom for robust pose estimation. Additionally, we show that previous models frequently violate joint angle limits, leading to unnatural rotations. In contrast, our approach leverages the biomechanically plausible degrees of freedom, yielding more realistic joint rotation estimates. We validate our approach across multiple human pose estimation benchmarks. We will make all code, models, and data publicly available upon publication.
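To illustrate the core idea in the abstract (a transformer that maps image features to the parameters of a skeleton model), here is a minimal sketch in PyTorch. This is not the authors' implementation: the class name, parameter count, token layout, and the assumption that a backbone provides a grid of image features are all hypothetical placeholders chosen for the example.

```python
# Minimal sketch (NOT the paper's code): a transformer head that regresses
# skeleton-model parameters from precomputed image features.
# NUM_SKELETON_PARAMS and SkeletonParamRegressor are hypothetical names.
import torch
import torch.nn as nn

NUM_SKELETON_PARAMS = 40  # hypothetical: e.g., limited-DoF joint angles + shape


class SkeletonParamRegressor(nn.Module):
    def __init__(self, feat_dim=512, depth=4, heads=8):
        super().__init__()
        # The image backbone is abstracted away: we assume it yields a grid of
        # patch/CNN features of shape (batch, num_tokens, feat_dim).
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.query = nn.Parameter(torch.zeros(1, 1, feat_dim))  # learnable readout token
        self.head = nn.Linear(feat_dim, NUM_SKELETON_PARAMS)

    def forward(self, image_feats):
        # Prepend the readout token, apply self-attention over all tokens,
        # then regress the skeleton parameters from the readout token.
        b = image_feats.shape[0]
        tokens = torch.cat([self.query.expand(b, -1, -1), image_feats], dim=1)
        tokens = self.encoder(tokens)
        return self.head(tokens[:, 0])  # (batch, NUM_SKELETON_PARAMS)


# Usage with dummy features standing in for a real backbone's output:
feats = torch.randn(2, 49, 512)
params = SkeletonParamRegressor()(feats)
print(params.shape)  # torch.Size([2, 40])
```

In practice, such a regressor would be trained on the pseudo ground-truth labels the abstract describes, with those labels periodically refined by the model's own improved estimates; that refinement loop is not shown here.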
