

Poster

Skinned Motion Retargeting With Residual Perception of Motion Semantics & Geometry

Jiaxu Zhang · Junwu Weng · Di Kang · Fang Zhao · Shaoli Huang · Xuefei Zhe · Linchao Bao · Ying Shan · Jue Wang · Zhigang Tu

West Building Exhibit Halls ABC 144

Abstract:

Good motion retargeting cannot be achieved without properly accounting for source-target differences at both the skeleton and shape-geometry levels. In this work, we propose a novel Residual RETargeting network (R2ET), which relies on two neural modification modules to progressively adjust source motions to fit target skeletons and shapes. In particular, a skeleton-aware module is introduced to preserve the source motion semantics, and a shape-aware module is designed to perceive the geometry of target characters, reducing interpenetration and contact-missing. Driven by our proposed distance-based losses, which explicitly model motion semantics and geometry, these two modules learn residual modifications on the source motion and generate plausible retargeted motion in a single inference pass without post-processing. To balance the two modifications, we further present a balancing gate that performs linear interpolation between them. Extensive experiments on the public Mixamo dataset demonstrate that R2ET achieves state-of-the-art performance and strikes a good balance between preserving motion semantics and attenuating interpenetration and contact-missing. Code is available at https://github.com/Kebii/R2ET.
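The residual design and the balancing gate can be sketched as follows. This is a minimal illustration, not the paper's implementation: the two module functions are placeholder stand-ins for the skeleton-aware and shape-aware networks, and the exact gating formula (here, linear interpolation of the two residuals with a scalar gate) is an assumption based on the abstract.

```python
import numpy as np

# Hypothetical stand-ins for R2ET's two neural modification modules;
# each returns a residual to be added to the source motion.
def skeleton_aware_residual(source_motion):
    return 0.1 * np.ones_like(source_motion)   # placeholder residual

def shape_aware_residual(source_motion):
    return -0.2 * np.ones_like(source_motion)  # placeholder residual

def retarget(source_motion, gate):
    """Residual retargeting with a balancing gate.

    gate in [0, 1] linearly interpolates between the skeleton-aware
    and shape-aware residual modifications (an assumed formulation).
    """
    delta_skel = skeleton_aware_residual(source_motion)
    delta_shape = shape_aware_residual(source_motion)
    residual = (1.0 - gate) * delta_skel + gate * delta_shape
    return source_motion + residual

# Toy "motion": frames x joints x 3 rotation parameters.
source = np.zeros((4, 22, 3))
retargeted = retarget(source, gate=0.5)
```

With gate=0 the output keeps only the skeleton-aware modification; with gate=1 only the shape-aware one, matching the interpolation role described in the abstract.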
