

Poster

Co-Speech Gesture Video Generation with Implicit Motion-Audio Entanglement

Xinjie Li · Ziyi Chen · Xinlu Yu · Iek-Heng Chu · Peng Chang · Jing Xiao


Abstract:

Co-speech gestures are essential to non-verbal communication, enhancing both the naturalness and effectiveness of human interaction. Although recent methods have made progress in generating co-speech gesture videos, many rely on strong visual controls, such as pose images or thin-plate spline (TPS) keypoint movements, which often lead to artifacts like blurry hands and distorted fingers. In response to these challenges, we present the Implicit Motion-Audio Entanglement (IMAE) method for co-speech gesture video generation. IMAE strengthens audio control by entangling implicit motion parameters, including pose and expression, with audio inputs. Our method uses a two-branch framework that combines an audio-to-motion generation branch with a video diffusion branch, enabling realistic gesture generation without requiring additional inputs during inference. To improve training efficiency, we propose a two-stage slow-fast training strategy that balances memory constraints with the need to learn meaningful gestures from long frame sequences. Extensive experimental results demonstrate that our method achieves state-of-the-art performance across multiple metrics.
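The two-branch structure described in the abstract can be illustrated with a minimal, hypothetical sketch: an audio-to-motion branch predicts implicit motion parameters from audio features, and those parameters condition a video diffusion branch, so no extra visual control (pose images or keypoints) is needed at inference. All module names, dimensions, and architectural choices below are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the two-branch idea; names and shapes are assumptions.
import torch
import torch.nn as nn


class AudioToMotionBranch(nn.Module):
    """Maps an audio feature sequence to implicit motion parameters
    (e.g., pose and expression codes), as described in the abstract."""

    def __init__(self, audio_dim=128, motion_dim=64, hidden_dim=256):
        super().__init__()
        self.encoder = nn.GRU(audio_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, motion_dim)

    def forward(self, audio_feats):            # (B, T, audio_dim)
        hidden, _ = self.encoder(audio_feats)  # (B, T, hidden_dim)
        return self.head(hidden)               # (B, T, motion_dim)


class VideoDiffusionBranch(nn.Module):
    """Placeholder denoiser conditioned on the audio-driven motion codes;
    a real system would use a full video diffusion backbone."""

    def __init__(self, motion_dim=64, frame_dim=3 * 64 * 64):
        super().__init__()
        self.denoiser = nn.Sequential(
            nn.Linear(frame_dim + motion_dim, 512),
            nn.ReLU(),
            nn.Linear(512, frame_dim),
        )

    def forward(self, noisy_frames, motion):   # (B, T, frame_dim), (B, T, motion_dim)
        cond = torch.cat([noisy_frames, motion], dim=-1)
        return self.denoiser(cond)             # predicted noise / frames


# Toy usage: audio alone drives the motion codes that condition frame generation.
audio = torch.randn(2, 16, 128)
motion = AudioToMotionBranch()(audio)
frames = VideoDiffusionBranch()(torch.randn(2, 16, 3 * 64 * 64), motion)
print(frames.shape)  # torch.Size([2, 16, 12288])
```

The abstract's "entanglement" of motion and audio, and the two-stage slow-fast training schedule, are not detailed here; the sketch only shows how one branch's output could condition the other.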
