

Emotional Speech-driven 3D Body Animation via Disentangled Latent Diffusion

Kiran Chhatre · Radek Danecek · Nikos Athanasiou · Giorgio Becherini · Christopher Peters · Michael J. Black · Timo Bolkart

Arch 4A-E Poster #172
Wed 19 Jun 10:30 a.m. PDT — noon PDT


Existing methods for synthesizing 3D human gestures from speech have shown promising results, but they do not explicitly model the impact of emotions on the generated gestures. Instead, these methods directly output animations from speech without control over the expressed emotion. To address this shortcoming, we present AMUSE, an emotional speech-driven body animation model based on latent diffusion. Our observation is that content (i.e., gestures related to speech rhythm and word utterances), emotion, and personal style are separable. To account for this, AMUSE maps the driving audio to three disentangled latent vectors: one for content, one for emotion, and one for personal style. A latent diffusion model, trained to generate gesture motion sequences, is then conditioned on these latent vectors. Once trained, AMUSE synthesizes 3D human gestures directly from speech with control over emotion and style by combining the content of the driving speech with the emotion and style of another speech sequence. Randomly sampling the noise of the diffusion model further generates variations of the gesture with the same emotion. Qualitative, quantitative, and perceptual evaluations demonstrate that AMUSE outputs realistic gesture sequences. Compared to the state of the art, the generated gestures are better synchronized with the speech content and better represent the emotion expressed by the input speech. Code and model will be released for research purposes.
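
To make the conditioning and transfer scheme concrete, here is a minimal PyTorch-style sketch of the idea described in the abstract: speech is encoded into three disentangled latents, a denoiser over gesture sequences is conditioned on them, and emotion/style latents can be swapped in from a second utterance. All module names, dimensions, and the single noise-prediction step are assumptions for illustration; this is not the released AMUSE implementation.

```python
# Hypothetical sketch (not the AMUSE code release): encode speech into content,
# emotion, and style latents, then condition a gesture denoiser on them.
import torch
import torch.nn as nn

class SpeechDisentangler(nn.Module):
    """Maps an audio feature sequence to content, emotion, and style latents."""
    def __init__(self, audio_dim=128, latent_dim=64):
        super().__init__()
        self.backbone = nn.GRU(audio_dim, 256, batch_first=True)
        self.to_content = nn.Linear(256, latent_dim)   # frame-level content
        self.to_emotion = nn.Linear(256, latent_dim)   # utterance-level emotion
        self.to_style = nn.Linear(256, latent_dim)     # utterance-level style

    def forward(self, audio):                  # audio: (B, T, audio_dim)
        h, _ = self.backbone(audio)            # (B, T, 256)
        content = self.to_content(h)           # (B, T, latent_dim)
        pooled = h.mean(dim=1)                 # temporal pooling
        return content, self.to_emotion(pooled), self.to_style(pooled)

class GestureDenoiser(nn.Module):
    """Predicts noise on a gesture sequence given the three speech latents."""
    def __init__(self, gesture_dim=64, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(gesture_dim + 3 * latent_dim, 256),
            nn.SiLU(),
            nn.Linear(256, gesture_dim),
        )

    def forward(self, noisy_gesture, content, emotion, style):
        # Broadcast the utterance-level latents over time and concatenate.
        T = noisy_gesture.shape[1]
        cond = torch.cat(
            [content,
             emotion.unsqueeze(1).expand(-1, T, -1),
             style.unsqueeze(1).expand(-1, T, -1)], dim=-1)
        return self.net(torch.cat([noisy_gesture, cond], dim=-1))

# Emotion/style transfer: content from speech A, emotion and style from speech B.
enc, denoiser = SpeechDisentangler(), GestureDenoiser()
audio_a = torch.randn(1, 100, 128)   # driving speech (content source)
audio_b = torch.randn(1, 100, 128)   # reference speech (emotion/style source)
content_a, _, _ = enc(audio_a)
_, emotion_b, style_b = enc(audio_b)
noisy = torch.randn(1, 100, 64)      # re-sampling this noise yields gesture variations
eps_hat = denoiser(noisy, content_a, emotion_b, style_b)
print(eps_hat.shape)                 # torch.Size([1, 100, 64])
```

In the full method the denoiser would be applied iteratively over diffusion timesteps; the sketch collapses this to a single call to highlight how the disentangled latents enter the conditioning.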
