
Co-Speech Gesture Synthesis by Reinforcement Learning With Contrastive Pre-Trained Rewards

Mingyang Sun · Mengchen Zhao · Yaqing Hou · Minglei Li · Huang Xu · Songcen Xu · Jianye Hao

West Building Exhibit Halls ABC 223


There is a growing demand for automatically synthesizing co-speech gestures for virtual characters. However, this remains challenging due to the complex relationship between input speech and target gestures. Most existing works focus on predicting the next gesture that best fits the data; however, such methods are myopic and lack the ability to plan for future gestures. In this paper, we propose a novel reinforcement learning (RL) framework called RACER to generate sequences of gestures that maximize overall satisfaction. RACER employs a vector quantized variational autoencoder (VQ-VAE) to learn compact representations of gestures and a GPT-based policy architecture to generate coherent sequences of gestures autoregressively. In particular, we propose a contrastive pre-training approach to compute the rewards, which integrates contextual information into action evaluation and successfully captures the complex relationships in multi-modal speech-gesture data. Experimental results show that our method significantly outperforms existing baselines in terms of both objective metrics and subjective human judgments. Demos can be found at
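The abstract's core idea of a contrastively pre-trained reward can be sketched as follows. This is a minimal illustrative implementation, not the paper's actual code: the function name, embedding shapes, and temperature value are assumptions, and an InfoNCE-style in-batch objective is used as a stand-in for whatever contrastive loss RACER employs. Matched speech-gesture embedding pairs are scored against mismatched in-batch pairs, and the resulting log-probability serves as a reward signal for the RL policy.

```python
import numpy as np

def contrastive_reward(speech_emb, gesture_emb, temperature=0.1):
    """InfoNCE-style reward sketch (hypothetical, not RACER's actual code).

    speech_emb:  (B, D) embeddings of speech contexts
    gesture_emb: (B, D) embeddings of candidate gestures
    Row i of each array is assumed to be a matched (positive) pair;
    all other rows in the batch act as negatives.
    """
    # L2-normalize so dot products are cosine similarities
    s = speech_emb / np.linalg.norm(speech_emb, axis=1, keepdims=True)
    g = gesture_emb / np.linalg.norm(gesture_emb, axis=1, keepdims=True)
    logits = s @ g.T / temperature  # (B, B) similarity matrix
    # Log-softmax over each row: how strongly gesture i matches speech i
    # relative to the other gestures in the batch.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Diagonal entries are the rewards for the matched pairs
    # (closer to 0 means a better speech-gesture fit).
    return np.diag(log_probs)

# Toy usage: gestures nearly aligned with their speech contexts
# should receive rewards near the maximum (0).
rng = np.random.default_rng(0)
speech = rng.normal(size=(4, 16))
gestures = speech + 0.01 * rng.normal(size=(4, 16))
rewards = contrastive_reward(speech, gestures)
```

In an RL loop, such a reward would score each generated gesture token in its speech context, replacing a purely next-step likelihood objective with one that reflects cross-modal fit.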
