Poster

LLaMo: Scaling Pretrained Language Models for Unified Motion Understanding and Generation with Continuous Autoregressive Tokens

Zekun Li ⋅ Sizhe An ⋅ Chengcheng Tang ⋅ Chuan Guo ⋅ Ivan Shugurov ⋅ Linguang Zhang ⋅ Amy Zhao ⋅ Srinath Sridhar ⋅ Lingling Tao ⋅ Abhay Mittal
