
LLMs are Good Sign Language Translators

Jia Gong · Lin Geng Foo · Yixuan He · Hossein Rahmani · Jun Liu

Arch 4A-E Poster #363
Thu 20 Jun 5 p.m. PDT — 6:30 p.m. PDT


Sign Language Translation (SLT) is a challenging task that aims to translate sign videos into spoken language. Inspired by the strong translation capabilities of large language models (LLMs) trained on extensive web-scale multilingual text corpora, we aim to harness off-the-shelf LLMs for SLT. In this paper, we regularize sign videos to embody linguistic characteristics of spoken language, and propose a novel SignLLM framework that transforms sign videos into a language-like representation that off-the-shelf LLMs can readily consume. SignLLM comprises two key modules: (1) the Vector-Quantized Visual Sign (VQ-Sign) module converts sign videos into a sequence of discrete character-level sign tokens, and (2) the Codebook Reconstruction and Alignment module converts these character-level tokens into word-level sign representations using an optimal transport formulation. A sign-text alignment loss further bridges the gap between sign and text tokens, enhancing semantic compatibility. We achieve state-of-the-art results on two widely used SLT benchmarks.
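The discretization step described above can be sketched with a minimal vector-quantization example: each continuous frame feature is snapped to its nearest entry in a learned codebook, producing a sequence of discrete "character-level" sign tokens. This is a hedged illustration of the general VQ idea only; the codebook size, feature dimension, and function names below are assumptions, not details from the paper.

```python
import numpy as np

# Illustrative codebook of 512 discrete sign "characters" with 64-d features.
# In a real system the codebook is learned jointly with the visual encoder;
# here we use random values purely to demonstrate the quantization step.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))

def quantize(frame_features: np.ndarray) -> np.ndarray:
    """Map (T, 64) continuous frame features to (T,) discrete token ids
    by nearest-neighbor lookup in the codebook (squared Euclidean distance)."""
    dists = ((frame_features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

# Stand-in for encoded features of a 100-frame sign video.
video_features = rng.normal(size=(100, 64))
tokens = quantize(video_features)
print(tokens.shape)  # (100,) — one discrete token per frame
```

A downstream module (in the paper, via an optimal transport formulation) would then group these character-level tokens into word-level representations before handing them to the LLM.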
