

LocLLM: Exploiting Generalizable Human Keypoint Localization via Large Language Model

Dongkai Wang · Shiyu Xuan · Shiliang Zhang

Arch 4A-E Poster #43
award Highlight
Wed 19 Jun 10:30 a.m. PDT — noon PDT


The capacity of existing human keypoint localization models is limited by the keypoint priors provided by the training data. To alleviate this restriction and pursue a more general model, this work studies keypoint localization from a different perspective: reasoning about locations based on keypoint clues in text descriptions. We propose LocLLM, the first Large Language Model (LLM)-based keypoint localization model, which takes images and text instructions as inputs and outputs the desired keypoint coordinates. LocLLM leverages the strong reasoning capability of LLMs and the clues about keypoint type, location, and relationships contained in textual descriptions. To effectively tune LocLLM, we construct localization-based instruction conversations that connect keypoint descriptions with their corresponding coordinates in the input image, and fine-tune the whole model in a parameter-efficient training pipeline. LocLLM shows remarkable performance on standard 2D/3D keypoint localization benchmarks. Moreover, incorporating language clues into localization gives LocLLM superior flexibility and generalization in cross-dataset keypoint localization, and it can even detect novel types of keypoints unseen during training. We will release the model and code for further research and evaluation.
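To make the idea of "localization-based instruction conversations" concrete, the sketch below shows one plausible way such a training pair could be assembled: a text instruction that embeds the keypoint's name and a language description of it, paired with the normalized coordinates as the target response. The prompt wording, coordinate format, and helper names here are illustrative assumptions, not the paper's actual data format.

```python
# Hypothetical sketch of a localization instruction conversation.
# The exact prompt template and coordinate encoding used by LocLLM
# are assumptions for illustration only.

def build_instruction(keypoint: str, description: str) -> str:
    """Compose a text instruction embedding keypoint-type clues."""
    return (
        f"There is a person in the image. The {keypoint} is {description}. "
        f"Where is the {keypoint} of this person?"
    )

def build_answer(x: float, y: float) -> str:
    """Encode normalized (0-1) keypoint coordinates as target text."""
    return f"[{x:.3f}, {y:.3f}]"

# One instruction/answer pair for a single keypoint.
conv = {
    "instruction": build_instruction(
        "left shoulder",
        "the joint connecting the left upper arm to the torso",
    ),
    "answer": build_answer(0.412, 0.305),
}
print(conv["answer"])  # → [0.412, 0.305]
```

During fine-tuning, pairs like this would let the model associate a language description of a keypoint with its image location, which is what enables querying unseen keypoint types by description alone.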
