

Poster

SocialGesture: Delving into Multi-person Gesture Understanding

Xu Cao · Pranav Virupaksha · Wenqi Jia · Bolin Lai · Fiona Ryan · Sangmin Lee · James Rehg


Abstract:

Previous research in human gesture recognition has largely overlooked multi-person interactions, which are crucial for understanding the social context of naturally occurring gestures. This limitation in existing datasets presents a significant challenge in aligning human gestures with other modalities like language and speech. To address this issue, we introduce SocialGesture, the first large-scale dataset specifically designed for multi-person gesture analysis. SocialGesture features a diverse range of natural scenarios and supports multiple gesture analysis tasks, including video-based recognition and temporal localization, providing a valuable resource for advancing the study of gesture in complex social interactions. Furthermore, we propose a novel visual question answering (VQA) task to benchmark the performance of vision-language models (VLMs) on social gesture understanding. Our findings highlight several limitations of current gesture recognition models, offering insights into future directions for improvement in this field.
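
To make the proposed VQA benchmarking setup concrete, the sketch below shows one plausible shape for a multiple-choice evaluation loop over multi-person gesture questions. It is a minimal illustration only: the sample fields, the `query_vlm` stub, and the demo question are hypothetical placeholders, since the abstract does not specify the dataset's interface or question format.

```python
# Hypothetical sketch of a multiple-choice VQA evaluation loop for
# multi-person gesture understanding. None of these names come from
# the SocialGesture paper; they only illustrate the task structure.

from dataclasses import dataclass


@dataclass
class GestureVQASample:
    video_path: str        # clip showing a multi-person interaction
    question: str          # e.g. "Who is the pointing gesture directed at?"
    choices: list[str]     # candidate answers (people, objects, regions)
    answer_index: int      # index of the ground-truth choice


def query_vlm(video_path: str, question: str, choices: list[str]) -> int:
    """Placeholder for a vision-language model call.

    A real implementation would encode sampled video frames, format the
    question and choices into a prompt, and parse the model's reply back
    into a choice index. Here we just return a fixed guess.
    """
    return 0


def evaluate(samples: list[GestureVQASample]) -> float:
    """Return multiple-choice accuracy of the VLM over the samples."""
    correct = sum(
        query_vlm(s.video_path, s.question, s.choices) == s.answer_index
        for s in samples
    )
    return correct / len(samples) if samples else 0.0


if __name__ == "__main__":
    demo = [
        GestureVQASample(
            video_path="clip_0001.mp4",
            question="Who is the speaker pointing at?",
            choices=["person A", "person B", "an object off-screen"],
            answer_index=1,
        )
    ]
    print(f"accuracy: {evaluate(demo):.2f}")
```

Framing the task as multiple choice keeps scoring to a simple accuracy comparison; a free-form variant would instead need answer matching or human/model grading.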
