

Poster

Unveiling Visual Perception in Language Models: An Attention Head Analysis Approach

Jing Bi · Lianggong Bruce Wen · Zhang Liu · JunJia Guo · Yunlong Tang · Bingjie Wang · Chenliang Xu


Abstract:

Recent advancements in Multimodal Large Language Models (MLLMs) have demonstrated remarkable progress in visual understanding. This impressive leap raises a compelling question: how can language models, initially trained solely on linguistic data, effectively interpret and process visual content? This paper addresses this question with a systematic investigation across four model families and four model scales, uncovering a unique class of attention heads that focus specifically on visual content. Our analysis reveals a strong correlation between the behavior of these attention heads, the distribution of their attention weights, and their concentration on visual tokens within the input. These findings enhance our understanding of how LLMs adapt to multimodal tasks, demonstrating their potential to bridge the gap between textual and visual understanding. This work paves the way for the development of AI systems capable of engaging with diverse modalities.
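The abstract describes scoring attention heads by how much of their attention mass lands on visual tokens. The snippet below is a minimal sketch of one way such a score could be computed, not the authors' actual procedure: it assumes you already have a per-layer attention tensor (e.g. from a HuggingFace-style model run with output_attentions=True) and a boolean mask marking which sequence positions hold image tokens; the function name and the "image tokens come first" layout are illustrative assumptions.

```python
import torch

def visual_attention_concentration(attn, visual_mask):
    """
    attn:        (num_heads, seq_len, seq_len) attention weights for one layer.
    visual_mask: (seq_len,) boolean tensor, True at visual-token positions.

    Returns a (num_heads,) tensor: for each head, the average fraction of
    attention mass that text-token queries place on visual-token keys.
    """
    # Attention mass each query position sends to visual keys.
    mass_on_visual = attn[:, :, visual_mask].sum(dim=-1)   # (num_heads, seq_len)
    # Average over text-token queries only, so the score reflects how much
    # the language side of the sequence "looks at" the image tokens.
    text_queries = ~visual_mask
    return mass_on_visual[:, text_queries].mean(dim=-1)    # (num_heads,)

# Toy example with random data standing in for a real forward pass.
num_heads, seq_len, num_visual = 8, 32, 16
attn = torch.rand(num_heads, seq_len, seq_len)
attn = attn / attn.sum(dim=-1, keepdim=True)               # normalize rows
visual_mask = torch.zeros(seq_len, dtype=torch.bool)
visual_mask[:num_visual] = True                             # assumption: image tokens first
scores = visual_attention_concentration(attn, visual_mask)
print(scores)  # heads with high scores are candidate "visual" heads
```

Under this kind of scoring, heads whose attention mass concentrates heavily on visual tokens would be the ones flagged as the visual-content-focused class the paper describes.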
