Poster
MP-GUI: Modality Perception with MLLMs for GUI Understanding
Ziwei Wang · Weizhi Chen · Leyang Yang · Sheng Zhou · Shengchu Zhao · Hanbei Zhan · Jiongchao Jin · Liangcheng Li · Zirui Shao · Jiajun Bu
Graphical user interfaces (GUIs) have become integral to modern society, making their understanding crucial for human-centric systems. The rapid development of multi-modal large language models (MLLMs) in recent years has revealed their significant potential for GUI understanding. However, unlike natural images or documents, GUIs are composed of artificially designed graphical elements arranged to convey specific semantic meanings. Current MLLMs, although proficient at processing graphical and textual components, struggle with GUI understanding because they lack explicit spatial structure modeling. Moreover, obtaining high-quality spatial structure data is difficult due to privacy concerns and noisy environments. To tackle these challenges, this paper presents MP-GUI, an MLLM specially designed for GUI understanding. MP-GUI features three precisely specialized perceivers that extract graphical, textual, and spatial modalities from GUIs, together with a spatial structure enhancing strategy; their outputs are adaptively combined via a fusion gate to meet the distinct requirements of different GUI interpretation tasks. To cope with the scarcity of high-quality data, we also introduce a pipeline for automatically collecting spatial information. Our extensive experiments demonstrate that MP-GUI achieves impressive results on numerous GUI understanding tasks even with a limited amount of generated data.
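The abstract describes three modality-specific perceivers whose outputs are adaptively combined via a fusion gate. The following is a minimal sketch of what such a gated fusion could look like; the module name, dimensions, and per-token softmax gating are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class ModalityFusionGate(nn.Module):
    """Illustrative gated fusion of graphical, textual, and spatial features.

    Assumption: the three perceivers emit token-aligned features of the same
    shape; the paper only states they are adaptively combined via a fusion gate.
    """

    def __init__(self, dim: int = 1024):
        super().__init__()
        # Predict one weight per modality, conditioned on all three features.
        self.gate = nn.Sequential(
            nn.Linear(3 * dim, dim),
            nn.GELU(),
            nn.Linear(dim, 3),
        )

    def forward(self, graphical: torch.Tensor, textual: torch.Tensor,
                spatial: torch.Tensor) -> torch.Tensor:
        # Each input: (batch, tokens, dim) features from one perceiver.
        stacked = torch.stack([graphical, textual, spatial], dim=-2)  # (B, T, 3, D)
        weights = torch.softmax(
            self.gate(torch.cat([graphical, textual, spatial], dim=-1)), dim=-1
        )  # (B, T, 3): adaptive per-token modality weights
        return (weights.unsqueeze(-1) * stacked).sum(dim=-2)  # (B, T, D)


# Usage sketch with dummy features before feeding the language model.
if __name__ == "__main__":
    fuse = ModalityFusionGate(dim=1024)
    g, t, s = (torch.randn(2, 256, 1024) for _ in range(3))
    print(fuse(g, t, s).shape)  # torch.Size([2, 256, 1024])
```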