

Poster

Adaptive Markup Language Generation for Contextually-Grounded Visual Document Understanding

Han Xiao · Yina Xie · Guanxin Tan · Yinghao Chen · Rui Hu · Ke Wang · Aojun Zhou · Hao Li · Hao Shao · Xudong Lu · Peng Gao · Yafei Wen · Xiaoxin Chen · Shuai Ren · Hongsheng Li


Abstract:

Visual Document Understanding has become essential with the growing prevalence of text-rich visual content. This field poses significant challenges due to the need for effective integration of visual perception and textual comprehension, particularly across diverse document types with complex layouts. Moreover, existing fine-tuning datasets for this domain often fall short in providing the detailed contextual information needed for robust understanding, leading to hallucinations and limited comprehension of spatial relationships among visual elements. To address these challenges, we propose an innovative pipeline that utilizes adaptive generation of markup languages, such as Markdown, JSON, HTML, and TikZ, to build highly structured document representations and deliver contextually grounded responses. We introduce two fine-grained structured datasets: DocMark-Pile, comprising approximately 3.8M pretraining data pairs for document parsing, and DocMark-Instruct, featuring 624k fine-tuning data annotations for grounded instruction following. Extensive experiments demonstrate that our proposed model significantly outperforms existing state-of-the-art MLLMs across a range of visual document understanding benchmarks, facilitating advanced reasoning and comprehension capabilities in complex visual scenarios.
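To make the idea of adaptive markup generation concrete, the following is a minimal illustrative sketch, not the paper's actual pipeline or model interface: it assumes a hypothetical document-type classification step and uses placeholder names (choose_markup_format, build_parsing_prompt, and the category labels) to show how a target markup language might be selected per document before prompting a multimodal LLM to parse the image.

```python
# Illustrative sketch only. The mapping below and all function names are
# hypothetical; the paper's categories and prompting scheme may differ.

def choose_markup_format(doc_type: str) -> str:
    """Map a coarse document category to a markup language suited to its structure."""
    format_by_type = {
        "plain_text": "Markdown",    # prose-heavy pages: headings, lists, emphasis
        "table_or_chart": "JSON",    # key-value and tabular content
        "webpage_or_form": "HTML",   # nested layouts with explicit element hierarchy
        "diagram_or_math": "TikZ",   # geometric figures and equations
    }
    return format_by_type.get(doc_type, "Markdown")


def build_parsing_prompt(doc_type: str) -> str:
    """Compose an instruction asking a multimodal LLM to parse an image into markup."""
    fmt = choose_markup_format(doc_type)
    return (
        f"Convert the document image into well-formed {fmt}, "
        "preserving layout, reading order, and element relationships."
    )


if __name__ == "__main__":
    for t in ["plain_text", "table_or_chart", "webpage_or_form", "diagram_or_math"]:
        print(t, "->", build_parsing_prompt(t))
```

The structured markup produced this way can then serve as an intermediate, contextually grounded representation on which downstream question answering or instruction following is conditioned.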
