DocPrune: Efficient Document Question Answering via Background, Question, and Comprehension-aware Token Pruning
Abstract
Recent advances in vision–language models have shown strong performance across diverse multimodal tasks, including document question answering, which leverages structured visual cues from text, tables, and figures. However, unlike natural images, document images contain large background regions and only sparse supporting evidence, so substantial computation is wasted, especially on long documents. We observe that existing token reduction methods designed for natural images and videos fail to exploit the structural sparsity unique to documents. To address this, we propose DOCPRUNE, a training-free token pruning framework for efficient long-document understanding. DOCPRUNE preserves only the tokens essential to the task and removes unnecessary ones, such as background or question-irrelevant tokens. Moreover, it automatically selects the layers at which to initiate token pruning based on the model’s level of comprehension. On the M3DocRAG benchmark, DOCPRUNE improves encoder and decoder throughput by 3.0× and 3.3×, respectively, while boosting the F1 score by +1.0, achieving both higher accuracy and greater efficiency without any additional training.