

Poster

MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research

James Burgess · Jeffrey J Nirschl · Laura Bravo-Sánchez · Alejandro Lozano · Sanket Rajan Gupte · Jesus G. Galaz-Montoya · Yuhui Zhang · Yuchang Su · Disha Bhowmik · Zachary Coman · Sarina M. Hasan · Alexandra Johannesson · William D. Leineweber · Malvika G Nair · Ridhi Yarlagadda · Connor Zuraski · Wah Chiu · Sarah Cohen · Jan N. Hansen · Manuel D Leonetti · Chad Liu · Emma Lundberg · Serena Yeung


Abstract:

Scientific research demands sophisticated reasoning over multimodal data, a challenge especially prevalent in biology. Despite recent advances in multimodal large language models (MLLMs) for AI-assisted research, existing multimodal reasoning benchmarks target at most college-level difficulty, while research-level benchmarks emphasize lower-level perception, falling short of the complex multimodal reasoning needed for scientific discovery. To bridge this gap, we introduce MicroVQA, a visual question answering (VQA) benchmark designed to assess three reasoning capabilities vital in research workflows: expert image understanding, hypothesis generation, and experiment proposal. MicroVQA consists of 1,061 multiple-choice questions (MCQs) curated by biology experts across diverse microscopy modalities, ensuring that VQA samples reflect real scientific practice. We find that standard MCQ creation methods fail to properly test our targeted reasoning capabilities, motivating a new two-stage pipeline: an optimized LLM prompt structures question-answer pairs into MCQs; then, an agent-based 'RefineBot' generates more challenging distractors. Benchmarking state-of-the-art MLLMs reveals a peak performance of 43%; models with smaller LLMs only slightly underperform the top models, suggesting that language-based reasoning is less challenging than multimodal reasoning; and tuning with scientific articles enhances performance. Expert analysis of chain-of-thought reasoning failures indicates that multimodal reasoning errors are the most frequent, followed by knowledge errors and overgeneralization. These insights highlight the challenges of multimodal scientific reasoning and show that MicroVQA is a valuable resource for advancing AI-driven biomedical research.
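To make the two-stage MCQ pipeline concrete, below is a minimal Python sketch under stated assumptions: the function names, prompts, output parsing, and retry budget are all illustrative inventions, not the authors' released code, and `call_llm` is a hypothetical stub you would wire to an LLM provider. The key idea it illustrates is stage 1 (structure a raw expert question-answer pair into an MCQ) followed by stage 2 (an agent loop that hardens distractors until a checker model answering from text alone can no longer exploit language shortcuts).

```python
"""Illustrative sketch of a two-stage MCQ-hardening pipeline in the spirit of
MicroVQA's method; all names and prompts here are assumptions, not source code."""
from dataclasses import dataclass, field


@dataclass
class MCQ:
    question: str
    answer: str                               # the correct option
    distractors: list[str] = field(default_factory=list)


def call_llm(prompt: str) -> str:
    """Hypothetical stub: connect to any LLM provider; returns the text reply."""
    raise NotImplementedError


def structure_as_mcq(raw_question: str, raw_answer: str) -> MCQ:
    """Stage 1: an optimized prompt converts an expert QA pair into MCQ form."""
    reply = call_llm(
        "Rewrite this expert question-answer pair as a multiple-choice question "
        "with three plausible distractors, one per line.\n"
        f"Q: {raw_question}\nA: {raw_answer}"
    )
    return MCQ(raw_question, raw_answer, reply.splitlines()[:3])


def refine_bot(mcq: MCQ, max_rounds: int = 5) -> MCQ:
    """Stage 2: agent loop that regenerates distractors while a text-only
    checker model can still pick the correct answer without the image."""
    for _ in range(max_rounds):
        options = "\n".join([mcq.answer, *mcq.distractors])
        guess = call_llm(
            f"Answer without seeing the image:\n{mcq.question}\n{options}"
        )
        if mcq.answer not in guess:           # shortcut removed; hard enough
            return mcq
        reply = call_llm(
            "These distractors were eliminable from language alone. Write "
            "three harder, more plausible ones, one per line.\n"
            f"{mcq.question}\nCorrect: {mcq.answer}\nOld options:\n{options}"
        )
        mcq.distractors = reply.splitlines()[:3]
    return mcq
```

The design point the sketch captures is that distractor quality is validated adversarially: an MCQ only passes once a language-only solver fails, which forces the remaining difficulty onto the multimodal reasoning the benchmark targets.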
