Poster

VILA-M3: Enhancing Vision-Language Models with Medical Expert Knowledge

Vishwesh Nath · Wenqi Li · Dong Yang · Andriy Myronenko · Yao Lu · Zhijian Liu · Danny Yin · Yucheng Tang · Pengfei Guo · Ziyue Xu · Can Zhao · Yufan He · Greg Heinrich · Mingxin Zheng · Benjamin D. Simon · Stephanie Anne Harmon · Michael Zephyr · Marc Edgar · Stephen R. Aylward · Pavlo Molchanov · Yan Mee LAW · Baris Turkbey · Holger R. Roth · Daguang Xu


Abstract: Generalist vision-language models (VLMs) have made significant strides in computer vision, but they fall short in specialized fields like healthcare, where expert knowledge is essential. Current large multimodal models like Gemini and GPT-4o are insufficient for medical tasks because they rely on memorized internet knowledge rather than the nuanced expertise required in healthcare. Meanwhile, existing medical VLMs (e.g., Med-Gemini) often lack expert consultation as part of their design, and many rely on outdated, static datasets that were not created with modern, large deep learning models in mind. VLMs are usually trained in three stages: vision pre-training, vision-language pre-training, and instruction fine-tuning (IFT). IFT is typically applied using a mixture of generic and healthcare data. In contrast, we propose that medical VLMs require a fourth stage of specialized IFT, which focuses on medical data and incorporates information from domain expert models. Domain expert models are crucial in medicine because they are trained for specific clinical tasks, e.g., detecting tumors and classifying abnormalities through segmentation and classification; they learn fine-grained features of medical data that are often too intricate for a VLM to capture effectively. This paper introduces a new framework, VILA-M3, for medical VLMs that utilizes domain knowledge via expert models. We argue that generic VLM architectures alone are not viable for real-world clinical applications, and that on-demand use of domain-specialized expert model knowledge is critical for advancing AI in healthcare. Through our experiments, we show improved state-of-the-art (SOTA) performance, with an average improvement of 9% over the prior SOTA model Med-Gemini and 6% over models trained on the specific tasks. Our approach emphasizes the importance of domain expertise in creating precise, reliable VLMs for medical applications.
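The abstract describes an on-demand expert-consultation pattern: the VLM decides when a clinical task calls for a specialized model, runs it, and folds its output back into the generation context. Below is a minimal, self-contained Python sketch of what such a loop could look like. Every name in it (ExpertResult, register_expert, vlm.route, vlm.generate) is a hypothetical placeholder for illustration, not the actual VILA-M3 API.

```python
# Hypothetical sketch of on-demand expert consultation, as described in the
# abstract. None of these names come from the VILA-M3 codebase.

from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class ExpertResult:
    """Structured output of a domain expert model (e.g., a tumor segmenter)."""
    summary: str  # textual rendering folded back into the VLM prompt

# Registry mapping task names to expert models (assumed design, not VILA-M3's).
EXPERTS: Dict[str, Callable[[bytes], ExpertResult]] = {}

def register_expert(task: str):
    """Decorator that registers an expert model under a task name."""
    def wrap(fn: Callable[[bytes], ExpertResult]):
        EXPERTS[task] = fn
        return fn
    return wrap

@register_expert("segmentation")
def tumor_segmenter(image: bytes) -> ExpertResult:
    # Placeholder: a real expert would run a trained segmentation network.
    return ExpertResult(summary="lesion mask covering 4.2% of the slice")

def answer_with_expert(vlm, image: bytes, question: str) -> str:
    """Two-pass inference: the VLM first routes the query to an expert if one
    is needed, then answers with the expert's finding appended to its context.
    `vlm.route` and `vlm.generate` are assumed interfaces for illustration."""
    task: Optional[str] = vlm.route(image, question)  # e.g., "segmentation" or None
    if task in EXPERTS:
        finding = EXPERTS[task](image).summary
        question = f"{question}\n[Expert finding: {finding}]"
    return vlm.generate(image, question)
```

The key design point this sketch tries to capture is that the expert runs only when routed to, so generic queries pay no cost, while fine-grained clinical findings enter the VLM as text it can reason over rather than as raw pixels it must interpret itself.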
