

Poster

MIMO: A medical vision language model with visual referring multimodal input and pixel grounding multimodal output

Yanyuan Chen · Dexuan Xu · Yu Huang · ZhanSongkun · Hanpin Wang · Dongxue Chen · Xueping Wang · Meikang Qiu · Hang Li


Abstract:

Currently, medical vision language models are widely used for medical visual question answering tasks. However, existing models face two issues: on the input side, the model relies only on text instructions and lacks a direct understanding of visual cues in the image; on the output side, the model gives only text answers and lacks a connection to key regions of the image. To address these issues, we propose MIMO, a unified medical vision language model with visual referring Multimodal Input and pixel grounding Multimodal Output. MIMO can not only combine visual cues and textual instructions to understand complex medical images and semantics, but also ground the medical terminology in its textual output within the image. To overcome the scarcity of relevant data in the medical field, we propose MIMOSeg, a comprehensive medical multimodal dataset comprising 895K samples. MIMOSeg is constructed from four different perspectives, covering basic instruction following and complex question answering with multimodal input and multimodal output. We conduct experiments on several downstream medical multimodal tasks. Extensive experimental results verify that MIMO uniquely combines visual referring and pixel grounding capabilities, which are not available in previous models.
