MMSD3.0: A Multi-Image Benchmark for Real-World Multimodal Sarcasm Detection
Abstract
Despite progress in multimodal sarcasm detection, existing datasets and methods predominantly focus on single-image scenarios, overlooking potential semantic and affective relations across multiple images. This leaves a gap in modeling cases where sarcasm is triggered by multi-image cues in real-world settings. To bridge this gap, we introduce MMSD3.0, a new benchmark composed entirely of multi-image samples curated from tweets and Amazon reviews. We further propose a Cross-Image Reasoning Model (CIRM), which integrates a Dual-Stage Bridge Module and a Relevance-Guided Fusion Module to model inter-image dependencies and cross-modal correspondences. In addition, we establish a comprehensive suite of strong and representative baselines and conduct extensive experiments, showing that MMSD3.0 is an effective and reliable benchmark that better reflects real-world conditions. Moreover, CIRM achieves state-of-the-art performance across MMSD, MMSD2.0, and MMSD3.0, validating its effectiveness in both single-image and multi-image scenarios.