Poster
Distraction is All You Need for Multimodal Large Language Model Jailbreaking
Zuopeng Yang · Jiluan Fan · Anli Yan · Erdun Gao · Xin Lin · Tao Li · Kanghua Mo · Changyu Dong
Abstract:
Multimodal Large Language Models (MLLMs) bridge the gap between visual and textual data, enabling a range of advanced applications. However, complex internal interactions among visual elements and their alignment with text can introduce vulnerabilities, which may be exploited to bypass safety mechanisms. To address this, we analyze the relationship between image content and task and find that the complexity of subimages, rather than their content, is key. Building on this insight, we propose the Distraction Hypothesis, followed by a novel framework called Contrasting Subimage Distraction Jailbreaking (CS-DJ), to achieve jailbreaking by disrupting MLLMs' alignment through multi-level distraction strategies. CS-DJ consists of two components: structured distraction, achieved through query decomposition that induces a distributional shift by fragmenting harmful prompts into sub-queries, and visual-enhanced distraction, realized by constructing contrasting subimages to disrupt the interactions among visual elements within the model. This dual strategy disperses the model's attention, reducing its ability to detect and mitigate harmful content. Extensive experiments across five representative scenarios and four popular closed-source MLLMs, including GPT-4o-mini, GPT-4o, Gemini-1.5-Flash, and Claude-3.5-Sonnet, demonstrate that CS-DJ achieves an average attack success rate of 52.40% and an average ensemble attack success rate of 74.10%. These results reveal the potential of distraction-based approaches to exploit and bypass MLLMs' defenses, offering new insights for attack strategies.
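To make the two distraction stages concrete, below is a minimal Python sketch of what they could look like in code. It is an illustrative assumption, not the authors' implementation: the helper names (decompose_query, build_contrast_grid), the naive sentence-splitting heuristic standing in for the paper's query decomposition, and the simple grid-tiling layout standing in for contrasting-subimage construction are all hypothetical, and Pillow is assumed for image handling.

```python
# Hypothetical sketch of CS-DJ's two distraction stages as described in the
# abstract. All helper names and heuristics here are illustrative assumptions,
# not the authors' released code.
from typing import List
from PIL import Image


def decompose_query(harmful_query: str, n_subqueries: int = 3) -> List[str]:
    """Structured distraction: fragment a prompt into sub-queries to induce a
    distributional shift. A real attack would likely use an auxiliary LLM for
    decomposition; this stand-in just splits on sentence boundaries."""
    parts = [p.strip() for p in harmful_query.split(".") if p.strip()]
    return parts[:n_subqueries] or [harmful_query]


def build_contrast_grid(subimages: List[Image.Image], cols: int = 2) -> Image.Image:
    """Visual-enhanced distraction: tile visually contrasting subimages into
    one composite so that cross-subimage interactions disperse the model's
    attention across the whole image."""
    if not subimages:
        raise ValueError("need at least one subimage")
    w = max(im.width for im in subimages)
    h = max(im.height for im in subimages)
    rows = -(-len(subimages) // cols)  # ceiling division
    grid = Image.new("RGB", (cols * w, rows * h), "white")
    for i, im in enumerate(subimages):
        grid.paste(im.resize((w, h)), ((i % cols) * w, (i // cols) * h))
    return grid


if __name__ == "__main__":
    # Toy usage: fragment a multi-sentence query and tile high-contrast tiles.
    subqueries = decompose_query("First step. Second step. Third step.")
    tiles = [Image.new("RGB", (64, 64), c) for c in ("black", "white", "red", "blue")]
    composite = build_contrast_grid(tiles)
    print(subqueries, composite.size)
```

In this reading, the sub-queries and the composite image would then be sent together to the target MLLM; the paper's actual decomposition and subimage-construction procedures may differ substantially.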