Models as Lego Builders: Assembling Malice from Benign Blocks via Semantic Blueprints
Abstract
Despite the rapid progress of Large Vision-Language Models (LVLMs), the integration of visual modalities introduces new safety vulnerabilities that adversaries can exploit to elicit biased or malicious outputs. In this paper, we demonstrate an underexplored vulnerability, semantic slot filling, in which LVLMs complete missing slot values with unsafe content even when the slot types are deliberately crafted to appear benign. Building on this finding, we propose \ours, a simple yet effective single-query jailbreak framework for black-box settings. \ours decomposes a harmful query into a central topic and a set of benign-looking slot types, then embeds them as structured visual prompts (e.g., mind maps, tables, or sunburst diagrams) with small random perturbations. When paired with a completion-guided instruction, these prompts lead LVLMs to automatically recompose the concealed semantics and generate unsafe outputs without triggering safety mechanisms. Although each slot appears benign in isolation (local benignness), \ours exploits LVLMs' reasoning ability to assemble these slots into coherent harmful semantics. Extensive experiments on multiple models across two widely used benchmarks demonstrate the effectiveness of \ours.