Scone: Bridging Composition and Distinction in Subject-Driven Image Generation via Unified Understanding-Generation Modeling
Abstract
Subject-driven image generation has advanced from single- to multi-subject composition, while neglecting distinction: the ability to identify and generate the correct subject when the input contains multiple candidates. This limitation restricts effectiveness in realistic, complex visual settings. We propose Scone, a unified understanding-generation framework that integrates composition and distinction. In Scone, the understanding expert acts as a semantic bridge, conveying semantic cues that guide the generation expert to preserve subject identity while reducing interference. A two-stage training scheme first learns composition and then strengthens distinction through semantic alignment and attention-based masking. We also introduce SconeEval, a benchmark that evaluates composition, distinction, and their combination across diverse scenarios. Experiments show that Scone outperforms existing open-source models on both composition and distinction tasks. Our model, benchmark, and training data will be open-sourced.