Guiding Diffusion Models with Semantically Degraded Conditions
Shilong Han ⋅ Yuming Zhang ⋅ Hongxia Wang
Abstract
Classifier-Free Guidance (CFG) is a cornerstone of modern text-to-image models, yet its reliance on a semantically vacuous null prompt ($\varnothing$) generates a guidance signal prone to geometric entanglement. This is a key factor limiting its precision, leading to well-documented failures in complex compositional tasks. We propose Condition-Degradation Guidance (CDG), a novel paradigm that replaces the null prompt with a strategically degraded condition, $c_{\text{deg}}$. This reframes guidance from a coarse "good vs. null" contrast to a more refined "good vs. almost good" discrimination, thereby compelling the model to capture fine-grained semantic distinctions. To synthesize $c_{\text{deg}}$ adaptively, our method models the self-attention mechanism as a graph and employs Weighted PageRank to identify and degrade the most semantically salient tokens. Validated on state-of-the-art models like Stable Diffusion 3, CDG markedly improves compositional accuracy and text-image alignment, addressing key failure modes of the baseline. As a lightweight, plug-and-play module, it achieves this with negligible computational overhead. Our work challenges the reliance on static, information-sparse negative samples and establishes a new principle for diffusion guidance: the construction of adaptive, semantically aware negative samples is critical to achieving precise semantic control.
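To make the two ideas in the abstract concrete, below is a minimal sketch, not the authors' reference implementation: a Weighted PageRank over a self-attention matrix to score token salience, a simple degradation operator that suppresses the top-ranked token embeddings (one plausible choice; the paper's exact operator may differ), and a CFG-style extrapolation that contrasts the full condition with $c_{\text{deg}}$ instead of $\varnothing$. All function names, tensor shapes, and defaults here are illustrative assumptions.

```python
# Illustrative sketch of CDG's ingredients; shapes and hyperparameters are assumptions.
import torch


def pagerank_token_salience(attn: torch.Tensor, damping: float = 0.85,
                            iters: int = 50) -> torch.Tensor:
    """Weighted PageRank over a (num_tokens, num_tokens) self-attention matrix.

    attn[i, j] is read as a directed edge "token i votes for token j", so a
    token is salient when heavily attended to by other salient tokens.
    """
    n = attn.shape[0]
    # Row-normalize (softmax attention is usually already row-stochastic).
    trans = attn / attn.sum(dim=1, keepdim=True).clamp_min(1e-8)
    rank = torch.full((n,), 1.0 / n)
    for _ in range(iters):
        # Standard damped power iteration: r <- (1-d)/n + d * P^T r
        rank = (1 - damping) / n + damping * (trans.t() @ rank)
    return rank


def degrade_condition(cond: torch.Tensor, rank: torch.Tensor,
                      k: int = 2) -> torch.Tensor:
    """Build c_deg from (num_tokens, dim) prompt embeddings by zeroing the
    k most salient tokens (an assumed degradation operator)."""
    c_deg = cond.clone()
    c_deg[rank.topk(k).indices] = 0.0
    return c_deg


def cdg_noise(eps_cond: torch.Tensor, eps_deg: torch.Tensor,
              scale: float = 7.5) -> torch.Tensor:
    """CFG-style extrapolation with the degraded condition as the negative:
    eps = eps(c_deg) + s * (eps(c) - eps(c_deg))."""
    return eps_deg + scale * (eps_cond - eps_deg)
```

At sampling time one would score tokens from the model's own attention maps, degrade the prompt embedding, run the denoiser on both the full and degraded conditions, and combine the two noise predictions with `cdg_noise`; the structure mirrors standard CFG, which is consistent with the abstract's claim of a plug-and-play module with negligible overhead.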