Guiding a Diffusion Model by Swapping Its Tokens
Abstract
Classifier-Free Guidance (CFG) is a widely used inference-time technique for boosting the image quality of diffusion models. Yet its reliance on text conditions prevents its use in unconditional generation. We propose a simple method that enables CFG-like guidance for both conditional and unconditional generation. The key idea is to generate a perturbed prediction via simple token-swap operations, and to use the direction between it and the clean prediction to steer sampling toward higher-fidelity distributions. In practice, we swap the pairs of most semantically dissimilar tokens along either the spatial or the channel dimension. Unlike existing methods that apply perturbation in a global or less constrained manner, our approach modifies only selected tokens, allowing finer control over the perturbation and its influence on generated samples. Experiments on the MS-COCO 2014, MS-COCO 2017, and ImageNet datasets demonstrate that our Self-Swap Guidance (SSG), when applied to state-of-the-art diffusion models, outperforms previous condition-free methods in image fidelity and prompt alignment under different setups. Its fine-grained perturbation granularity also improves robustness, reducing side effects across a wider range of perturbation strengths. Overall, SSG extends CFG to a broader scope of applications, including both conditional and unconditional generation, and can be readily inserted into any diffusion model as a plug-in to gain immediate improvements.
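To make the abstract's key idea concrete, the following is a minimal sketch, not the authors' implementation: it picks the pair of token features with the lowest cosine similarity (a stand-in for "most semantically dissimilar"), swaps them to form a perturbed prediction, and then extrapolates away from that perturbed prediction in CFG style. All function names, the NumPy representation of tokens, and the guidance-scale parameter `w` are illustrative assumptions.

```python
import numpy as np

def swap_most_dissimilar_tokens(tokens):
    """Swap the pair of tokens with the lowest cosine similarity.

    tokens: (N, D) array of token features (hypothetical representation
    of the model's spatial tokens). Returns a perturbed copy; the
    original array is left untouched.
    """
    norms = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    sim = norms @ norms.T                  # pairwise cosine similarities
    np.fill_diagonal(sim, np.inf)          # exclude self-similarity
    i, j = np.unravel_index(np.argmin(sim), sim.shape)
    perturbed = tokens.copy()
    perturbed[[i, j]] = perturbed[[j, i]]  # swap the most dissimilar pair
    return perturbed

def self_swap_guidance(eps_clean, eps_perturbed, w):
    """CFG-style extrapolation away from the perturbed prediction.

    Mirrors the classifier-free guidance update, with the swapped
    (perturbed) prediction playing the role of the unconditional branch.
    """
    return eps_clean + w * (eps_clean - eps_perturbed)
```

In an actual sampler, `eps_perturbed` would come from running the denoiser on features whose tokens were swapped, so the guidance direction needs no text condition and applies equally to conditional and unconditional generation.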