Poster
AMO Sampler: Enhancing Text Rendering with Overshooting
Xixi Hu · Keyang Xu · Bo Liu · Hongliang Fei · Qiang Liu
Achieving precise alignment between textual instructions and generated images in text-to-image generation is a significant challenge, particularly when rendering written text within images. Open-source models such as Stable Diffusion 3 (SD3), Flux, and AuraFlow often struggle with accurate text depiction, producing misspelled or inconsistent text. We introduce a training-free method with minimal computational overhead that significantly enhances text rendering quality. Specifically, we propose an overshooting sampler for pretrained rectified flow (RF) models that alternates between over-simulating the learned ordinary differential equation (ODE) and reintroducing noise. Compared to the Euler sampler, the overshooting sampler effectively introduces an extra Langevin dynamics term that helps correct the compounding error of successive Euler steps and thereby improves text rendering. However, when the overshooting strength is high, we observe over-smoothing artifacts in the generated images. To address this issue, we adaptively control the overshooting strength for each image patch according to its attention score with the text content. We name the proposed method the Attention Modulated Overshooting sampler (AMO). AMO achieves a 32.3% and 35.9% improvement in text rendering accuracy on SD3 and Flux, respectively, without compromising overall image quality or increasing inference cost.
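The "over-simulate, then renoise" idea can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' exact algorithm: it assumes a rectified-flow convention x_t = (1 - t) * x0 + t * eps with t running from 1 (pure noise) to 0 (data), a toy stand-in `velocity` function in place of a pretrained network, and a hypothetical overshoot factor `c` (with `c = 0` the step reduces to a plain Euler step).

```python
import numpy as np

def velocity(x, t):
    # Hypothetical stand-in for a pretrained rectified-flow velocity network;
    # here a toy linear ODE dx/dt = -x, for illustration only.
    return -x

def overshoot_step(x, t, t_next, c, rng):
    # Over-simulate the learned ODE: take an Euler step past the target
    # time t_next to an overshot time t_o, controlled by the factor c >= 0.
    t_o = max(t_next - c * (t - t_next), 0.0)
    x_o = x + (t_o - t) * velocity(x, t)
    # Reintroduce noise to return from t_o up to the target noise level
    # t_next, keeping the assumed marginal x_t = (1 - t) * x0 + t * eps.
    a = (1.0 - t_next) / (1.0 - t_o)
    sigma = np.sqrt(max(t_next**2 - (a * t_o) ** 2, 0.0))
    return a * x_o + sigma * rng.standard_normal(x.shape)
```

In the full AMO sampler, a single scalar `c` would be replaced by a per-patch strength modulated by the patch's attention score with the text tokens, so that text regions receive stronger correction than the rest of the image.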