Making Training-Free Diffusion Segmentors Scale with Generative Power
Abstract
As powerful generative models, text-to-image diffusion models have recently been explored for discriminative tasks as well. One line of research adapts a pre-trained diffusion model to semantic segmentation without any further training, yielding what we call training-free diffusion segmentors. These methods typically rely on cross-attention maps from the model’s attention layers, which are assumed to capture semantic relationships between image pixels and text tokens. Ideally, such approaches should benefit from more powerful diffusion models, \textit{i.e.}, stronger generative capability should lead to better segmentation. However, we observe that existing methods often fail to scale accordingly, and in some cases segmentation performance even degrades with more powerful models. To understand this issue, we identify two underlying gaps: (i) cross-attention is computed across multiple heads and layers, yet these individual attention maps do not readily combine into a unified global representation; and (ii) even when a global map is available, it does not directly yield accurate semantic correlations for segmentation, owing to score imbalances among text tokens. To bridge these gaps, we propose two techniques, auto aggregation and per-pixel rescaling, which together enable training-free segmentation to better leverage the underlying model’s capability. We extensively evaluate our approach on standard semantic segmentation benchmarks and further integrate it into an advanced generative framework, demonstrating both its broad applicability and improved performance.
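To make the two steps named in the abstract concrete, the following is a minimal PyTorch sketch, not the paper's actual algorithm: the function name, the aggregation rule (a plain mean over heads and layers), and the rescaling rule (per-token min-max normalization across pixels) are all illustrative assumptions standing in for the proposed auto aggregation and per-pixel rescaling.

```python
import torch

def aggregate_and_rescale(attn_maps: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Hypothetical sketch of the two steps described in the abstract.

    attn_maps: cross-attention scores of shape
        [num_layers, num_heads, num_pixels, num_tokens],
        where each token corresponds to a candidate class name.
    Returns a per-pixel class assignment of shape [num_pixels].
    """
    # (1) Auto aggregation, approximated here by an unweighted mean over
    # heads and layers; the actual aggregation weights are unspecified here.
    global_map = attn_maps.mean(dim=(0, 1))  # [num_pixels, num_tokens]

    # (2) Per-pixel rescaling, approximated here by min-max normalizing each
    # token's scores across pixels so that tokens with systematically larger
    # attention scores do not dominate the per-pixel comparison.
    token_min = global_map.min(dim=0, keepdim=True).values
    token_max = global_map.max(dim=0, keepdim=True).values
    rescaled = (global_map - token_min) / (token_max - token_min + 1e-8)

    # Assign each pixel to the token with the highest rescaled score.
    probs = torch.softmax(rescaled / temperature, dim=-1)
    return probs.argmax(dim=-1)
```

Under these assumptions, the first step collapses the many per-head, per-layer maps into a single global map, and the second step compensates for the token-level score imbalance before pixels are labeled by their highest-scoring token.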