

Poster

Rethinking Training for De-biasing Text-to-Image Generation: Unlocking the Potential of Stable Diffusion

Eunji Kim · Siwon Kim · Minjun Park · Rahim Entezari · Sungroh Yoon


Abstract:

Recent text-to-image models, such as Stable Diffusion, show significant demographic biases. Existing de-biasing techniques rely heavily on additional training, which imposes high computational costs and risks compromising core image generation functionality. This hinders their wide adoption in real-world applications. In this paper, we explore Stable Diffusion's overlooked potential to reduce bias without requiring additional training. Through our analysis, we uncover that initial noises associated with minority attributes form `minority regions' rather than being scattered. We view these `minority regions' as opportunities in SD to reduce bias. To unlock this potential, we propose a novel de-biasing method called `weak guidance,' carefully designed to guide a random noise to the minority regions without compromising semantic integrity. Through analysis and experiments on various versions of SD, we demonstrate that our proposed approach effectively reduces bias without additional training, achieving both efficiency and preservation of core image generation functionality.
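The abstract does not spell out how `weak guidance' is computed, but a natural reading is a guidance-style interpolation applied with a small scale, so the push toward a minority attribute stays weak enough to preserve the base prompt's semantics. The sketch below illustrates that idea only; `toy_denoiser`, the prompt embeddings, and the blending form are hypothetical stand-ins, not the paper's actual method.

```python
import numpy as np

def toy_denoiser(x, prompt_embedding):
    # Hypothetical stand-in for a diffusion model's noise prediction;
    # a real model would condition on text embeddings via cross-attention.
    return 0.1 * x + 0.05 * prompt_embedding

def weak_guidance_step(x, base_prompt, attribute_prompt, scale=0.5):
    """Blend the noise prediction weakly toward a minority attribute.

    With scale < 1, the attribute term only nudges the prediction, so the
    base prompt's semantics dominate (a hedged reading of `weak guidance').
    """
    eps_base = toy_denoiser(x, base_prompt)
    eps_attr = toy_denoiser(x, attribute_prompt)
    return eps_base + scale * (eps_attr - eps_base)
```

At `scale=0` the step reduces to the unconditional base prediction, at `scale=1` it fully follows the attribute, and intermediate values trade off between the two; the paper's contribution lies in choosing this guidance so random noise lands in a minority region without losing semantic integrity.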
