

Poster

Multi-Group Proportional Representations for Text-to-Image Models

Sangwon Jung · Alex Oesterling · Claudio Mayrink Verdun · Sajani Vithana · Taesup Moon · Flavio Calmon


Abstract:

Text-to-image generative models can create vivid, realistic images from textual descriptions. As these models proliferate, they raise new concerns about whether they represent diverse demographic groups, propagate stereotypes, or efface minority populations. Despite growing attention to the "safe" and "responsible" design of artificial intelligence (AI), there is no established methodology to systematically measure and control representational harms in large image generation models. This paper introduces a framework to measure the representation of intersectional groups in images generated by text-to-image generative models. We propose a novel application of the Multi-Group Proportional Representation (MPR) metric to rigorously evaluate representational harms in image generation and develop an algorithm to optimize generative models for this representational metric. MPR evaluates the worst-case deviation of representation statistics across given population groups in images produced by a generative model, allowing for flexible and context-specific measurements based on user requirements. Through experiments, we demonstrate that MPR can effectively measure representation statistics across multiple intersectional groups and, when used as a training objective, can guide models toward more balanced generation across demographic groups while maintaining generation quality.
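The abstract describes MPR as the worst-case deviation of representation statistics across a set of population groups, but does not give a formal definition. The following is a minimal sketch of that idea under simplifying assumptions: each (possibly intersectional) group is represented by a binary indicator over the generated images, and each group has a user-specified target proportion. The function names, inputs, and the use of absolute deviation are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def worst_case_representation_gap(group_indicators, target_proportions):
    """Sketch of a worst-case representation-deviation measure.

    Assumptions (hypothetical, for illustration only):
      - group_indicators: dict mapping group name -> np.ndarray of shape (n,),
        where entry i is 1 if generated image i is labeled as belonging to the group.
      - target_proportions: dict mapping group name -> desired proportion in [0, 1].

    Returns the largest absolute gap between observed and target proportions,
    together with the group attaining it.
    """
    gaps = {}
    for group, indicators in group_indicators.items():
        observed = float(np.mean(indicators))        # empirical share of this group in the generations
        target = target_proportions[group]           # user- or context-specified reference share
        gaps[group] = abs(observed - target)
    worst_group = max(gaps, key=gaps.get)            # group with the largest deviation
    return gaps[worst_group], worst_group

# Example with two intersectional groups over 4 generated images (toy data).
indicators = {
    "group_A": np.array([1, 0, 0, 0]),
    "group_B": np.array([0, 1, 1, 1]),
}
targets = {"group_A": 0.5, "group_B": 0.5}
print(worst_case_representation_gap(indicators, targets))  # (0.25, 'group_A') or ('group_B')
```

In the paper's setting, a metric of this form could be evaluated over many overlapping, context-specific groups and used as a training objective; the sketch above only illustrates the worst-case-deviation structure.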
