

Poster

Random Conditioning for Diffusion Model Compression with Distillation

Dohyun Kim · Sehwan Park · GeonHee Han · Seung Wook Kim · Paul Hongsuck Seo


Abstract:

Diffusion models have emerged as a cornerstone of generative modeling, capable of producing high-quality images through a progressive denoising process. However, their remarkable performance comes with substantial computational costs, driven by large model sizes and the need for multiple sampling steps. Knowledge distillation, a popular approach for model compression, transfers knowledge from a complex teacher model to a simpler student model. While extensively studied for recognition tasks, its application to diffusion models, especially for generating unseen concepts absent from the training images, remains relatively unexplored. In this work, we propose a novel approach called random conditioning, which pairs noised images with randomly chosen text conditions to enable efficient, image-free knowledge distillation. By leveraging random conditioning, we show that it is possible to generate unseen concepts not included in the training data. When applied to conditional diffusion model distillation, this method enables the student model to effectively explore the condition space, leading to notable performance gains. Our approach facilitates the resource-efficient deployment of generative diffusion models, broadening their accessibility for both research and practical applications.
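The abstract does not describe a concrete training recipe, but a minimal sketch of the core idea might look like the code below. It assumes a PyTorch-style setup in which teacher and student are conditional noise-prediction networks, encode_text is a frozen text encoder, prompt_bank is a pool of text conditions, and p_random controls how often a noised input is paired with a randomly swapped condition. All of these names, and the choice to synthesize the image-free inputs from pure Gaussian noise, are illustrative assumptions rather than the authors' implementation.

```python
# Hypothetical sketch of random conditioning for image-free distillation of a
# conditional diffusion model. Names such as teacher, student, encode_text,
# prompt_bank, and p_random are assumptions made for illustration only.
import torch
import torch.nn.functional as F

def distill_step(teacher, student, encode_text, prompt_bank, optimizer,
                 image_size=64, batch_size=8, num_timesteps=1000,
                 p_random=0.5, device="cuda"):
    """One image-free distillation step using random conditioning."""
    # Image-free input: start from Gaussian noise rather than real training
    # images (one possible reading of "image-free"; an assumption here).
    x_t = torch.randn(batch_size, 3, image_size, image_size, device=device)
    t = torch.randint(0, num_timesteps, (batch_size,), device=device)

    # Sample a base text condition for each element of the batch.
    idx = torch.randint(0, len(prompt_bank), (batch_size,)).tolist()
    prompts = [prompt_bank[i] for i in idx]

    # Random conditioning: with probability p_random, swap in an unrelated
    # prompt so the noised input is paired with a randomly chosen condition,
    # letting the student explore the condition space.
    swap = (torch.rand(batch_size) < p_random).tolist()
    rand_idx = torch.randint(0, len(prompt_bank), (batch_size,)).tolist()
    prompts = [prompt_bank[rand_idx[i]] if swap[i] else prompts[i]
               for i in range(batch_size)]

    cond = encode_text(prompts)  # text embeddings from a frozen encoder

    with torch.no_grad():
        target = teacher(x_t, t, cond)   # teacher's conditional noise prediction
    pred = student(x_t, t, cond)         # student's prediction for the same input

    # Distillation loss: match the teacher's output under the (possibly random)
    # condition.
    loss = F.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The swap step is the part specific to random conditioning: decoupling the noised input from its original condition is what the abstract credits with letting the student cover concepts absent from the distillation data.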
