FreeU: Free Lunch in Diffusion U-Net

Chenyang Si · Ziqi Huang · Yuming Jiang · Ziwei Liu

Arch 4A-E Poster #153
Wed 19 Jun 5 p.m. PDT — 6:30 p.m. PDT
Oral presentation: Orals 2A Image & Video Synthesis
Wed 19 Jun 1 p.m. PDT — 2:30 p.m. PDT


In this paper, we uncover the untapped potential of the diffusion U-Net, which serves as a "free lunch" that substantially improves generation quality on the fly. We first investigate the key contributions of the U-Net architecture to the denoising process and identify that its main backbone primarily contributes to denoising, whereas its skip connections mainly introduce high-frequency features into the decoder module, which can cause the network to neglect crucial functions intrinsic to the backbone. Capitalizing on this discovery, we propose a simple yet effective method, termed "FreeU", which enhances generation quality without additional training or fine-tuning. Our key insight is to strategically re-weight the contributions of the U-Net's skip connections and backbone feature maps, leveraging the strengths of both components of the U-Net architecture. Promising results on image and video generation tasks demonstrate that FreeU can be readily integrated into existing diffusion models, e.g., Stable Diffusion, DreamBooth, and ControlNet, to improve generation quality with only a few lines of code. All you need is to adjust two scaling factors during inference.
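The re-weighting the abstract describes can be sketched as a small operation applied where the decoder combines backbone and skip features. The snippet below is a minimal illustrative sketch, not the authors' implementation: it amplifies the backbone feature map by a factor `b` and attenuates the low-frequency Fourier components of the skip feature map by a factor `s` (the function name, the `radius` cutoff, and the factor values are assumptions for illustration).

```python
import numpy as np

def freeu_reweight(backbone, skip, b=1.2, s=0.9, radius=4):
    """Illustrative sketch of FreeU-style re-weighting.

    backbone, skip: arrays of shape (batch, channels, H, W).
    b amplifies the backbone (denoising) pathway; s scales down the
    low-frequency band of the skip features so high-frequency detail
    still reaches the decoder. Values here are placeholders, not the
    paper's tuned settings.
    """
    # Amplify the backbone feature map.
    backbone = backbone * b

    # Move the skip features to the Fourier domain, DC at the center.
    f = np.fft.fftshift(np.fft.fft2(skip, axes=(-2, -1)), axes=(-2, -1))

    # Build a mask that scales a central (low-frequency) square by s.
    h, w = skip.shape[-2:]
    cy, cx = h // 2, w // 2
    mask = np.ones((h, w))
    mask[cy - radius:cy + radius, cx - radius:cx + radius] = s

    # Apply the mask and transform back; keep the real part.
    f = f * mask
    skip = np.fft.ifft2(np.fft.ifftshift(f, axes=(-2, -1)),
                        axes=(-2, -1)).real
    return backbone, skip
```

In a U-Net decoder, the two returned maps would then be concatenated as usual; because the change is confined to these two scaling factors at inference time, no retraining is required.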
