
Fixed Point Diffusion Models

Luke Melas-Kyriazi · Xingjian Bai

Arch 4A-E Poster #449
Wed 19 Jun 5 p.m. PDT — 6:30 p.m. PDT


We introduce the Fixed Point Diffusion Model (FPDM), a novel approach to image generation that integrates the concept of fixed point solving into the framework of diffusion-based generative modeling. Our approach embeds an implicit fixed point solving layer into the denoising network of a diffusion model, transforming the diffusion process into a sequence of closely related fixed point problems. Combined with a new stochastic training method, this approach significantly reduces model size and memory usage, and accelerates training. Moreover, it enables two new techniques for improving sampling efficiency: reallocating computation across timesteps and reusing fixed point solutions between timesteps. We conduct extensive experiments with state-of-the-art models on ImageNet, FFHQ, CelebA-HQ, and LSUN-Church, demonstrating substantial improvements in performance and efficiency. Compared to the state-of-the-art DiT model, FPDM contains 87% fewer parameters, consumes 60% less memory during training, and improves image generation quality in situations where sampling computation or time is limited.
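The core idea of solving a fixed point problem at each denoising step can be illustrated with a minimal sketch. The solver below is a generic fixed-point iteration on a toy contraction map, not the paper's actual implicit layer; the function names and tolerances are hypothetical, chosen only to show the iterate-until-convergence pattern the abstract describes.

```python
import numpy as np

def fixed_point_solve(f, z_init, tol=1e-6, max_iters=100):
    """Iterate z <- f(z) until the update falls below tol.

    A hypothetical stand-in for an implicit fixed-point layer:
    the output is (approximately) a point z* with f(z*) = z*.
    """
    z = z_init
    for _ in range(max_iters):
        z_next = f(z)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# Toy contraction: f(z) = 0.5*z + 1 has the unique fixed point z* = 2.
z_star = fixed_point_solve(lambda z: 0.5 * z + 1.0, np.zeros(1))
```

Because consecutive diffusion timesteps pose closely related fixed point problems, a solution like `z_star` can serve as the initial guess for the next timestep's solve, which is the intuition behind reusing fixed point solutions between timesteps.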