

Image Neural Field Diffusion Models

Yinbo Chen · Oliver Wang · Richard Zhang · Eli Shechtman · Xiaolong Wang · Michaël Gharbi

Arch 4A-E Poster #311
award Highlight
Wed 19 Jun 5 p.m. PDT — 6:30 p.m. PDT


Diffusion models have shown an impressive ability to model complex data distributions, with several key advantages over GANs, such as stable training, better coverage of the training distribution's modes, and the ability to solve inverse problems without extra training. However, most diffusion models learn the distribution of fixed-resolution images. We propose to learn the distribution of continuous images by training diffusion models on image neural fields, which can be rendered at any resolution, and show the advantages of this approach over fixed-resolution models. A key challenge is to obtain a latent space that represents photorealistic image neural fields. We propose a simple and effective method, inspired by several recent techniques but with key changes to make the image neural fields photorealistic. Our method can convert existing latent diffusion autoencoders into image neural field autoencoders. We show that image neural field diffusion models can be trained on mixed-resolution image datasets, outperform fixed-resolution diffusion models followed by super-resolution models, and can efficiently solve inverse problems with conditions applied at different scales.
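The core object here, an image neural field, is a function mapping continuous pixel coordinates to colors, so a single set of field weights can be rendered at any resolution. The abstract does not specify the field architecture; the sketch below is a minimal, hypothetical coordinate-MLP renderer in NumPy (random weights stand in for weights that, in the paper's pipeline, would be decoded from a latent), illustrating only the resolution-free rendering property.

```python
import numpy as np

def render_neural_field(params, height, width):
    """Render an image neural field f(x, y) -> RGB at an arbitrary resolution.

    `params` is a list of (W, b) weight/bias pairs for a small MLP; this is a
    hypothetical stand-in for field weights decoded from a latent code.
    """
    # Normalized pixel-center coordinates in [0, 1]; ys, xs each (H, W).
    ys, xs = np.meshgrid(
        (np.arange(height) + 0.5) / height,
        (np.arange(width) + 0.5) / width,
        indexing="ij",
    )
    h = np.stack([xs.ravel(), ys.ravel()], axis=-1)  # (H*W, 2)
    # Forward pass: ReLU hidden layers, sigmoid output for RGB in [0, 1].
    for W, b in params[:-1]:
        h = np.maximum(h @ W + b, 0.0)
    W, b = params[-1]
    rgb = 1.0 / (1.0 + np.exp(-(h @ W + b)))
    return rgb.reshape(height, width, 3)

# Random field weights purely for illustration (not a trained model).
rng = np.random.default_rng(0)
sizes = [2, 64, 64, 3]
params = [(rng.normal(scale=1.0, size=(m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

# The same field, rendered at two different resolutions.
low = render_neural_field(params, 32, 32)
high = render_neural_field(params, 256, 256)
```

Because the field is queried at normalized pixel centers, the low- and high-resolution renders sample the same underlying continuous image, which is what lets a diffusion model over such fields be trained on mixed-resolution data.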
