

Rethinking the Objectives of Vector-Quantized Tokenizers for Image Synthesis

Yuchao Gu · Xintao Wang · Yixiao Ge · Ying Shan · Mike Zheng Shou

Arch 4A-E Poster #275
Wed 19 Jun 5 p.m. PDT — 6:30 p.m. PDT


Vector-Quantized (VQ-based) generative models usually consist of two basic components, i.e., a VQ tokenizer and a generative transformer. Prior research focuses on improving the reconstruction fidelity of VQ tokenizers but rarely examines how improved reconstruction affects the generation ability of generative transformers. In this paper, we find that improving the reconstruction fidelity of VQ tokenizers does not necessarily improve generation. Instead, learning to compress semantic features within VQ tokenizers significantly improves generative transformers' ability to capture textures and structures. We thus highlight two competing objectives of VQ tokenizers for image synthesis: semantic compression and details preservation. Unlike previous work that prioritizes better details preservation, we propose Semantic-Quantized GAN (SeQ-GAN) with two learning phases to balance the two objectives. In the first phase, we propose a semantic-enhanced perceptual loss for better semantic compression. In the second phase, we fix the encoder and codebook but finetune the decoder to achieve better details preservation. SeQ-GAN significantly improves VQ-based generative models for both unconditional and conditional image generation. Specifically, it achieves a Fréchet Inception Distance (FID) of 6.25 and an Inception Score (IS) of 140.9 on 256×256 ImageNet generation, a remarkable improvement over ViT-VQGAN, which obtains 11.2 FID and 97.2 IS.
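For readers unfamiliar with VQ tokenizers, the quantization step they share can be sketched as follows: an encoder feature vector is replaced by its nearest entry in a learned codebook, and the resulting discrete indices are what the generative transformer models. This is a minimal NumPy illustration of that nearest-neighbor lookup only, not SeQ-GAN's actual tokenizer or training losses; the array shapes and the `quantize` helper are illustrative assumptions.

```python
import numpy as np

def quantize(z, codebook):
    """Map each encoder feature vector to its nearest codebook entry.

    z:        (N, D) array of encoder output features
    codebook: (K, D) array of code vectors
    Returns the quantized features (N, D) and the chosen indices (N,).
    """
    # Squared Euclidean distance from every feature to every code,
    # computed via broadcasting: result has shape (N, K).
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)  # nearest-code index per feature
    return codebook[idx], idx

# Toy example: 4 features, a codebook of 8 entries, 2-D embeddings.
rng = np.random.default_rng(0)
z = rng.normal(size=(4, 2))
cb = rng.normal(size=(8, 2))
zq, idx = quantize(z, cb)
```

In a full tokenizer, the sequence of indices `idx` is the discrete "token" representation of the image, and the decoder maps the quantized features `zq` back to pixels; SeQ-GAN's second phase finetunes only that decoder while the encoder and codebook stay fixed.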
