

Poster

Parallelized Autoregressive Visual Generation

Yuqing Wang · Shuhuai Ren · Zhijie Lin · Yujin Han · Haoyuan Guo · Zhenheng Yang · Difan Zou · Jiashi Feng · Xihui Liu


Abstract: Autoregressive models have emerged as a powerful approach for visual generation but suffer from slow inference speed due to their sequential token-by-token prediction process. In this paper, we propose a simple yet effective approach for parallelized autoregressive visual generation that improves generation efficiency while preserving the advantages of autoregressive modeling. Our key insight is that the feasibility of parallel generation is closely tied to visual token dependencies: while tokens with weak dependencies can be generated in parallel, adjacent tokens with strong dependencies are hard to generate together, as independent sampling of strongly correlated tokens may lead to inconsistent decisions. Based on this observation, we develop a parallel generation strategy that generates distant tokens with weak dependencies in parallel while maintaining sequential generation for strongly dependent local tokens. Specifically, we first generate initial tokens in each region sequentially to establish the global structure, then enable parallel generation across distant regions while maintaining sequential generation within each region. Our approach can be seamlessly integrated into standard autoregressive models without modifying the architecture or tokenizer. Experiments on ImageNet and UCF-101 demonstrate that our method achieves a 3.6× speedup with comparable quality and up to 9.5× speedup with minimal quality degradation across both image and video generation tasks. We hope this work will inspire future research in efficient visual generation and unified autoregressive modeling.
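The following is a minimal sketch of the kind of generation schedule the abstract describes, not the authors' implementation: the token grid is split into regions, the first token of every region is sampled one at a time to fix the global structure, and the remaining tokens are then sampled in parallel across regions while staying sequential within each region. The grid size, region layout, number of initial sequential tokens, and raster ordering inside a region are illustrative assumptions.

def parallel_generation_schedule(H=16, W=16, Rh=2, Rw=2, n_init=1):
    """Return a list of steps; each step is a list of (row, col) token positions
    sampled together. Tokens within one region are emitted sequentially, while
    corresponding positions in distant regions are emitted in parallel.
    All parameter defaults are illustrative assumptions, not the paper's settings."""
    rh, rw = H // Rh, W // Rw                       # tokens per region along each axis
    regions = [(i, j) for i in range(Rh) for j in range(Rw)]

    # Assumed raster order of token offsets inside a single region.
    offsets = [(r, c) for r in range(rh) for c in range(rw)]

    steps = []
    # Phase 1: the first n_init tokens of every region are generated one by one
    # (fully sequential) to establish the global structure.
    for k in range(n_init):
        for (i, j) in regions:
            r, c = offsets[k]
            steps.append([(i * rh + r, j * rw + c)])

    # Phase 2: remaining tokens are generated in parallel across regions,
    # but still sequentially within each region.
    for k in range(n_init, len(offsets)):
        r, c = offsets[k]
        steps.append([(i * rh + r, j * rw + c) for (i, j) in regions])
    return steps


if __name__ == "__main__":
    sched = parallel_generation_schedule()
    # 256 tokens emitted in 67 forward steps instead of 256, i.e. roughly a 3.8x
    # reduction in decoding steps under these illustrative settings.
    print(sum(len(s) for s in sched), "tokens in", len(sched), "steps")

Under this sketch the number of decoding steps drops from H*W to roughly Rh*Rw*n_init + (H*W)/(Rh*Rw), which is where a speedup in the few-times range comes from; the actual trade-off between parallelism and quality reported in the paper depends on the model and region configuration.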
