Poster
DiG: Scalable and Efficient Diffusion Models with Gated Linear Attention
Lianghui Zhu · Zilong Huang · Bencheng Liao · Jun Hao Liew · Hanshu Yan · Jiashi Feng · Xinggang Wang
Abstract:
Diffusion models with large-scale pre-training have achieved significant success in visual content generation, most notably with Diffusion Transformers (DiT). However, DiT models suffer from the quadratic complexity of self-attention, which becomes a bottleneck when handling long sequences. In this paper, we incorporate the sub-quadratic modeling capability of Gated Linear Attention (GLA) into the 2D diffusion backbone. Specifically, we introduce Diffusion Gated Linear Attention Transformers (DiG), a simple, adoptable solution with minimal parameter overhead. We offer two variants, i.e., a plain and a U-shaped architecture, both showing superior efficiency and competitive effectiveness. In addition to outperforming DiT and other sub-quadratic-time diffusion models at standard resolutions, DiG demonstrates greater efficiency than these methods as the resolution increases: DiG-S/2 is faster and uses less GPU memory than DiT-S/2 at high resolutions, and DiG-XL/2 is faster than the Mamba-based model and than DiT with FlashAttention-2 at large resolutions.
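
To illustrate the sub-quadratic mechanism the abstract refers to, below is a minimal sketch (PyTorch, not the authors' released code) of the gated linear attention recurrence that DiG builds on. The function name gated_linear_attention, the per-key gate alpha, and the tensor shapes are illustrative assumptions; the sketch follows the general GLA formulation S_t = diag(alpha_t) S_{t-1} + k_t^T v_t, o_t = q_t S_t rather than the paper's optimized kernels.

# Minimal sketch of a gated linear attention recurrence (assumed form,
# not the authors' implementation). alpha is a data-dependent gate in (0, 1).
import torch

def gated_linear_attention(q, k, v, alpha):
    """q, k, alpha: (B, T, d_k); v: (B, T, d_v). Returns (B, T, d_v).

    Recurrent form: S_t = diag(alpha_t) @ S_{t-1} + k_t^T v_t,  o_t = q_t @ S_t.
    """
    B, T, d_k = q.shape
    d_v = v.shape[-1]
    S = q.new_zeros(B, d_k, d_v)          # fixed-size recurrent state
    outputs = []
    for t in range(T):
        # Decay the state with the gate, then add the new key-value outer product.
        S = alpha[:, t].unsqueeze(-1) * S + k[:, t].unsqueeze(-1) * v[:, t].unsqueeze(1)
        outputs.append(torch.bmm(q[:, t].unsqueeze(1), S))  # (B, 1, d_v)
    return torch.cat(outputs, dim=1)

if __name__ == "__main__":
    B, T, d_k, d_v = 2, 16, 32, 32        # T would be the number of image tokens
    q, k = torch.randn(B, T, d_k), torch.randn(B, T, d_k)
    v = torch.randn(B, T, d_v)
    alpha = torch.sigmoid(torch.randn(B, T, d_k))  # gates in (0, 1)
    print(gated_linear_attention(q, k, v, alpha).shape)  # torch.Size([2, 16, 32])

Because the state S has a fixed size, the recurrence runs in time linear in the number of tokens, which is the source of the efficiency advantage over quadratic self-attention as image resolution (and hence sequence length) grows.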