

Poster

Attend to Not Attended: Structure-then-Detail Token Merging for Post-training DiT Acceleration

Haipeng Fang · Sheng Tang · Juan Cao · Enshuo Zhang · Fan Tang · Tong-yee Lee


Abstract:

Diffusion transformers have shown exceptional performance in visual generation but incur high computational costs. Token reduction techniques, which compress models by sharing the denoising computation among similar tokens, have been introduced to address this. However, existing approaches neglect the denoising priors of diffusion models, leading to suboptimal acceleration and diminished image quality. This study proposes a novel concept: attend to pruning feature redundancies in areas not attended to by the diffusion process. We analyze the location and degree of feature redundancy based on the structure-then-detail denoising prior. Building on this, we introduce SDTM, a structure-then-detail token merging approach that dynamically compresses feature redundancies. Specifically, we design dynamic visual token merging, compression ratio adjustment, and prompt reweighting for the different stages. Applied in a post-training manner, the proposed method can be integrated seamlessly into the DiT architecture. Extensive experiments across various backbones, schedulers, and datasets showcase the superiority of our method, which achieves 1.55× acceleration with negligible impact on image quality.
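The abstract does not include code, but the general idea it builds on, merging similar tokens with a merge ratio that varies across denoising stages, can be illustrated with a minimal PyTorch sketch. This is an assumption-laden illustration of similarity-based bipartite token merging, not the SDTM implementation; the function name `merge_similar_tokens` and the `merge_ratio` parameter are hypothetical.

```python
import torch
import torch.nn.functional as F


def merge_similar_tokens(x: torch.Tensor, merge_ratio: float) -> torch.Tensor:
    """Illustrative (not SDTM's) similarity-based token merging.

    x: (batch, num_tokens, dim) token features from a DiT block.
    merge_ratio: fraction of tokens to remove at this denoising stage.
    """
    b, n, d = x.shape

    # Bipartite matching: split tokens into two alternating sets (src, dst).
    src, dst = x[:, ::2, :], x[:, 1::2, :]
    r = min(int(n * merge_ratio), src.shape[1])
    if r <= 0:
        return x

    # Cosine similarity between every src token and every dst token.
    scores = F.normalize(src, dim=-1) @ F.normalize(dst, dim=-1).transpose(-1, -2)

    # For each src token, find its best dst match; treat the r src tokens
    # with the highest best-match similarity as the most redundant ones.
    best_score, best_dst = scores.max(dim=-1)            # both (b, n_src)
    order = best_score.argsort(dim=-1, descending=True)
    merge_idx, keep_idx = order[:, :r], order[:, r:]

    # Average each merged src token into its matched dst token.
    dst = dst.clone()
    src_merge = torch.gather(src, 1, merge_idx.unsqueeze(-1).expand(-1, -1, d))
    dst_match = torch.gather(best_dst, 1, merge_idx)
    dst.scatter_reduce_(1, dst_match.unsqueeze(-1).expand(-1, -1, d),
                        src_merge, reduce="mean", include_self=True)

    # Unmerged src tokens plus updated dst tokens form the compressed sequence.
    src_keep = torch.gather(src, 1, keep_idx.unsqueeze(-1).expand(-1, -1, d))
    return torch.cat([src_keep, dst], dim=1)


# Hypothetical stage schedule: merge more aggressively during early,
# structure-focused steps and more conservatively during late, detail-focused steps.
x = torch.randn(2, 1024, 1152)
early = merge_similar_tokens(x, merge_ratio=0.5)
late = merge_similar_tokens(x, merge_ratio=0.1)
```

The stage-dependent `merge_ratio` in the usage example only gestures at the paper's structure-then-detail scheduling; how SDTM actually adjusts the compression ratio and reweights prompts per stage is described in the paper itself.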
