LESA: Learnable Stage-Aware Predictors for Diffusion Model Acceleration
Peiliang Cai ⋅ Jiacheng Liu ⋅ Haowen Xu ⋅ Xinyu Wang ⋅ Chang Zou ⋅ Linfeng Zhang
Abstract
Diffusion models have achieved remarkable success in image and video generation. However, the high computational cost of Diffusion Transformers (DiTs) poses a significant challenge to their practical deployment. While feature caching is a promising acceleration strategy, existing methods based on simple reuse or training-free forecasting struggle to adapt to the complex, stage-dependent dynamics of the diffusion process, often degrading quality and drifting from the standard denoising trajectory. To address this, we propose a \textbf{LE}arnable \textbf{S}tage-\textbf{A}ware (\textbf{LESA}) predictor framework trained in two stages. Our approach leverages a Kolmogorov–Arnold Network (KAN) to accurately learn temporal feature mappings from data. We further introduce a multi-stage, multi-expert architecture that assigns specialized predictors to different noise-level stages, enabling more precise and robust feature forecasting. Extensive experiments demonstrate that LESA achieves 5.00$\times$ acceleration on FLUX.1-dev with minimal quality degradation (a 1.0\% drop), a 6.25$\times$ speedup on Qwen-Image with a 20.2\% quality improvement over the previous state of the art (TaylorSeer), and 5.00$\times$ acceleration on HunyuanVideo with a 24.7\% PSNR improvement over TaylorSeer. State-of-the-art results on both text-to-image and text-to-video synthesis validate the effectiveness and generalization of our training-based framework across different models. Our code is included in the supplementary materials and will be released on GitHub.
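To make the stage-aware multi-expert idea concrete, below is a minimal sketch based only on the abstract's description. All names (`StageAwarePredictor`, `num_stages`, `feature_dim`) are hypothetical, the stage partition is assumed to be uniform over timesteps, and small MLPs stand in for the KAN predictors, whose exact form the abstract does not specify.

```python
# Hypothetical sketch: route each denoising timestep to the expert assigned to
# its noise-level stage, and forecast the feature the full DiT would produce.
import torch
import torch.nn as nn

class StageAwarePredictor(nn.Module):
    """Stage-aware multi-expert feature forecaster (illustrative only)."""

    def __init__(self, feature_dim: int, num_stages: int = 3, num_timesteps: int = 1000):
        super().__init__()
        self.num_stages = num_stages
        self.num_timesteps = num_timesteps
        # One expert per noise-level stage; an MLP stands in for the learnable
        # KAN mapping described in the paper.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(feature_dim + 1, feature_dim),  # +1 for normalized timestep
                nn.SiLU(),
                nn.Linear(feature_dim, feature_dim),
            )
            for _ in range(num_stages)
        )

    def stage_of(self, t: int) -> int:
        # Assumption: evenly partition the denoising trajectory into stages.
        return min(t * self.num_stages // self.num_timesteps, self.num_stages - 1)

    def forward(self, cached_feat: torch.Tensor, t: int) -> torch.Tensor:
        # Condition the stage's expert on the normalized timestep and predict
        # the feature at step t from the cached feature.
        t_embed = torch.full(
            cached_feat.shape[:-1] + (1,), t / self.num_timesteps,
            device=cached_feat.device, dtype=cached_feat.dtype,
        )
        expert = self.experts[self.stage_of(t)]
        return expert(torch.cat([cached_feat, t_embed], dim=-1))

# Usage: at cache-hit steps, call the predictor instead of the full DiT block.
pred = StageAwarePredictor(feature_dim=64)
feat = torch.randn(2, 16, 64)   # (batch, tokens, feature_dim)
forecast = pred(feat, t=700)    # forecast the feature at timestep 700
print(forecast.shape)           # torch.Size([2, 16, 64])
```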