Accelerating Diffusion-based Video Editing via Heterogeneous Caching: Beyond Full Computing at Sampled Denoising Timestep
Tianyi Liu ⋅ Ye Lu ⋅ Linfeng Zhang ⋅ Chen Cai ⋅ Jianjun Gao ⋅ Yi Wang ⋅ Kim-Hui Yap ⋅ Lap-Pui Chau
Abstract
Diffusion-based video editing has emerged as an important paradigm for high-quality and flexible content generation. However, despite their generality and strong modeling capacity, diffusion transformers remain computationally expensive due to the iterative denoising process, posing challenges for practical deployment. Existing video diffusion acceleration methods primarily exploit denoising timestep-level feature reuse, which mitigates redundancy in the denoising process but overlooks the architectural redundancy within the Diffusion Transformer (DiT) itself: many attention operations over spatio-temporal tokens are executed redundantly, contributing little to no incremental value to the model's output. This work introduces HetCache, a training-free diffusion acceleration framework designed to exploit the inherent heterogeneity in diffusion transformers and video editing tasks. Instead of uniformly reusing or randomly sampling tokens, HetCache assesses the contextual relevance and interaction strength among different types of tokens at designated computation steps. Guided by spatial priors, it divides the spatio-temporal tokens in the DiT model into context and generative tokens, and selectively caches the context tokens that are most strongly correlated with, and most semantically representative of, the generative ones. This strategy effectively reduces redundant attention operations while maintaining editing consistency and fidelity. Experiments show that HetCache achieves noticeable acceleration, including a 2.67$\times$ latency speedup and a substantial FLOPs reduction over commonly used foundation models, with negligible degradation in editing quality.
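To make the described partition-and-cache idea concrete, here is a minimal illustrative sketch (not the authors' implementation) of how spatio-temporal tokens might be split into context and generative tokens from a spatial prior, with only the most relevant context tokens retained in a cache; names such as `hetcache_attention_step`, `edit_mask`, and `top_k` are hypothetical:

```python
import torch

def hetcache_attention_step(tokens, edit_mask, cache, step, compute_steps, top_k=0.3):
    """Illustrative sketch of heterogeneous token caching (assumed behavior, not official code).

    tokens:        (B, N, D) spatio-temporal tokens from a DiT block
    edit_mask:     (N,) bool, True for generative tokens (edited region) derived
                   from a spatial prior; False for context (background) tokens
    cache:         dict holding cached context-token features from earlier steps
    compute_steps: set of timesteps at which full computation is performed
    """
    gen_tokens = tokens[:, edit_mask]      # tokens in the edited region
    ctx_tokens = tokens[:, ~edit_mask]     # background / context tokens

    if step in compute_steps:
        # Full step: estimate how strongly each context token interacts with the
        # generative tokens (a simple correlation proxy for attention strength).
        sim = torch.einsum("bgd,bcd->bgc", gen_tokens, ctx_tokens)  # (B, G, C)
        relevance = sim.abs().mean(dim=(0, 1))                       # (C,)
        k = max(1, int(top_k * relevance.numel()))
        keep_idx = relevance.topk(k).indices
        # Cache only the most relevant, representative context tokens.
        cache["ctx_idx"] = keep_idx
        cache["ctx_feat"] = ctx_tokens[:, keep_idx].detach()
        return tokens  # full attention would run over all tokens at this step
    else:
        # Cached step: skip recomputation for context tokens and attend only over
        # the generative tokens plus the cached context subset.
        reduced = torch.cat([gen_tokens, cache["ctx_feat"]], dim=1)
        return reduced  # attention runs on this much smaller token set
```

The sketch only illustrates the selection logic; in practice the retained context features would be fed back into the block's attention and the outputs scattered back to their original token positions.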