FlashVSR: Towards Real-Time Diffusion-Based Streaming Video Super-Resolution
Abstract
Diffusion models have recently advanced video restoration, but applying them to real-world and AIGC-generated video super-resolution (VSR) remains challenging due to high latency, prohibitive computation, and poor generalization to ultra-high resolutions. Our goal in this work is to make diffusion-based VSR practical by achieving efficiency, scalability, and near real-time performance. To this end, we propose FlashVSR, the first diffusion-based one-step streaming framework for efficient video super-resolution. FlashVSR runs at approximately 17 FPS for 768×1408 videos on a single A100 GPU by combining three complementary innovations: (i) a train-friendly three-stage distillation pipeline that enables streaming super-resolution, (ii) locality-constrained sparse attention that reduces redundant computation while bridging the train–test resolution gap, and (iii) a tiny conditional decoder that accelerates reconstruction without sacrificing quality. To support large-scale training, we also construct VSR-120K, a new dataset containing 120K videos and 180K images. Extensive experiments demonstrate that FlashVSR scales reliably to ultra-high resolutions and achieves state-of-the-art performance with up to approximately 12× speed-up over prior one-step diffusion-based VSR models. We will release the code, pretrained models, and dataset to foster future research in efficient diffusion-based video super-resolution.
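To make the locality-constrained sparse attention idea concrete, the sketch below shows a toy version of the general principle: each query token attends only to key tokens within a fixed local window, so attention cost stays bounded as resolution grows. This is a hypothetical illustration of windowed attention in general (the function name, window parameter, and dense-mask implementation are our assumptions), not FlashVSR's actual sparse kernel, which the paper describes only at a high level here.

```python
import numpy as np

def local_window_attention(q, k, v, window: int) -> np.ndarray:
    """Toy locality-constrained attention: query i attends only to
    keys j with |i - j| <= window. Dense mask for clarity; a real
    implementation would skip the masked computation entirely."""
    n, d = q.shape
    scores = q @ k.T / np.sqrt(d)                         # (n, n) similarity scores
    idx = np.arange(n)
    outside = np.abs(idx[:, None] - idx[None, :]) > window
    scores[outside] = -np.inf                             # block attention outside the window
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)         # row-wise softmax
    return weights @ v

rng = np.random.default_rng(0)
n, d = 8, 4
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
out = local_window_attention(q, k, v, window=2)
print(out.shape)  # (8, 4)
```

With a window of 2, each of the 8 query positions mixes at most 5 value rows instead of all 8; at video-scale token counts this is where the computational savings come from.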