Poster
PatchVSR: Breaking Video Diffusion Resolution Limits with Patch-wise Video Super-Resolution
Shian Du · Menghan Xia · Chang Liu · Xintao Wang · Jing Wang · Pengfei Wan · Di ZHANG · Xiangyang Ji
Abstract:
Pre-trained video generation models hold great potential for generative video super-resolution (VSR). However, adapting them for full-size VSR, as most existing methods do, suffers from unnecessarily intensive full-attention computation and a fixed output resolution. To overcome these limitations, we make the first exploration into utilizing video diffusion priors for patch-wise VSR. This is non-trivial because pre-trained video diffusion models are not natively designed for patch-level detail generation. To mitigate this challenge, we propose an innovative approach, called PatchVSR, which integrates a dual-stream adapter for conditional guidance. The patch branch extracts features from input patches to maintain content fidelity, while the global branch extracts context features from the resized full video to bridge the generation gap caused by the incomplete semantics of patches. In particular, we also inject the patch's location information into the model to better contextualize patch synthesis within the global video frame. Experiments demonstrate that our method can synthesize high-fidelity, high-resolution details at the patch level. A tailor-made multi-patch joint modulation is proposed to ensure visual consistency across individually enhanced patches. Thanks to the flexibility of our patch-based paradigm, we achieve highly competitive 4K VSR based on a 512×512 resolution base model, with extremely high efficiency.
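To make the dual-stream conditioning concrete, below is a minimal PyTorch-style sketch of the idea as the abstract describes it: a patch branch for content fidelity, a global branch over the resized full frame for context, and an embedding of the patch's location, fused into conditioning features. All module names, shapes, and the additive fusion scheme are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class DualStreamAdapter(nn.Module):
    """Illustrative sketch: fuse patch features (fidelity) with
    resized-full-frame features (context) and a patch-location embedding."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Patch branch: preserves fine content from the low-res input patch.
        self.patch_branch = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.SiLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Global branch: extracts semantic context from the resized full frame.
        self.global_branch = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.SiLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.AdaptiveAvgPool2d(1),  # pool to one context vector per frame
        )
        # Location embedding: normalized (x, y, w, h) of the patch in the frame.
        self.loc_embed = nn.Linear(4, channels)

    def forward(self, patch, full_frame, loc):
        # patch:      (B, 3, h, w) low-res input patch
        # full_frame: (B, 3, H, W) resized full video frame
        # loc:        (B, 4) normalized patch box (x, y, w, h)
        p = self.patch_branch(patch)              # (B, C, h, w)
        g = self.global_branch(full_frame)        # (B, C, 1, 1)
        l = self.loc_embed(loc)[..., None, None]  # (B, C, 1, 1)
        # Condition patch features on global context and location via
        # simple additive modulation (an assumed fusion choice).
        return p + g + l

# Usage: produce conditioning features for a batch of two patches.
adapter = DualStreamAdapter()
cond = adapter(torch.randn(2, 3, 64, 64),   # patches
               torch.randn(2, 3, 64, 64),   # resized full frames
               torch.rand(2, 4))            # normalized locations
print(cond.shape)  # torch.Size([2, 64, 64, 64])

In a full system these conditioning features would guide the frozen video diffusion backbone; how PatchVSR injects them, and how its multi-patch joint modulation enforces cross-patch consistency, is specified in the paper rather than this sketch.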