

Poster

Efficient Video Super-Resolution for Real-time Rendering with Decoupled G-buffer Guidance

Mingjun Zheng · Long Sun · Jiangxin Dong · Jinshan Pan


Abstract:

Latency is a key constraint for real-time rendering applications, making super-resolution techniques increasingly popular for accelerating the rendering process. In contrast to existing methods that indiscriminately concatenate low-resolution frames and G-buffers as input, we develop an asymmetric UNet-based super-resolution network with decoupled G-buffer guidance, dubbed RDG, to facilitate spatial and temporal feature exploration while minimizing performance overhead and latency. We first propose a dynamic feature modulator (DFM) that selectively encodes spatial information to capture precise structural information. We then incorporate auxiliary G-buffer information to guide the decoder toward detail-rich, temporally stable results. Specifically, we adopt a high-frequency feature booster (HFB) to adaptively transfer high-frequency information from the normal and bidirectional reflectance distribution function (BRDF) components of the G-buffer, enhancing the details of the generated results. To further improve temporal stability, we design a cross-frame temporal refiner (CTR) with depth and motion-vector constraints to aggregate the previous and current frames. Extensive experimental results show that the proposed method generates high-quality, temporally stable results for real-time rendering. The proposed RDG-s produces 1080P rendering results on an RTX 3090 GPU at 126 FPS.
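
To make the decoupled-guidance idea concrete, below is a minimal PyTorch sketch of a decoder block that injects encoded G-buffer features (e.g., normals/BRDF) into the decoder rather than concatenating raw G-buffers at the network input. The abstract does not specify the fusion mechanism, so the FiLM-style scale/shift modulation, module names, and channel sizes here are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of G-buffer-guided decoding; all names and the fusion
# scheme are assumptions for illustration, not the paper's actual design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GBufferGuidedDecoderBlock(nn.Module):
    """Decoder block that injects high-frequency G-buffer guidance
    (e.g., normal/BRDF features) instead of input-level concatenation."""
    def __init__(self, channels: int):
        super().__init__()
        self.gbuf_proj = nn.Conv2d(channels, channels, 3, padding=1)
        self.modulate = nn.Conv2d(channels * 2, channels * 2, 1)  # scale + shift
        self.body = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, feat: torch.Tensor, gbuf_feat: torch.Tensor) -> torch.Tensor:
        # Align encoded G-buffer features to the decoder resolution.
        g = F.interpolate(self.gbuf_proj(gbuf_feat), size=feat.shape[-2:],
                          mode="bilinear", align_corners=False)
        # Predict per-pixel scale/shift from guidance + features (FiLM-style).
        scale, shift = self.modulate(torch.cat([g, feat], dim=1)).chunk(2, dim=1)
        return feat + self.body(feat * torch.sigmoid(scale) + shift)

# Example: 64-channel decoder features guided by full-resolution G-buffer features.
block = GBufferGuidedDecoderBlock(64)
feat = torch.randn(1, 64, 270, 480)    # low-resolution decoder features
gbuf = torch.randn(1, 64, 1080, 1920)  # encoded normal/BRDF guidance
out = block(feat, gbuf)
print(out.shape)  # torch.Size([1, 64, 270, 480])
```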
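The cross-frame temporal refiner aggregates the previous and current frames under depth and motion-vector constraints. A common way to realize such constraints in real-time rendering is to backward-warp history with screen-space motion vectors and reject pixels whose warped depth disagrees with the current depth (likely disocclusions); the sketch below shows that generic pattern. The warping, the depth test, the blend weight, and the threshold `tau` are all assumptions, since the paper's exact formulation is not given in the abstract.

```python
# Hedged sketch of motion-vector warping with a depth-consistency check;
# the disocclusion test and its threshold are illustrative assumptions.
import torch
import torch.nn.functional as F

def warp_with_motion_vectors(prev: torch.Tensor, mv: torch.Tensor) -> torch.Tensor:
    """Backward-warp the previous frame using screen-space motion vectors.

    prev: (B, C, H, W) previous frame or features
    mv:   (B, 2, H, W) motion vectors in pixels (current -> previous)
    """
    b, _, h, w = mv.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack([xs, ys], dim=0).float().to(mv.device)  # (2, H, W)
    coords = base.unsqueeze(0) + mv                            # (B, 2, H, W)
    # Normalize to [-1, 1] for grid_sample (align_corners=True convention).
    gx = coords[:, 0] / (w - 1) * 2 - 1
    gy = coords[:, 1] / (h - 1) * 2 - 1
    grid = torch.stack([gx, gy], dim=-1)                       # (B, H, W, 2)
    return F.grid_sample(prev, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

def aggregate(cur, prev, mv, depth_cur, depth_prev, tau=0.05):
    """Blend warped history with the current frame, rejecting pixels whose
    warped depth disagrees with the current depth (likely disocclusions)."""
    prev_w = warp_with_motion_vectors(prev, mv)
    depth_w = warp_with_motion_vectors(depth_prev, mv)
    valid = (depth_w - depth_cur).abs() < tau   # (B, 1, H, W) validity mask
    return torch.where(valid, 0.5 * cur + 0.5 * prev_w, cur)
```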
