Joint Video Multi-Frame Interpolation and Deblurring Under Unknown Exposure Time

Wei Shang · Dongwei Ren · Yi Yang · Hongzhi Zhang · Kede Ma · Wangmeng Zuo

West Building Exhibit Halls ABC 151


Natural videos captured by consumer cameras often suffer from low frame rate and motion blur due to a combination of dynamic scene complexity, lens and sensor imperfections, and less-than-ideal exposure settings. As a result, computational methods that jointly perform video frame interpolation and deblurring have begun to emerge, but they rest on the unrealistic assumption that the exposure time is known and fixed. In this work, we aim for a more realistic and challenging task: joint video multi-frame interpolation and deblurring under unknown exposure time. Toward this goal, we first adopt a variant of supervised contrastive learning to construct an exposure-aware representation from input blurred frames. We then train two U-Nets for intra-motion and inter-motion analysis, respectively, adapting them to the learned exposure representation via gain tuning. We finally build our video reconstruction network upon the exposure and motion representations through progressive exposure-adaptive convolution and motion refinement. Extensive experiments on both simulated and real-world datasets show that our method achieves notable performance gains over the state of the art on the joint video ×8 interpolation and deblurring task. Moreover, on the seemingly implausible ×16 interpolation task, our method outperforms existing methods by more than 1.5 dB in terms of PSNR.
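The "gain tuning" step can be pictured as channel-wise feature modulation conditioned on the learned exposure representation, in the spirit of FiLM-style conditioning. The sketch below is an assumption for illustration only: the function name `gain_tuning`, the linear gain head, and the tanh parameterization are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gain_tuning(features, exposure_embed, W, b):
    """Hypothetical channel-wise gain tuning (a sketch, not the authors' exact layer).

    features:       (C, H, W) feature map from a U-Net stage
    exposure_embed: (D,) exposure-aware representation of the blurred input
    W, b:           parameters of a tiny linear head predicting one gain per channel
    """
    # Predict a per-channel gain from the exposure embedding; tanh keeps
    # the gain in (0, 2), i.e., a bounded perturbation around identity.
    gains = 1.0 + np.tanh(W @ exposure_embed + b)
    # Broadcast the (C,) gains over the spatial dimensions.
    return features * gains[:, None, None]

C, D = 8, 4
feat = rng.standard_normal((C, 16, 16))
emb = rng.standard_normal(D)
W = 0.1 * rng.standard_normal((C, D))
b = np.zeros(C)
out = gain_tuning(feat, emb, W, b)
```

Conditioning each analysis branch this way lets the same motion network adapt its response to different (unknown) exposure times without retraining per setting.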
