Poster
SpatialDreamer: Self-supervised Stereo Video Synthesis from Monocular Input
Zhen Lv · Yangqi Long · Congzhentao Huang · Cao Li · Chengfei Lv · Hao Ren · Dian Zheng
Abstract:
Stereo video synthesis from a monocular input is a demanding task in the fields of spatial computing and virtual reality. The main challenges of this task lie in the insufficiency of high-quality paired stereo videos for training and the difficulty of maintaining spatio-temporal consistency between frames. Existing methods mainly handle these problems by directly applying novel view synthesis methods to video, an approach that is naturally unsuitable for this setting. In this paper, we introduce a novel self-supervised stereo video synthesis paradigm via a video diffusion model, termed SpatialDreamer, which meets these challenges head-on. First, to address the insufficiency of stereo video data, we propose a Depth based Video Generation module (DVG), which employs a forward-backward rendering mechanism to generate paired videos with geometric and temporal priors. Leveraging data generated by DVG, we propose RefinerNet along with a self-supervised synthetic framework designed to facilitate efficient and dedicated training. More importantly, we devise a consistency control module, which consists of a metric of stereo deviation strength and a Temporal Interaction Learning module to ensure geometric and temporal consistency, respectively. We evaluate the proposed method against various benchmark methods, and the results showcase its superior performance.
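To give a concrete sense of the depth-based rendering step the abstract describes, the sketch below shows a minimal forward-warping routine that splats a left frame into a hypothetical right view using per-pixel depth. This is an illustrative assumption, not the authors' DVG implementation: the function name, the baseline and focal-length values, and the zero-filled hole handling are all hypothetical choices for demonstration.

```python
import numpy as np

def forward_warp_right_view(left_img, depth, baseline=0.065, focal=500.0):
    """Forward-warp a left frame to a hypothetical right view using per-pixel depth.

    Disparity is computed as (baseline * focal) / depth; each left pixel is
    splatted to its shifted horizontal location. Pixels with no source remain
    zero (holes), which a learned refiner could later fill.
    NOTE: illustrative sketch only, not the DVG module from the paper.
    """
    h, w = depth.shape
    right = np.zeros_like(left_img)
    z_buffer = np.full((h, w), np.inf)

    # Standard pinhole-stereo disparity; clip depth to avoid division by zero.
    disparity = (baseline * focal) / np.clip(depth, 1e-6, None)

    for y in range(h):
        for x in range(w):
            x_r = int(round(x - disparity[y, x]))  # target column in the right view
            if 0 <= x_r < w and depth[y, x] < z_buffer[y, x_r]:
                z_buffer[y, x_r] = depth[y, x]     # keep the nearest surface on occlusion
                right[y, x_r] = left_img[y, x]
    return right


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((120, 160, 3)).astype(np.float32)
    depth = rng.uniform(1.0, 10.0, size=(120, 160)).astype(np.float32)
    warped = forward_warp_right_view(img, depth)
    print("unfilled (hole) pixels:", int((warped.sum(axis=-1) == 0).sum()))
```

In a setup like this, the warped view contains disocclusion holes; the abstract's RefinerNet and consistency control module are described as addressing exactly such artifacts and their temporal stability.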