Poster
CamFreeDiff: Camera-free Image to Panorama Generation with Diffusion Model
Xiaoding Yuan · Shitao Tang · Kejie Li · Peng Wang
Abstract:
This paper introduces the Camera-free Diffusion (CamFreeDiff) model for image outpainting from a single camera-free image and a text description. Our method distinguishes itself from existing strategies, such as MVDiffusion, by eliminating the requirement for predefined camera poses. CamFreeDiff seamlessly incorporates a homography-prediction mechanism within the multi-view diffusion framework. The key component of our approach is to formulate camera estimation as directly predicting the homography transformation from the input view to a predefined canonical view. In contrast to a direct two-stage approach of image transformation followed by outpainting, CamFreeDiff uses the predicted homography to establish point-level correspondences between the input view and the target panoramic view. This enables consistency through correspondence-aware attention, which is learned in a fully differentiable manner. Qualitative and quantitative experimental results demonstrate the strong robustness and performance of CamFreeDiff for image outpainting in the challenging setting of camera-free inputs.
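To illustrate the correspondence step described above, the sketch below shows how a predicted 3x3 homography maps pixel coordinates from the input view into a canonical view. This is a minimal, hypothetical helper (the function name `warp_points` and the use of NumPy are assumptions, not the authors' implementation); the actual model predicts the homography and applies such correspondences inside a differentiable attention mechanism.

```python
import numpy as np

def warp_points(H, pts):
    """Map 2D points through a 3x3 homography H.

    pts: (N, 2) array of pixel coordinates in the input view.
    Returns: (N, 2) array of corresponding coordinates in the canonical view.
    """
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # lift to homogeneous coords
    mapped = pts_h @ H.T                               # apply the homography
    return mapped[:, :2] / mapped[:, 2:3]              # perspective divide

# A pure-translation homography shifts every point by (5, -3).
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[10.0, 20.0], [100.0, 50.0]])
warped = warp_points(H, pts)
```

Point-level correspondences of this form are what allow the correspondence-aware attention to enforce consistency between the input view and the panorama.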