Poster

NeRFDeformer: NeRF Transformation from a Single View via 3D Scene Flows

Zhenggang Tang · Jason Ren · Xiaoming Zhao · Bowen Wen · Jonathan Tremblay · Stan Birchfield · Alexander G. Schwing


Abstract:

We present a method for automatically modifying a NeRF representation based on a single observation of a non-rigidly transformed version of the original scene. Our method defines the transformation as a 3D flow, specifically as a weighted linear blending of rigid transformations of 3D anchor points that are defined on the surface of the scene. To identify anchor points, we introduce a novel correspondence algorithm that first matches RGB-based pairs, then leverages multi-view information and 3D reprojection to robustly filter false positives in two steps. We also introduce a new dataset for exploring the problem of modifying a NeRF scene through a single observation. Our dataset contains 113 scenes leveraging 47 3D assets. We show that our proposed method outperforms NeRF editing methods as well as diffusion-based methods, and we also explore different methods for filtering correspondences.
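The deformation field described in the abstract, a weighted linear blend of per-anchor rigid transformations, can be sketched in a few lines. The code below is illustrative rather than the paper's implementation: the Gaussian RBF weighting, the bandwidth `sigma`, and all function and parameter names are assumptions.

```python
import numpy as np

def blended_scene_flow(points, anchors, rotations, translations, sigma=0.05):
    """Evaluate a 3D flow field as a weighted linear blend of per-anchor
    rigid transformations (a sketch in the spirit of the abstract; the
    RBF weighting and bandwidth `sigma` are illustrative assumptions).

    points:       (N, 3) query points in the original scene.
    anchors:      (M, 3) anchor points on the scene surface.
    rotations:    (M, 3, 3) per-anchor rotation matrices.
    translations: (M, 3) per-anchor translation vectors.
    Returns:      (N, 3) flow vectors mapping each point into the new scene.
    """
    # Distance-based RBF weights, normalized per query point.
    d2 = np.sum((points[:, None, :] - anchors[None, :, :]) ** 2, axis=-1)  # (N, M)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)

    # Apply each anchor's rigid transform to every query point: (N, M, 3).
    transformed = np.einsum('mij,nj->nmi', rotations, points) + translations[None]

    # Blend the rigidly transformed positions and subtract the original point.
    blended = np.einsum('nm,nmi->ni', w, transformed)
    return blended - points
```

A warped sample is then `points + blended_scene_flow(...)`; in a flow-based formulation, rendering the deformed scene amounts to bending sample points through this field before querying the original NeRF.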
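The correspondence filter is described only at a high level (multi-view information plus 3D reprojection). One plausible instantiation of the multi-view step is a 3D consistency check: matches from different rendered views of the original NeRF that land on the same new-view pixel should lift, via the NeRF's depth, to the same surface point. The sketch below encodes that check; the bucketing scheme, the distance threshold, and all names are assumptions, not the paper's two-step algorithm.

```python
import numpy as np

def multiview_consistency_filter(match_pts3d, match_pix, image_size, dist_thresh=0.01):
    """Keep matches whose 3D lifts agree across rendered views.

    match_pts3d: (N, 3) matched points lifted to 3D with the original NeRF's depth.
    match_pix:   (N, 2) corresponding pixel coordinates in the single new view.
    image_size:  (W, H) of the new view, used to bucket pixels.
    Returns:     (N,) boolean mask of matches to keep.
    """
    W, _ = image_size
    # Bucket matches by the new-view pixel they land on.
    keys = (np.round(match_pix[:, 1]).astype(int) * W
            + np.round(match_pix[:, 0]).astype(int))
    keep = np.ones(len(keys), dtype=bool)
    for k in np.unique(keys):
        idx = np.flatnonzero(keys == k)
        if len(idx) < 2:
            continue
        # Matches sharing a new-view pixel should lift to nearby 3D points;
        # drop the ones that scatter beyond the threshold.
        center = match_pts3d[idx].mean(axis=0)
        spread = np.linalg.norm(match_pts3d[idx] - center, axis=1)
        keep[idx] = spread < dist_thresh
    return keep
```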
