

Poster

NeuralDome: A Neural Modeling Pipeline on Multi-View Human-Object Interactions

Juze Zhang · Haimin Luo · Hongdi Yang · Xinru Xu · Qianyang Wu · Ye Shi · Jingyi Yu · Lan Xu · Jingya Wang

West Building Exhibit Halls ABC 058

Abstract:

Humans constantly interact with objects in daily life. Capturing such processes and subsequently conducting visual inference from a fixed viewpoint suffers from occlusions, shape and texture ambiguities, motion, etc. To mitigate these problems, it is essential to build a training dataset that captures free-viewpoint interactions. We construct a dense multi-view dome to acquire a complex human-object interaction dataset, named HODome, which consists of ~71M frames of 10 subjects interacting with 23 objects. To process the HODome dataset, we develop NeuralDome, a layer-wise neural processing pipeline tailored for multi-view video inputs that performs accurate tracking, geometry reconstruction, and free-view rendering for both human subjects and objects. Extensive experiments on the HODome dataset demonstrate the effectiveness of NeuralDome on a variety of inference, modeling, and rendering tasks. Both the dataset and the NeuralDome tools will be disseminated to the community for further development and are available at https://juzezhang.github.io/NeuralDome
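The abstract describes a layer-wise pipeline that processes synchronized multi-view video and handles the human layer and the object layer separately (tracking, geometry reconstruction, free-view rendering). As a rough illustration only, the sketch below shows one plausible way such data and such a loop could be organized; the names (MultiViewFrame, process_frame) and every stage are hypothetical stubs, not the authors' NeuralDome implementation.

```python
# Hypothetical sketch of a layer-wise multi-view processing loop.
# All names and stages are assumptions for illustration; the real
# NeuralDome tools are released at the project page cited above.
from dataclasses import dataclass
from typing import Dict, List

import numpy as np


@dataclass
class MultiViewFrame:
    """One synchronized capture instant from the dome's cameras."""
    images: List[np.ndarray]      # one RGB image per camera view
    intrinsics: List[np.ndarray]  # per-camera 3x3 K matrices
    extrinsics: List[np.ndarray]  # per-camera 4x4 world-to-camera poses


def process_frame(frame: MultiViewFrame) -> Dict[str, dict]:
    """Toy stand-in for a layer-wise pipeline: for each layer (human,
    object), track a pose, reconstruct geometry, and render a novel view.
    Every stage is stubbed with placeholder outputs."""
    results: Dict[str, dict] = {}
    for layer in ("human", "object"):
        pose = np.eye(4)                          # tracking (stub)
        geometry = {"vertices": np.zeros((0, 3))} # reconstruction (stub)
        novel_view = np.zeros_like(frame.images[0])  # free-view rendering (stub)
        results[layer] = {"pose": pose, "geometry": geometry, "render": novel_view}
    return results


if __name__ == "__main__":
    # Fabricated tiny example: 4 camera views of 64x64 RGB frames.
    views = [np.zeros((64, 64, 3), dtype=np.uint8) for _ in range(4)]
    K = [np.eye(3) for _ in range(4)]
    E = [np.eye(4) for _ in range(4)]
    out = process_frame(MultiViewFrame(views, K, E))
    print(out["human"]["render"].shape)  # (64, 64, 3)
```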
