

Poster

On the Importance of Accurate Geometry Data for Dense 3D Vision Tasks

HyunJun Jung · Patrick Ruhkamp · Guangyao Zhai · Nikolas Brasch · Yitong Li · Yannick Verdie · Jifei Song · Yiren Zhou · Anil Armagan · Slobodan Ilic · Aleš Leonardis · Nassir Navab · Benjamin Busam

West Building Exhibit Halls ABC 074

Abstract:

Learning-based methods for dense 3D vision problems typically train on 3D sensor data. Each distance-measurement principle comes with its own advantages and drawbacks, which are rarely compared or discussed in the literature due to a lack of multi-modal datasets. Texture-less regions are problematic for structure from motion and stereo, reflective material poses issues for active sensing, and distances to translucent objects are difficult to measure with existing hardware. Training on inaccurate or corrupt data induces model bias and hampers generalisation capabilities. These effects remain unnoticed if the sensor measurement is treated as ground truth during evaluation. This paper investigates the effect of sensor errors on the dense 3D vision tasks of depth estimation and reconstruction. We rigorously show the significant impact of sensor characteristics on the learned predictions and observe generalisation issues arising from various technologies in everyday household environments. For evaluation, we introduce a carefully designed dataset comprising measurements from commodity sensors, namely D-ToF, I-ToF, passive/active stereo, and monocular RGB+P. Our study quantifies the considerable impact of sensor noise and paves the way towards improved dense vision estimates and targeted data fusion.
