

Poster

ONDA-Pose: Occlusion-Aware Neural Domain Adaptation for Self-Supervised 6D Object Pose Estimation

Tao Tan · Qiulei Dong


Abstract:

Self-supervised 6D object pose estimation has received increasing attention in computer vision recently. Some typical works in the literature attempt to translate synthetic images, whose object pose labels are generated from object CAD models, into the real domain, and then use the translated data for training. However, their performance is generally limited, since (i) a domain gap still exists between the translated images and the real images and (ii) the translated images cannot sufficiently reflect the occlusions present in many real images. To address these problems, we propose an Occlusion-Aware Neural Domain Adaptation method for self-supervised 6D object Pose estimation, called ONDA-Pose. The proposed method comprises three main steps. First, by utilizing both the unlabeled real training images and a CAD model, we learn a CAD-like radiance field for rendering corresponding synthetic images whose textures resemble those generated by the CAD model. Then, a backbone pose estimator trained on synthetic data provides initial pose estimates for the images rendered from the CAD-like radiance field, and these initial object poses are refined by a global object pose refiner to generate pseudo object pose labels. Finally, the backbone pose estimator is further self-supervised to obtain the final pose estimator by jointly utilizing the real images with pseudo object pose labels and the synthetic images rendered from the CAD-like radiance field. Experimental results on three public datasets demonstrate that ONDA-Pose significantly outperforms comparative state-of-the-art methods in most cases.
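
The sketch below is an illustrative outline (not the authors' code) of the three-step pipeline described in the abstract; every function name, signature, and data structure here is a hypothetical placeholder for the paper's components.

```python
# Hypothetical skeleton of the ONDA-Pose pipeline as described in the abstract.
# All callables are assumed placeholders, not the authors' implementation.
from typing import Callable, List, Tuple

import numpy as np

Pose = Tuple[np.ndarray, np.ndarray]  # (3x3 rotation R, 3-vector translation t)


def onda_pose_pipeline(
    real_images: List[np.ndarray],                        # unlabeled real training images
    render_cad_like: Callable[[np.ndarray], np.ndarray],  # Step 1: CAD-like radiance field renderer
    backbone_estimator: Callable[[np.ndarray], Pose],     # pose estimator pretrained on synthetic data
    global_refiner: Callable[[Pose, np.ndarray], Pose],   # Step 2: global object pose refiner
    self_supervise: Callable[
        [List[Tuple[np.ndarray, Pose]], List[np.ndarray]],
        Callable[[np.ndarray], Pose],
    ],                                                     # Step 3: joint self-supervised fine-tuning
) -> Callable[[np.ndarray], Pose]:
    # Step 1: render a CAD-like synthetic counterpart for each real image,
    # using a radiance field fit to the unlabeled real images and the CAD model.
    synthetic_renders = [render_cad_like(img) for img in real_images]

    # Step 2: estimate initial poses on the CAD-like renders, then refine
    # them globally to obtain pseudo pose labels for the real images.
    pseudo_labeled = []
    for img, rendered in zip(real_images, synthetic_renders):
        init_pose = backbone_estimator(rendered)
        refined_pose = global_refiner(init_pose, img)
        pseudo_labeled.append((img, refined_pose))

    # Step 3: fine-tune the backbone estimator jointly on the pseudo-labeled
    # real images and the CAD-like synthetic renders.
    final_estimator = self_supervise(pseudo_labeled, synthetic_renders)
    return final_estimator
```

How each component is realized internally (the radiance field, the refiner, the self-supervision losses) is detailed in the paper itself; the skeleton only reflects the data flow stated above.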
