

Poster

3D Semantic Segmentation in the Wild: Learning Generalized Models for Adverse-Condition Point Clouds

Aoran Xiao · Jiaxing Huang · Weihao Xuan · Ruijie Ren · Kangcheng Liu · Dayan Guan · Abdulmotaleb El Saddik · Shijian Lu · Eric P. Xing

West Building Exhibit Halls ABC 110

Abstract:

Robust point cloud parsing under all-weather conditions is crucial to level-5 autonomy in autonomous driving. However, learning a universal 3D semantic segmentation (3DSS) model has been largely neglected, as most existing benchmarks are dominated by point clouds captured under normal weather. We introduce SemanticSTF, an adverse-weather point cloud dataset that provides dense point-level annotations and enables the study of 3DSS under various adverse weather conditions. We investigate universal 3DSS modeling with two tasks: 1) domain adaptive 3DSS, which adapts a model from normal-weather data to adverse-weather data; and 2) domain generalized 3DSS, which learns a generalizable model from normal-weather data alone. Our studies reveal the challenges existing 3DSS methods face on adverse-weather data, demonstrating the value of SemanticSTF in steering future work along this research direction. In addition, we design a domain randomization technique that alternately randomizes the geometry styles of point clouds and aggregates their encoded embeddings, ultimately yielding a generalizable model that effectively improves 3DSS under various adverse weather conditions. The SemanticSTF dataset and related code are available at https://github.com/xiaoaoran/SemanticSTF.
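To make the high-level idea of the abstract's domain randomization more concrete, below is a minimal, hedged sketch: random geometric perturbations stand in for "geometry style" randomization, and embeddings of several randomized views are averaged. This is not the paper's implementation; the perturbations, encoder, and aggregation are illustrative assumptions, and all function and class names are hypothetical.

```python
# Illustrative sketch only: randomize a point cloud's geometry several times and
# aggregate the resulting embeddings. Jitter, scaling, and dropout are assumed
# stand-ins for the geometry-style randomization described in the abstract.
import torch
import torch.nn as nn


def randomize_geometry(points: torch.Tensor) -> torch.Tensor:
    """Apply a random geometric perturbation to an (N, 3) point cloud."""
    out = points + 0.01 * torch.randn_like(points)       # small coordinate jitter
    out = out * (0.9 + 0.2 * torch.rand(1, 3))           # random anisotropic scale
    keep = torch.rand(out.shape[0]) > 0.05               # random point dropout
    return out[keep]


class TinyPointEncoder(nn.Module):
    """Placeholder per-point MLP encoder with global max pooling (hypothetical)."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        return self.mlp(points).max(dim=0).values         # (dim,) global embedding


def aggregated_embedding(points: torch.Tensor, encoder: nn.Module, n_views: int = 3) -> torch.Tensor:
    """Encode several randomized views of the same cloud and average the embeddings."""
    views = [encoder(randomize_geometry(points)) for _ in range(n_views)]
    return torch.stack(views).mean(dim=0)


if __name__ == "__main__":
    cloud = torch.randn(2048, 3)                          # dummy LiDAR-like point cloud
    emb = aggregated_embedding(cloud, TinyPointEncoder())
    print(emb.shape)                                      # torch.Size([64])
```

The aggregation step reflects the intuition that averaging embeddings across randomized geometric "styles" encourages features that are stable under such variation; the actual method and its training objective are described in the paper and released code.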
