Poster
No Thing, Nothing: Highlighting Safety-Critical Classes for Robust LiDAR Semantic Segmentation
Junsung Park · HwiJeong Lee · Inha Kang · Hyunjung Shim
Existing domain generalization methods for LiDAR semantic segmentation under adverse weather struggle to accurately predict "things" categories compared to "stuff" categories. In typical driving scenes, "things" categories can be dynamic and are associated with higher collision risks, making them crucial for safe navigation and planning. Recognizing the importance of "things" categories, we identify their performance drop as a serious bottleneck in existing approaches. We observe that adverse weather induces both degradation of semantic-level features and corruption of local features, leading to the misprediction of "things" as "stuff". To address semantic-level feature degradation, we bind each point feature to its superclass, preventing "things" classes from being mispredicted as visually dissimilar categories. Additionally, to enhance robustness against local corruption caused by adverse weather, we define each LiDAR beam as a local region and propose a regularization term that aligns clean data with its corrupted counterpart in feature space. Our method achieves state-of-the-art performance, with a +2.6 mIoU gain on the SemanticKITTI-to-SemanticSTF benchmark and a +7.9 mIoU gain on the SemanticPOSS-to-SemanticSTF benchmark. Notably, it improves "things" classes by +4.8 and +7.9 mIoU on the two benchmarks, respectively, highlighting its effectiveness.
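The abstract describes two losses but no code; the following is a minimal PyTorch sketch of both ideas as stated, under assumed tensor layouts. All names here (superclass_binding_loss, beamwise_alignment_loss, class_to_super, beam_ids) are hypothetical illustrations, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

# Hedged sketch: superclass binding + per-beam clean/corrupt alignment.
# Shapes assumed: N points, C fine classes, S superclasses, D feature dims.

def superclass_binding_loss(logits, labels, class_to_super, num_super):
    """Bind each point's prediction to its superclass: aggregate fine-class
    probabilities into superclass probabilities, then apply cross-entropy
    against the superclass label (assumed realization of the binding idea)."""
    probs = F.softmax(logits, dim=-1)                       # (N, C)
    super_probs = torch.zeros(probs.size(0), num_super,
                              device=probs.device)          # (N, S)
    # Sum probabilities of fine classes that share a superclass.
    super_probs.index_add_(1, class_to_super, probs)
    super_labels = class_to_super[labels]                   # (N,)
    return F.nll_loss(torch.log(super_probs + 1e-8), super_labels)

def beamwise_alignment_loss(feat_clean, feat_corrupt, beam_ids, num_beams):
    """Treat each LiDAR beam as a local region: average-pool point features
    per beam and align clean vs. corrupted beam descriptors with L2."""
    d = feat_clean.size(1)

    def beam_pool(feat):
        pooled = torch.zeros(num_beams, d, device=feat.device)
        count = torch.zeros(num_beams, 1, device=feat.device)
        pooled.index_add_(0, beam_ids, feat)                # sum per beam
        count.index_add_(0, beam_ids, torch.ones_like(feat[:, :1]))
        return pooled / count.clamp(min=1)                  # mean per beam

    return F.mse_loss(beam_pool(feat_clean), beam_pool(feat_corrupt))
```

In a training loop, these terms would be added to the standard segmentation loss, with class_to_super being a length-C tensor mapping each fine class to its superclass and beam_ids a length-N tensor of per-point beam indices; the exact pooling and distance used by the paper may differ.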