

Poster

Resilient Sensor Fusion under Adverse Sensor Failures via Multi-Modal Expert Fusion

Konyul Park · Yecheol Kim · Daehun Kim · Jun Won Choi


Abstract:

Modern autonomous driving perception systems utilize complementary multi-modal sensors such as LiDAR and cameras. Although sensor fusion architectures enhance performance in challenging environments, they still suffer significant performance drops under severe sensor failures, such as LiDAR beam reduction, LiDAR drop, limited field of view, camera drop, and occlusion. This limitation stems from inter-modality dependencies in current sensor fusion frameworks. In this study, we introduce an efficient and robust LiDAR-camera 3D object detector, referred to as Immortal, which achieves robust performance through a mixture-of-experts approach. Immortal fully decouples modality dependencies using three parallel expert decoders, which decode object queries from camera features, LiDAR features, or a combination of both, respectively. We propose the Mixture of Modal Experts (MoME) framework, in which each query is decoded selectively by one of the three expert decoders. MoME employs an Adaptive Query Router (AQR) that selects the most appropriate expert decoder for each query based on the quality of the camera and LiDAR features. This ensures that each query is processed by the best-suited expert, resulting in robust performance across diverse sensor failure scenarios. We evaluated Immortal on the nuScenes-R benchmark, where it achieved state-of-the-art performance under extreme weather and sensor failure conditions, significantly outperforming existing models across a wide range of sensor failure scenarios.
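
The per-query routing idea described in the abstract can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration rather than the authors' implementation: module names such as `ExpertDecoder` and `MoMEHead`, the hard top-1 routing inside the Adaptive Query Router, and all dimensions are assumptions introduced here for clarity, and training details (e.g., how the router is supervised) are omitted.

```python
# Minimal sketch (not the released code) of the Mixture of Modal Experts idea:
# an Adaptive Query Router scores each object query and dispatches it to one of
# three expert decoders (camera-only, LiDAR-only, fused). Names are illustrative.

import torch
import torch.nn as nn


class ExpertDecoder(nn.Module):
    """Stand-in expert: cross-attends object queries to one set of modality features."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, dim * 4), nn.ReLU(), nn.Linear(dim * 4, dim))

    def forward(self, queries: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        out, _ = self.attn(queries, feats, feats)  # cross-attention to modality features
        return queries + self.ffn(out)             # residual refinement of the queries


class MoMEHead(nn.Module):
    """Routes each query to the camera, LiDAR, or fused expert (hard top-1 routing)."""

    def __init__(self, dim: int):
        super().__init__()
        self.router = nn.Linear(dim, 3)            # assumed AQR: one logit per expert, per query
        self.experts = nn.ModuleList(ExpertDecoder(dim) for _ in range(3))

    def forward(self, queries, cam_feats, lidar_feats):
        fused_feats = torch.cat([cam_feats, lidar_feats], dim=1)
        choice = self.router(queries).argmax(dim=-1)       # (B, Q): expert index per query
        candidates = torch.stack([
            self.experts[0](queries, cam_feats),           # camera expert
            self.experts[1](queries, lidar_feats),         # LiDAR expert
            self.experts[2](queries, fused_feats),         # fused (LiDAR + camera) expert
        ], dim=0)                                          # (3, B, Q, C)
        # Keep only the selected expert's output for every query.
        idx = choice.unsqueeze(0).unsqueeze(-1).expand(1, *queries.shape)
        return candidates.gather(0, idx).squeeze(0)        # (B, Q, C)


# Toy usage: 4 samples, 100 object queries, 256-dim features.
head = MoMEHead(dim=256)
q = torch.randn(4, 100, 256)
cam = torch.randn(4, 500, 256)    # e.g. flattened camera BEV features
lidar = torch.randn(4, 600, 256)  # e.g. flattened LiDAR BEV features
print(head(q, cam, lidar).shape)  # torch.Size([4, 100, 256])
```

Because each query is sent to exactly one expert, a query whose LiDAR evidence is missing (e.g., under beam reduction or LiDAR drop) can still be decoded by the camera expert, which is how the decoupling avoids the inter-modality dependency failure mode described above.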
