Poster
Hyperdimensional Uncertainty Quantification for Multimodal Uncertainty Fusion in Autonomous Vehicles Perception
Luke Chen · Junyao Wang · Trier Mortlock · Pramod Khargonekar · Mohammad Al Faruque
ExHall D Poster #120
Sun 15 Jun, 8:30 a.m. – 10:30 a.m. PDT
Abstract:
Uncertainty Quantification (UQ) is crucial for ensuring the reliability of machine learning models deployed in real-world autonomous systems. However, existing approaches typically quantify task-level output prediction uncertainty without considering epistemic uncertainty at the multimodal feature fusion level, leading to sub-optimal outcomes. Additionally, popular uncertainty quantification methods, e.g., Bayesian approximations, remain challenging to deploy in practice due to high computational costs in training and inference. In this paper, we propose a novel deterministic uncertainty method (DUM) that efficiently quantifies feature-level epistemic uncertainty by leveraging hyperdimensional computing. Our method captures channel and spatial uncertainties through channel-wise and patch-wise projection and bundling techniques, respectively. Multimodal sensor features are then adaptively weighted to mitigate uncertainty propagation and improve feature fusion. Our evaluations show that, on average, our method outperforms state-of-the-art (SOTA) algorithms by up to 2.01%/1.27% in 3D object detection and by up to 1.29% over baselines in semantic segmentation under various types of uncertainties. Notably, it requires fewer floating point operations and fewer parameters than SOTA methods, providing an efficient solution for real-world autonomous systems.
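To make the general idea more concrete, the sketch below illustrates hyperdimensional projection and bundling of channel-wise and patch-wise features, a similarity-based proxy for feature-level epistemic uncertainty, and uncertainty-driven adaptive weighting of two modalities before fusion. This is a minimal NumPy reconstruction under assumed shapes, random projection matrices, and a cosine-similarity uncertainty score; it is not the authors' implementation.

```python
# Minimal illustrative sketch (NumPy). All names, dimensions, and the
# similarity-based uncertainty proxy are assumptions, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)
D = 4096             # hypervector dimensionality (assumed)
C, H, W = 64, 8, 8   # channels and spatial size of one modality's feature map

# Fixed random projection matrices: one per channel ("channel-wise projection")
# and one per spatial patch ("patch-wise projection").
proj_channel = rng.standard_normal((C, H * W, D)) / np.sqrt(H * W)
proj_patch = rng.standard_normal((H * W, C, D)) / np.sqrt(C)

def encode(feat):
    """Project a CxHxW feature map into channel and patch hypervectors,
    then bundle (sum) them into a single feature hypervector."""
    flat = feat.reshape(C, H * W)
    ch_hvs = np.einsum('cs,csd->cd', flat, proj_channel)    # C x D
    sp_hvs = np.einsum('sc,scd->sd', flat.T, proj_patch)    # (H*W) x D
    return ch_hvs.sum(axis=0) + sp_hvs.sum(axis=0)          # bundled: D

def uncertainty(hv, prototypes):
    """Assumed epistemic-uncertainty proxy: 1 minus the maximum cosine
    similarity to hypervector prototypes bundled from training features."""
    sims = prototypes @ hv / (np.linalg.norm(prototypes, axis=1)
                              * np.linalg.norm(hv) + 1e-8)
    return 1.0 - sims.max()

# Toy prototypes (in practice bundled from training data) and two modalities.
prototypes = rng.standard_normal((10, D))
feat_cam = rng.standard_normal((C, H, W))
feat_lidar = rng.standard_normal((C, H, W))

u = np.array([uncertainty(encode(f), prototypes) for f in (feat_cam, feat_lidar)])
w = 1.0 / (u + 1e-8)
w /= w.sum()                      # lower uncertainty -> higher fusion weight
fused = w[0] * feat_cam + w[1] * feat_lidar
print("uncertainties:", u, "weights:", w)
```

Because the projections and bundling are deterministic matrix operations rather than sampled forward passes, a scheme of this kind avoids the repeated inference required by Bayesian approximations, which is consistent with the efficiency claim in the abstract.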