Poster
Simplification Is All You Need against Out-of-Distribution Overconfidence
Keke Tang · Chao Hou · Weilong Peng · Xiang Fang · Zhize Wu · Yongwei Nie · Wenping Wang · Zhihong Tian
Deep neural networks (DNNs) often exhibit out-of-distribution (OOD) overconfidence, producing overly confident predictions on OOD samples. We attribute this issue to the inherent over-complexity of DNNs and investigate two key aspects: capacity and nonlinearity. First, we demonstrate that reducing model capacity through knowledge distillation can effectively mitigate OOD overconfidence. Second, we show that selectively reducing nonlinearity by removing ReLU operations further alleviates the issue. Building on these findings, we present a practical guide to model simplification, combining both strategies to significantly reduce OOD overconfidence. Extensive experiments validate the effectiveness of this approach in mitigating OOD overconfidence and demonstrate its superiority over state-of-the-art methods. Additionally, our simplification strategies can be combined with existing OOD detection techniques to further enhance OOD detection performance. Code will be made publicly available upon acceptance.
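The abstract does not specify the exact distillation objective, but capacity reduction via knowledge distillation is conventionally done with a temperature-softened KL loss between teacher and student logits (Hinton-style). The sketch below is an illustrative, minimal NumPy implementation under that assumption; the function names, the temperature value, and the numerical epsilon are ours, not the authors'.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax; T > 1 flattens the distribution."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on T-softened distributions, scaled by T^2
    as in standard knowledge distillation (assumed, not from the paper)."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    eps = 1e-12  # avoid log(0)
    return float(T * T * np.sum(p * (np.log(p + eps) - np.log(q + eps))))
```

A smaller student trained to minimize this loss against a larger teacher inherits the teacher's in-distribution behavior at reduced capacity, which is the capacity-reduction half of the simplification strategy described above; the KL term is zero when student and teacher logits agree and positive otherwise.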