

Paper in Workshop: 7th Safe Artificial Intelligence for All Domains (SAIAD)

Out-of-Distribution Detection with Adversarial Outlier Exposure

Thomas Botschen · Konstantin Kirchheim · Frank Ortmeier


Abstract:

Machine learning models typically perform reliably only on inputs drawn from the distribution they were trained on, making Out-of-Distribution (OOD) detection essential for safety-critical applications. While exposing models to outlier examples during training is one of the most effective ways to enhance OOD detection, recent studies suggest that synthetically generated outliers can also act as regularizers for deep neural networks. In this paper, we propose an augmentation scheme for synthetic outliers that regularizes a classifier’s energy function by adversarially lowering the outliers’ energy during training. We demonstrate that our method improves OOD detection performance and increases adversarial robustness on OOD data across several image classification benchmarks. Additionally, we show that our approach preserves in-distribution generalization. Our code is publicly available.
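The abstract describes the method only at a high level. The sketch below illustrates one plausible reading of it: a PGD-style inner loop that perturbs synthetic outliers to lower their energy (making them look maximally in-distribution), combined with an energy-margin regularizer in the style of energy-based OOD fine-tuning (Liu et al., 2020). All function names, margins, step sizes, and loss weights here are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def energy(logits, T=1.0):
        # Energy score E(x) = -T * logsumexp(f(x)/T); low for ID inputs,
        # high for OOD inputs under an energy-regularized classifier.
        return -T * torch.logsumexp(logits / T, dim=1)

    def adversarial_outliers(model, x_out, eps=8/255, alpha=2/255, steps=5):
        # PGD-style inner loop (assumed): perturb outliers within an
        # L-inf ball to *lower* their energy, i.e., gradient descent on
        # E(x) so the outliers become worst-case, ID-looking examples.
        x_adv = x_out.clone().detach()
        x_adv = (x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)).clamp(0, 1)
        for _ in range(steps):
            x_adv.requires_grad_(True)
            e = energy(model(x_adv)).mean()
            grad = torch.autograd.grad(e, x_adv)[0]
            # Step against the gradient to minimize the outliers' energy.
            x_adv = (x_adv - alpha * grad.sign()).detach()
            # Project back into the eps-ball around the original outliers.
            x_adv = torch.min(torch.max(x_adv, x_out - eps), x_out + eps).clamp(0, 1)
        return x_adv

    def training_step(model, x_in, y_in, x_out, lam=0.1, m_in=-25.0, m_out=-7.0):
        # Augment the synthetic outlier batch adversarially, then train
        # the regularizer on the hardened outliers.
        x_adv = adversarial_outliers(model, x_out)
        logits_in, logits_out = model(x_in), model(x_adv)
        # Standard classification loss on in-distribution data.
        loss_cls = F.cross_entropy(logits_in, y_in)
        # Energy-margin regularizer (assumed form): push ID energy below
        # m_in and adversarial-outlier energy above m_out; margin values
        # are illustrative.
        e_in, e_out = energy(logits_in), energy(logits_out)
        loss_reg = (F.relu(e_in - m_in) ** 2).mean() + (F.relu(m_out - e_out) ** 2).mean()
        return loss_cls + lam * loss_reg

Under this reading, adversarially lowering the outliers' energy before applying the regularizer amounts to training on worst-case outliers, which would explain the reported gains in adversarial robustness on OOD data.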
