

Poster

Uncertainty Weighted Gradients for Model Calibration

Jinxu Lin · Linwei Tao · Minjing Dong · Chang Xu


Abstract:

Model calibration is essential for ensuring that the predictions of deep neural networks accurately reflect true probabilities in real-world classification tasks. However, deep networks often produce over-confident or under-confident predictions, leading to miscalibration. Various methods have been proposed to address this issue by designing effective loss functions for calibration, such as focal loss. In this paper, we analyze its effectiveness and provide a unified loss framework covering focal loss and its variants, attributing their superiority in model calibration mainly to the loss weighting factor that estimates sample-wise uncertainty. Based on our analysis, existing loss functions fail to achieve optimal calibration performance due to two main issues: misalignment in optimization and insufficient precision in uncertainty estimation. Specifically, focal loss cannot align sample uncertainty with gradient scaling, and a single logit cannot adequately indicate uncertainty. To address these issues, we reformulate the optimization from the perspective of gradients, focusing on uncertain samples. Meanwhile, we propose using the Brier Score as the loss weighting factor, which provides a more accurate uncertainty estimate by drawing on all the logits. Extensive experiments on various models and datasets demonstrate that our method achieves state-of-the-art (SOTA) performance.
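
The abstract contrasts focal loss, whose per-sample weight depends only on the true-class probability, with a Brier-score weight computed over all logits. The PyTorch sketch below illustrates that distinction under stated assumptions; the function names, the choice of gamma, and detaching the weight are illustrative choices, not the authors' released implementation.

```python
# Hedged sketch: focal loss vs. a Brier-score-weighted cross-entropy.
# Assumptions (not from the paper's code): gamma = 2.0, the weight is
# detached so it only rescales the cross-entropy gradient.
import torch
import torch.nn.functional as F


def focal_loss(logits, targets, gamma=2.0):
    # Focal loss = (1 - p_y)^gamma * CE, where p_y is the softmax
    # probability of the true class (a single-logit uncertainty estimate).
    log_probs = F.log_softmax(logits, dim=-1)
    p_y = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1).exp()
    ce = F.cross_entropy(logits, targets, reduction="none")
    return ((1.0 - p_y) ** gamma * ce).mean()


def brier_weighted_ce(logits, targets):
    # Per-sample Brier score over ALL class probabilities, used as a
    # (detached) weight so the gradient scale tracks how far the whole
    # probability vector is from the one-hot target.
    probs = F.softmax(logits, dim=-1)
    one_hot = F.one_hot(targets, num_classes=logits.size(-1)).float()
    brier = ((probs - one_hot) ** 2).sum(dim=-1).detach()  # in [0, 2]
    ce = F.cross_entropy(logits, targets, reduction="none")
    return (brier * ce).mean()


if __name__ == "__main__":
    logits = torch.randn(8, 10, requires_grad=True)
    targets = torch.randint(0, 10, (8,))
    print("focal:", focal_loss(logits, targets).item())
    print("brier-weighted:", brier_weighted_ce(logits, targets).item())
```

In both cases the weight shrinks for confidently correct samples, but the Brier-score weight also reacts to probability mass spread across wrong classes, which is the "all the logits" point the abstract makes.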
