Evidential Deep Partial Label Learning to Quantify Disambiguation Uncertainty
Abstract
Partial label learning (PLL) is a weakly supervised learning paradigm in which each instance is assigned a set of candidate labels, only one of which is true. However, because annotations may be inaccurate, existing PLL algorithms disambiguate labels solely by minimizing the prediction loss, which leaves the model unaware of the credibility of its predictions. To address this issue, this paper proposes evidential deep partial label learning (ED-PLL), which quantifies disambiguation uncertainty in order to achieve both candidate label disambiguation and reliable prediction. First, we extend the evidence modeling mechanism to PLL, treating the candidate label set as the source of evidence for the label hypothesis and using belief and uncertainty masses to model classification uncertainty, thereby guiding a more reliable disambiguation process. Second, we compute expectations under the Dirichlet distribution over non-candidate labels and suppress their outputs with a consistency regularization, further improving disambiguation accuracy. Third, we propose a conflict-aware regularization that evaluates the degree of conflict by combining differences in prediction distributions with model uncertainty to measure intra-class consistency between instances, thereby improving the robustness of the model. In addition, we analyze our method theoretically from the perspective of the Expectation-Maximization (EM) algorithm, and ED-PLL is compatible with any deep network and stochastic optimizer. Experiments on benchmark and real-world datasets verify the effectiveness of the proposed algorithm.
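To make the evidence modeling mechanism concrete, the following is a minimal sketch of the standard subjective-logic mapping used in evidential deep learning, where non-negative per-class evidence parameterizes a Dirichlet distribution (alpha_k = e_k + 1) and yields belief masses plus an overall uncertainty mass. The function name and interface are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def belief_and_uncertainty(evidence):
    """Map non-negative class evidence e_k to subjective-logic
    belief masses b_k and an uncertainty mass u, as in evidential
    deep learning with Dirichlet parameters alpha_k = e_k + 1.
    (Illustrative sketch; not the paper's exact formulation.)"""
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.size                 # number of classes
    S = evidence.sum() + K            # Dirichlet strength sum_k alpha_k
    belief = evidence / S             # b_k = e_k / S
    uncertainty = K / S               # u = K / S, so sum_k b_k + u = 1
    prob = (evidence + 1.0) / S       # expected class probabilities E[p_k]
    return belief, uncertainty, prob

# Strong evidence for class 0 gives high belief and low uncertainty.
b, u, p = belief_and_uncertainty([9.0, 1.0, 0.0])
```

Under this mapping, instances with little total evidence receive a large uncertainty mass, which is what allows disambiguation decisions to be weighted by their credibility.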