From Softmax to Dirichlet: Evidential Learning for Semi-supervised Semantic Segmentation
Abstract
The critical challenge of semi-supervised semantic segmentation lies in fully exploiting a large volume of unlabeled data to improve the model's generalization for robust segmentation. However, existing softmax score-based filtering methods tend to be affected by the overconfidence issue in neural networks, leading to the inclusion of incorrect pseudo-labels that negatively impact training. In this paper, we propose a novel evidential learning framework that explicitly models prediction uncertainty for reliable pseudo-label selection. By modeling the distribution of class probabilities with Dirichlet distributions, we obtain principled and improved uncertainty estimates from a distributional perspective. Furthermore, we propose HESS (Hyper-ESS), which decouples the modeling of exclusive and collective evidence for comprehensive evidence perception, yielding more accurate uncertainty estimates. Extensive experiments on three challenging benchmarks demonstrate that integrating HESS into existing semi-supervised semantic segmentation frameworks consistently improves performance, benefiting from more reliable pseudo-label selection. Our work sheds light on the potential of evidential learning in semi-supervised semantic segmentation and opens up new avenues for future research. Code and models will be made available to facilitate future research.
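To make the core idea concrete, the following is a minimal illustrative sketch (not the paper's implementation) of how standard evidential deep learning derives per-pixel uncertainty from Dirichlet parameters: non-negative evidence is mapped to concentration parameters alpha = evidence + 1, and the vacuity uncertainty K / sum(alpha) can then gate pseudo-label selection. The function name and the evidence values are assumptions for illustration.

```python
# Illustrative sketch, assuming the standard subjective-logic EDL
# formulation (alpha = evidence + 1); not the paper's HESS code.
import numpy as np

def dirichlet_uncertainty(evidence):
    """evidence: non-negative array of shape (K,) for one pixel.
    Returns (expected class probabilities, vacuity uncertainty)."""
    alpha = evidence + 1.0   # Dirichlet concentration parameters
    S = alpha.sum()          # Dirichlet strength
    probs = alpha / S        # expected class probabilities
    u = len(evidence) / S    # vacuity: K / S, in (0, 1]
    return probs, u

# A pixel with strong evidence for one class is confident, while a
# pixel with little total evidence is uncertain; a threshold on u
# would filter out the latter's pseudo-label.
p1, u1 = dirichlet_uncertainty(np.array([20.0, 0.5, 0.5]))  # confident
p2, u2 = dirichlet_uncertainty(np.array([0.2, 0.3, 0.1]))   # uncertain
assert u1 < u2
```

Unlike a softmax score, the uncertainty here reflects the total amount of evidence, so a pixel the network has seen little support for stays uncertain even if one class dominates the others.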