HierUQ: Hierarchical Uncertainty Quantification with Adaptive Granularity Reconciliation for Degraded Image Classification
Abstract
Hierarchical classification (HC) on degraded images is challenging due to feature corruption, unreliable confidence estimation, and fine-grained misclassification. Existing methods struggle to balance semantic consistency against adaptive decision paths under low-quality visual conditions. To address this, we propose HierUQ, a unified framework that integrates uncertainty quantification with adaptive granularity reconciliation. A Vision Transformer backbone extracts global features, which are fused with semantic embeddings via bilinear attention and semantic-guided cross-attention. We develop a principled Hierarchical Uncertainty Quantification (HUQ) strategy based on label smoothing and proper scoring rules. When confidence at a given level is insufficient, a Confidence-Aware Path Adjustment (CAPA) mechanism adaptively rolls back the prediction to a higher-level node, mitigating over-specific predictions and error propagation under degradation while retaining fine-grained accuracy when confidence permits. To stabilize training, we further introduce a self-paced Multi-Level Joint Optimization (MLJO) scheme over hierarchical objectives with dynamic loss weighting. Experiments on degraded remote sensing and natural image benchmarks show that HierUQ achieves state-of-the-art performance with strong robustness and adaptability.
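The confidence-aware rollback described above can be illustrated with a minimal sketch. Everything here is an assumption for exposition: the hierarchy, the label names, the threshold `tau`, and the use of maximum softmax probability as the confidence score are hypothetical stand-ins for the paper's proper-scoring-rule-based uncertainty estimate, not the authors' implementation.

```python
# Hypothetical two-level hierarchy: fine-grained labels map to coarse parents.
PARENT = {"oak": "tree", "pine": "tree", "rose": "flower", "tulip": "flower"}

def capa_predict(fine_probs, tau=0.6):
    """Illustrative confidence-aware path adjustment:
    keep the fine-grained prediction when its top probability
    exceeds the threshold tau; otherwise roll back to the parent
    (coarser) node to avoid over-specific, error-prone predictions."""
    fine_label, fine_conf = max(fine_probs.items(), key=lambda kv: kv[1])
    if fine_conf >= tau:
        return fine_label         # confident: commit to the leaf class
    return PARENT[fine_label]     # uncertain: back off one hierarchy level

# Degraded input: the fine-level distribution is nearly flat, so we back off.
degraded = {"oak": 0.35, "pine": 0.30, "rose": 0.20, "tulip": 0.15}
print(capa_predict(degraded))     # rolls back to "tree"

# Clean input: a confident leaf prediction is kept as-is.
clean = {"oak": 0.85, "pine": 0.05, "rose": 0.05, "tulip": 0.05}
print(capa_predict(clean))        # stays at "oak"
```

In a deeper hierarchy the same rule would apply recursively along the decision path, backing off level by level until the confidence criterion is met.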