Paper in Workshop: Domain Generalization: Evolution, Breakthroughs, and Future Horizons

Confidence-calibrated covariate shift correction for few-shot classification in Vision-Language Models

Behraj Khan · Rizwan Qureshi · M Nouman Durrani · Tahir Qasim Syed


Abstract: Since vision-language foundation models became the mainstay for low-shot vision classification, domain generalization under insufficient target data has grown in importance. This data scarcity induces sampling bias and amplifies these models' sensitivity to variations and shifts in the data distribution. While fine-tuning on multiple domains can mitigate such domain generalization issues, it is resource-intensive and demands diverse data sources. In this work, we systematically analyze two critical challenges: (1) covariate shift between the pre-training distribution and the underspecified target distribution, and (2) confidence misalignment, where predictions on novel data are overconfident. To address both challenges simultaneously, we introduce Confidence-Calibrated Covariate Shift Correction ($CalShift$), a unified approach that combines a Fisher information penalty to mitigate covariate shift with a Confidence Misalignment Penalty (CMP) to reduce overconfidence on misclassified examples. Experimental evaluations across various vision and covariate-shift benchmarks demonstrate that $CalShift$ significantly improves model calibration, achieving up to a 5.82% reduction in Expected Calibration Error (ECE). Furthermore, $CalShift$ enhances robustness, improving accuracy by 3.5% on challenging datasets affected by covariate shift. Our results highlight $CalShift$ as a promising strategy for building robust and reliable low-shot vision-language systems for real-world applications.
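The abstract does not specify the exact form of the two penalties, but the general shape of such a combined objective can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `calshift_loss`, the weights `lam_fisher` and `lam_cmp`, the Fisher term (approximated here by the expected squared norm of the cross-entropy gradient with respect to the logits), and the CMP term (mean top-class confidence on misclassified examples) are all assumptions made for the sake of the example.

```python
import numpy as np

def softmax(z):
    """Numerically stable row-wise softmax."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def calshift_loss(logits, labels, lam_fisher=0.1, lam_cmp=0.1):
    """Illustrative combined loss: cross-entropy plus an (assumed)
    Fisher information penalty and an (assumed) confidence
    misalignment penalty on misclassified examples."""
    n, k = logits.shape
    p = softmax(logits)
    # Standard cross-entropy on the true classes.
    ce = -np.log(p[np.arange(n), labels] + 1e-12).mean()
    # Fisher-style penalty: for softmax cross-entropy the logit
    # gradient is (p - onehot); we penalize its expected squared norm.
    onehot = np.eye(k)[labels]
    grad = p - onehot
    fisher = (grad ** 2).sum(axis=1).mean()
    # CMP-style penalty: mean top-class probability on examples
    # the model currently misclassifies (discourages confident errors).
    wrong = p.argmax(axis=1) != labels
    cmp_pen = p.max(axis=1)[wrong].mean() if wrong.any() else 0.0
    return ce + lam_fisher * fisher + lam_cmp * cmp_pen
```

In a few-shot fine-tuning loop, this scalar would replace the plain cross-entropy objective; the two weights trade off shift correction against calibration.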
