CUE: Concept-Aware Multi-Label Expansion to Mitigate Concept Confusion in Long-Tailed Learning
Ruichi Zhang ⋅ Chikai Shang ⋅ Jiacheng Yang ⋅ Mengke Li ⋅ Yang Zhou ⋅ Junlong Gao ⋅ Yang Lu
Abstract
Long-tailed distributions are common in real-world recognition tasks, where a few head classes have many samples while most tail classes have very few. Recently, fine-tuning foundation models for long-tailed learning has gained attention due to their excellent performance. However, most existing methods focus solely on mitigating long-tailed distribution bias while overlooking the concept confusion caused by the long-tailed distribution. In this paper, we study this problem and attribute it to the mutual exclusivity of single-label supervision under long-tailed distributions, which suppresses feature sharing among related classes and amplifies the dominance of head classes, disrupting inter-class discriminability. To address this, we propose $\textbf{CUE}$, $\underline{C}$oncept-aware m$\underline{U}$lti-label $\underline{E}$xpansion, which introduces multi-label concept signals to preserve disrupted inter-class relationships. Specifically, CUE constructs concept sets by $\textbf{(i)}$ extracting instance-level visual cues from zero-shot CLIP and $\textbf{(ii)}$ generating class-level semantic cues with an LLM; the two cues are incorporated via separately weighted Binary Logit-Adjustment (BLA) auxiliary losses and jointly optimized with the baseline Logit-Adjustment (LA) loss. In experiments on several long-tailed benchmarks, CUE achieves balanced and strong performance, surpassing recent state-of-the-art methods. The code is available in the supplementary materials.
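The joint objective described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the exact form of the BLA loss, the prior-based adjustment, and the weights `w_vis`/`w_sem` are assumptions for illustration only.

```python
import numpy as np

def la_loss(logits, labels, class_priors, tau=1.0):
    """Logit-Adjustment (LA) cross-entropy: shift logits by the log class
    prior so that head classes require a larger margin."""
    adj = logits + tau * np.log(class_priors)            # (N, C)
    adj -= adj.max(axis=1, keepdims=True)                # numerical stability
    log_probs = adj - np.log(np.exp(adj).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def bla_loss(logits, concept_targets, class_priors, tau=1.0):
    """Binary Logit-Adjustment (BLA) sketch: per-class sigmoid BCE on
    multi-label concept targets, with the same prior-based logit shift."""
    adj = logits + tau * np.log(class_priors)
    p = 1.0 / (1.0 + np.exp(-adj))
    eps = 1e-12
    return -(concept_targets * np.log(p + eps)
             + (1 - concept_targets) * np.log(1 - p + eps)).mean()

def cue_objective(logits, labels, visual_concepts, semantic_concepts,
                  class_priors, w_vis=0.5, w_sem=0.5):
    """Baseline LA loss plus separately weighted BLA auxiliary losses for
    the CLIP-derived visual and LLM-derived semantic concept sets
    (weights here are hypothetical placeholders)."""
    return (la_loss(logits, labels, class_priors)
            + w_vis * bla_loss(logits, visual_concepts, class_priors)
            + w_sem * bla_loss(logits, semantic_concepts, class_priors))
```

Here `visual_concepts` and `semantic_concepts` are binary (N, C) matrices marking which classes each instance's concept set covers; how those sets are constructed from CLIP and the LLM is described in the paper and not reproduced here.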