Learning with Structural Labels for Learning with Noisy Labels

Noo-ri Kim · Jin-Seop Lee · Jee-Hyong Lee

Arch 4A-E Poster #332
Fri 21 Jun 5 p.m. PDT — 6:30 p.m. PDT


Deep Neural Networks (DNNs) have demonstrated remarkable performance across diverse domains and tasks with large-scale datasets. To reduce labeling costs, semi-automated and crowdsourced labeling methods have been developed, but their labels are inevitably noisy. Learning with Noisy Labels (LNL) approaches aim to train DNNs despite the presence of noisy labels. These approaches leverage the memorization effect to obtain more accurate labels through relabeling and sample selection, and then use the refined labels for subsequent training. However, such methods suffer a significant drop in generalization performance because some noisy labels inevitably remain. To overcome this limitation, we propose a new approach that enhances learning with noisy labels by incorporating additional distribution information in the form of structural labels. To exploit this distribution information for generalization, we employ a reverse k-NN, which guides the model toward a simpler feature manifold and prevents overfitting to noisy labels. The proposed method outperforms existing methods on multiple benchmark datasets with both synthetic and real-world noise.
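The abstract does not spell out how structural labels are computed, but the reverse k-NN it relies on is a standard construction: the reverse k-nearest neighbors of a point j are all points that count j among their own k nearest neighbors. A minimal NumPy sketch (the function name and Euclidean distance are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def reverse_knn(features, k):
    """Compute forward and reverse k-NN sets over a feature matrix.

    features: (n, d) array of feature vectors; k: neighbors per point.
    Returns (knn, rknn): knn[i] lists i's k nearest neighbors,
    rknn[j] lists every point that has j among its k nearest neighbors.
    """
    # Pairwise squared Euclidean distances between all feature vectors.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)  # exclude self-matches
    # Forward k-NN: indices of the k smallest distances per row.
    knn = np.argsort(d2, axis=1)[:, :k]
    # Reverse k-NN of j: all rows i whose neighbor list contains j.
    rknn = [np.where((knn == j).any(axis=1))[0] for j in range(len(features))]
    return knn, rknn
```

Note the asymmetry this exposes: every point has exactly k forward neighbors, but its reverse neighbor set can be empty (outliers) or large (hub points), which is the distributional signal a method like this can exploit.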