

Poster

Directional Label Diffusion Model for Learning from Noisy Labels

Senyu Hou · Gaoxia Jiang · Jia Zhang · Shangrong Yang · Husheng Guo · Yaqing Guo · Wenjian Wang


Abstract:

In image classification, the label quality of training data critically influences model generalization, especially for deep neural networks (DNNs). Traditional learning from noisy labels (LNL) improves the generalization of DNNs through complex architectures or a series of robust techniques, but its performance gains are limited by the discriminative paradigm. Departing from these approaches, we address the LNL problem from the perspective of robust label generation, building on diffusion models within the generative paradigm. To extend the diffusion model into a robust classifier that explicitly accommodates additional noise knowledge, we propose a Directional Label Diffusion (DLD) model. It disentangles the diffusion process into two paths, i.e., directional diffusion and random diffusion. Specifically, directional diffusion simulates the corruption of true labels into a directed noise distribution, prioritizing the removal of likely noise, whereas random diffusion introduces inherent randomness to support label recovery. This architecture enables DLD to gradually infer labels from an initial random state, interpretably diverging from the specified noise distribution. To adapt the model to diverse noisy environments, we design a low-cost label pre-correction method that automatically supplies more accurate label information to the diffusion model, without requiring manual intervention or additional iterations. In addition, we optimize the paradigm for introducing feature conditions into the diffusion model and provide a rigorous theoretical derivation. Our approach outperforms state-of-the-art methods on both simulated and real-world noisy datasets.
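To make the two-path idea concrete, the sketch below illustrates a generic forward label-diffusion process in the spirit described above: a clean one-hot label is gradually pushed toward a specified (directed) noise distribution while a uniform component injects randomness. The linear schedule, the mixing weight `lam`, and the function name `forward_label_diffusion` are all illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

def forward_label_diffusion(y0, noise_dist, T=10, lam=0.5):
    """Hypothetical forward diffusion of a one-hot label y0 over T steps.

    y0         : (K,) one-hot clean label
    noise_dist : (K,) target directed-noise distribution (directional path)
    lam        : assumed mixing weight between the directional target and
                 a uniform distribution (random path)
    """
    K = y0.shape[0]
    uniform = np.full(K, 1.0 / K)            # random-diffusion target
    # Combined terminal distribution: part directed noise, part randomness.
    target = lam * noise_dist + (1.0 - lam) * uniform
    ys = [y0]
    for t in range(1, T + 1):
        alpha = 1.0 - t / T                  # linear schedule (assumed)
        yt = alpha * y0 + (1.0 - alpha) * target
        ys.append(yt)
    return ys

K = 4
y0 = np.eye(K)[0]                            # clean label = class 0
# Example directed noise: class 0 is most often mislabeled as class 1.
noise_dist = np.array([0.1, 0.7, 0.1, 0.1])
traj = forward_label_diffusion(y0, noise_dist)
# At t = T the state equals the mixed noise target, independent of y0,
# which is the random initial state the reverse process would start from.
print(np.round(traj[-1], 3))
```

The reverse (generative) direction would then learn to invert this trajectory, recovering the clean label from the noise target; that denoising network is the substance of the paper and is not sketched here.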
