

Positive-Unlabeled Learning by Latent Group-Aware Meta Disambiguation

Lin Long · Haobo Wang · Zhijie Jiang · Lei Feng · Chang Yao · Gang Chen · Junbo Zhao

Arch 4A-E Poster #350
Fri 21 Jun 10:30 a.m. PDT — noon PDT


Positive-Unlabeled (PU) learning aims to train a binary classifier from a small set of positive examples supplemented by a substantially larger pool of unlabeled data, in the complete absence of explicitly annotated negatives. Despite its seemingly straightforward nature as a binary classification task, the best-performing PU algorithms still lag far behind their fully supervised counterparts. In this work, we identify the primary bottleneck: under unreliable binary supervision with poor semantics, it is difficult to derive discriminative representations, which in turn hinders common label disambiguation procedures. To address this problem, we propose a novel PU learning framework, Latent Group-Aware Meta Disambiguation (LaGAM), which incorporates a hierarchical contrastive learning module to extract the underlying grouping semantics within PU data and produce compact representations. As a result, LaGAM enables a more aggressive label disambiguation strategy, enhancing the robustness of training by iteratively distilling the true labels of unlabeled data directly through meta-learning. Extensive experiments show that LaGAM outperforms current state-of-the-art methods by an average of 6.8% accuracy on common benchmarks, approaching the supervised baseline. We also provide comprehensive ablations and visualized analyses to verify the effectiveness of LaGAM.
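To make the PU setting concrete, the following is a minimal illustrative sketch (not the paper's LaGAM method) of how a PU dataset is typically constructed from a fully labeled binary dataset: a small number of positives keep their labels, and everything else, i.e. the remaining positives together with all negatives, is treated as unlabeled. The function name `make_pu_split` and the toy data are assumptions for illustration only.

```python
import numpy as np

def make_pu_split(X, y, n_labeled_pos, seed=0):
    """Simulate the PU setting: return (X_pos, X_unlabeled), where X_pos
    is a small labeled-positive set and X_unlabeled is the rest of the
    data with its labels discarded (it still contains hidden positives
    and all negatives)."""
    rng = np.random.default_rng(seed)
    pos_idx = np.flatnonzero(y == 1)
    # Randomly pick a handful of positives to serve as the labeled set.
    labeled = rng.choice(pos_idx, size=n_labeled_pos, replace=False)
    unlabeled = np.setdiff1d(np.arange(len(y)), labeled)
    return X[labeled], X[unlabeled]

# Toy example: 100 samples, half of them positive, only 10 labeled.
X = np.arange(100, dtype=float).reshape(100, 1)
y = np.array([1] * 50 + [0] * 50)
X_pos, X_unl = make_pu_split(X, y, n_labeled_pos=10)
print(X_pos.shape, X_unl.shape)  # (10, 1) (90, 1)
```

A PU learner then has to recover the true positive/negative labels of the unlabeled pool; the label disambiguation step described in the abstract operates on exactly this kind of split.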
