Poster
Link-based Contrastive Learning for One-Shot Unsupervised Domain Adaptation
Yue Zhang · Mingyue Bin · Yuyang Zhang · Zhongyuan Wang · Zhen Han · Chao Liang
Unsupervised domain adaptation (UDA) aims to learn discriminative features from a labeled source domain by supervised learning and to transfer that knowledge to an unlabeled target domain via distribution alignment. However, in some real-world scenarios, e.g., public safety or access control, it is difficult to obtain sufficient source-domain data, which hinders the application of existing UDA methods. To this end, this paper investigates a realistic but rarely studied problem called one-shot unsupervised domain adaptation (OSUDA), where the source domain contains only one example per category while the target domain offers abundant unlabeled samples. Compared with UDA, OSUDA faces dual challenges in both feature learning and domain alignment due to the lack of sufficient source data. To address these challenges, we propose a simple but effective link-based contrastive learning (LCL) method for OSUDA. On the one hand, with the help of in-domain links that indicate whether two samples belong to the same cluster, LCL can learn discriminative features from abundant unlabeled target data. On the other hand, by constructing cross-domain links that show whether two clusters are bidirectionally matched, LCL can realize accurate domain alignment with only one source sample per category. Extensive experiments conducted on four public domain adaptation benchmarks, including VisDA-2017, Office-31, Office-Home, and DomainNet, demonstrate the effectiveness of the proposed LCL under the OSUDA setting. In addition, we build a realistic OSUDA surveillance-video face recognition dataset, on which LCL consistently improves recognition accuracy across various face recognition methods.
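The abstract does not give the exact loss, but the in-domain link idea can be illustrated with a minimal NumPy sketch: given a binary link matrix marking sample pairs from the same cluster, linked pairs act as positives in a softmax-style contrastive objective. The function name, the averaging scheme, and the temperature value below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def link_contrastive_loss(feats, links, tau=0.1):
    """Illustrative link-based contrastive loss (assumed form, not the paper's exact loss).

    feats: (N, D) L2-normalized feature vectors.
    links: (N, N) binary matrix; links[i, j] = 1 if samples i and j share a cluster.
    tau:   softmax temperature (hypothetical default).
    """
    sim = feats @ feats.T / tau                 # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)              # exclude self-pairs from the softmax
    # log-softmax over each anchor's similarities to all other samples
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = links.astype(bool)
    np.fill_diagonal(pos, False)
    # average log-likelihood of the linked (positive) pairs, per anchor
    pos_log_prob = np.where(pos, log_prob, 0.0)
    counts = pos.sum(axis=1)
    has_pos = counts > 0                        # skip anchors with no linked partner
    return -(pos_log_prob.sum(axis=1)[has_pos] / counts[has_pos]).mean()
```

Minimizing this pulls linked samples together and pushes unlinked ones apart, which matches the abstract's description of learning discriminative features from unlabeled target data; the cross-domain links would analogously define positives between bidirectionally matched clusters across domains.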