
Understanding and Improving Source-free Domain Adaptation from a Theoretical Perspective

Yu Mitsuzumi · Akisato Kimura · Hisashi Kashima

Arch 4A-E Poster #421
Fri 21 Jun 5 p.m. PDT — 6:30 p.m. PDT


Source-free Domain Adaptation (SFDA) is an emerging and challenging research area that addresses the problem of unsupervised domain adaptation (UDA) without access to source data. Although numerous successful methods have been proposed for SFDA, a theoretical understanding of why these methods work well is still lacking. In this paper, we examine existing SFDA methods from a theoretical perspective. Specifically, we show that SFDA loss functions composed of discriminability and diversity losses operate in the same way as the training objective in the theory of self-training under the expansion assumption, which establishes a bound on the target error. This finding yields two novel insights that enable us to build an improved SFDA method comprising 1) Model Training with Auto-Adjusting Diversity Constraint and 2) Augmentation Training with Teacher-Student Framework, resulting in better recognition performance. Extensive experiments on three benchmark datasets demonstrate the validity of the theoretical analysis and our method.
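The discriminability-plus-diversity structure the abstract refers to can be illustrated with a minimal numpy sketch. This is not the paper's method (which auto-adjusts the diversity constraint); it is the common fixed-weight information-maximization-style objective, where per-sample entropy encourages confident predictions and the entropy of the batch-averaged prediction discourages class collapse. The function name and the `div_weight` parameter are illustrative assumptions.

```python
import numpy as np

def sfda_loss(probs, div_weight=1.0):
    """Illustrative SFDA objective on a batch of softmax outputs.

    probs: (N, C) array of predicted class probabilities.
    Returns discriminability loss + weighted diversity loss.
    """
    eps = 1e-12
    # Discriminability: mean per-sample entropy; minimizing it pushes
    # each prediction toward a confident (low-entropy) distribution.
    disc = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    # Diversity: negative entropy of the batch-mean prediction;
    # minimizing it spreads predictions across classes instead of
    # collapsing onto a single class.
    mean_p = probs.mean(axis=0)
    div = np.sum(mean_p * np.log(mean_p + eps))
    return disc + div_weight * div
```

Under this objective, a batch that is confident but covers both classes scores lower (better) than an equally confident batch collapsed onto one class, since only the diversity term differs.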
