In-Person Poster presentation / top 25% paper

When Source-Free Domain Adaptation Meets Learning with Noisy Labels

Li Yi · Gezheng Xu · Pengcheng Xu · Jiaqi Li · Ruizhi Pu · Charles Ling · Ian McLeod · Boyu Wang

MH1-2-3-4 #144

Keywords: [ Unsupervised and Self-Supervised Learning ] [ Unsupervised Domain Adaptation ] [ Source-Free Domain Adaptation ] [ Noisy Label Learning ]


Abstract:

Recent state-of-the-art source-free domain adaptation (SFDA) methods have focused on learning meaningful cluster structures in the feature space, and have succeeded in adapting knowledge from a source domain to an unlabeled target domain without access to the private source data. However, existing methods rely on pseudo-labels generated by the source model, which can be noisy due to domain shift. In this paper, we study SFDA from the perspective of learning with label noise (LLN). We prove that the label noise in SFDA follows a distribution assumption different from that in the conventional LLN scenario, and that this difference renders existing LLN methods, which depend on that assumption, unable to address the label noise in SFDA. Empirical evidence confirms that applying existing LLN methods to the SFDA problem yields only marginal improvements. On the other hand, despite this fundamental difference between the label noise in the two scenarios, we demonstrate theoretically that the early-time training phenomenon (ETP), previously observed in conventional label-noise settings, also arises in the SFDA problem. Extensive experiments demonstrate significant improvements over existing SFDA algorithms by leveraging ETP to address the label noise in SFDA.
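To make the claimed contrast concrete, here is one common formalization (an illustrative sketch; the precise assumption proved in the paper may differ). Conventional LLN analyses typically assume instance-independent, class-conditional noise governed by a fixed transition matrix $T$:

$$P(\hat{y} = j \mid y = i, x) = P(\hat{y} = j \mid y = i) = T_{ij}.$$

In SFDA, by contrast, the pseudo-label is the source model's prediction, $\hat{y} = h_s(x)$, so the noise is instance-dependent: errors concentrate on the target samples that the domain shift pushes across the source decision boundaries, rather than being spread uniformly within each class.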
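For readers unfamiliar with how ETP is typically operationalized, below is a minimal PyTorch sketch of early-learning regularization (ELR, Liu et al., 2020), one standard way to exploit the early-time training phenomenon. The class name, hyperparameter values, and the assumption that the dataloader yields per-sample indices are all illustrative; this is not the authors' exact algorithm.

```python
import torch
import torch.nn.functional as F

class ELRLoss(torch.nn.Module):
    """Early-learning regularization: a temporal ensemble of past
    predictions anchors training toward what the model learned during
    the early-time phase, before it starts memorizing noisy labels."""

    def __init__(self, num_samples, num_classes, momentum=0.7, lam=3.0):
        super().__init__()
        # Running soft target for every training sample.
        self.register_buffer("targets", torch.zeros(num_samples, num_classes))
        self.momentum = momentum  # EMA coefficient for the prediction ensemble
        self.lam = lam            # regularization weight (illustrative value)

    def forward(self, logits, pseudo_labels, indices):
        probs = F.softmax(logits, dim=1)
        # Update the temporal ensemble with the current (detached) predictions.
        with torch.no_grad():
            self.targets[indices] = (
                self.momentum * self.targets[indices]
                + (1.0 - self.momentum) * probs.detach()
            )
        ce = F.cross_entropy(logits, pseudo_labels)
        # log(1 - <p, t>) is minimized when the prediction p agrees with the
        # early-time ensemble t, resisting drift toward noisy pseudo-labels.
        inner = (self.targets[indices] * probs).sum(dim=1)
        reg = torch.log(1.0 - inner.clamp(max=1.0 - 1e-4)).mean()
        return ce + self.lam * reg
```

In use, the loss replaces plain cross-entropy on source-model pseudo-labels during target adaptation; the dataloader must return each sample's dataset index so the ensemble buffer can be updated per sample.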