
Virtual presentation / poster accept

Understanding new tasks through the lens of training data via exponential tilting

Subha Maity · Mikhail Yurochkin · Moulinath Banerjee · Yuekai Sun

Keywords: [ Deep Learning and representational learning ] [ model selection ] [ concept drift ] [ out-of-distribution generalization ] [ subpopulation shift ]


Abstract:

Deploying machine learning models on new tasks is a major challenge due to differences between the distributions of the training (source) data and the new (target) data. However, the training data likely captures some of the properties of the new task. We consider the problem of reweighting the training samples to gain insights into the distribution of the target task. Specifically, we formulate a distribution shift model based on the exponential tilt assumption and learn importance weights for the training data by minimizing the KL divergence between the labeled training and unlabeled target datasets. The learned weights can then be used for downstream tasks such as target performance evaluation, fine-tuning, and model selection. We demonstrate the efficacy of our method on the Waterbirds and Breeds benchmarks.
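Below is a minimal sketch of the exponential-tilt reweighting idea described in the abstract, not the authors' implementation. It assumes features have already been extracted by a frozen encoder and that a classifier fit on the labeled source data supplies p_S(y | z); the synthetic data, the linear stand-in classifier, and all variable names are placeholders introduced for illustration.

```python
# Sketch of exponential-tilt importance weighting (illustrative, not the paper's code).
# Tilt model: p_T(z, y) = exp(theta_y^T z - A(theta)) * p_S(z, y).
# Minimizing KL between the target feature marginal and the tilted source marginal
# reduces (up to a constant) to maximizing
#   E_T[ log sum_y p_S(y|z) exp(theta_y^T z) ] - log E_S[ exp(theta_Y^T Z) ].
import torch

torch.manual_seed(0)
d, K, n_src, n_tgt = 16, 3, 500, 400

# Placeholder "features": in practice z = phi(x) from a frozen encoder.
z_src = torch.randn(n_src, d)
y_src = torch.randint(0, K, (n_src,))
z_tgt = torch.randn(n_tgt, d) + 0.5            # shifted, unlabeled target features

# Stand-in source classifier p_S(y|z): linear softmax fit on labeled source data.
W = torch.zeros(d, K, requires_grad=True)
opt = torch.optim.Adam([W], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    torch.nn.functional.cross_entropy(z_src @ W, y_src).backward()
    opt.step()
p_y_given_z_tgt = torch.softmax(z_tgt @ W, dim=1).detach()   # p_S(y|z) at target points

# Fit the tilt parameters theta by gradient ascent on the KL-derived objective.
theta = torch.zeros(K, d, requires_grad=True)
opt = torch.optim.Adam([theta], lr=0.05)
log_n_src = torch.log(torch.tensor(float(n_src)))
for _ in range(500):
    opt.zero_grad()
    # log sum_y p_S(y|z) exp(theta_y^T z), averaged over target samples
    tgt_term = torch.logsumexp(torch.log(p_y_given_z_tgt + 1e-12) + z_tgt @ theta.T, dim=1).mean()
    # log (1/n_S) sum_i exp(theta_{y_i}^T z_i), the empirical normalizer A(theta)
    src_term = torch.logsumexp((z_src * theta[y_src]).sum(dim=1), dim=0) - log_n_src
    (-(tgt_term - src_term)).backward()
    opt.step()

# Importance weights for the labeled source samples (normalized to mean 1);
# these can be plugged into weighted target-performance estimates or fine-tuning.
with torch.no_grad():
    w = torch.exp((z_src * theta[y_src]).sum(dim=1))
    w = w / w.mean()
print(w[:10])
```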