

In-Person Poster presentation / top 25% paper

The Trade-off between Universality and Label Efficiency of Representations from Contrastive Learning

Zhenmei Shi · Jiefeng Chen · Kunyang Li · Jayaram Raghuram · Xi Wu · Yingyu Liang · Somesh Jha

MH1-2-3-4 #147

Keywords: [ unsupervised and self-supervised learning ] [ complexity ] [ self-supervised learning ] [ contrastive learning ] [ foundation model ]


Abstract:

Pre-training representations (a.k.a. foundation models) has recently become a prevalent learning paradigm: one first pre-trains a representation on large-scale unlabeled data, and then learns simple predictors on top of the representation using a small amount of labeled data from the downstream tasks. There are two key desiderata for the representation: label efficiency (the ability to learn an accurate classifier on top of the representation with a small amount of labeled data) and universality (usefulness across a wide range of downstream tasks). In this paper, we focus on one of the most popular instantiations of this paradigm: contrastive learning with linear probing, i.e., learning a linear predictor on a representation pre-trained by contrastive learning. We show that there exists a trade-off between the two desiderata, so one may not be able to achieve both simultaneously. Specifically, we provide an analysis using a theoretical data model and show that, while more diverse pre-training data yield more diverse features for different tasks (improving universality), they place less emphasis on task-specific features, giving rise to larger sample complexity for downstream supervised tasks and thus worse prediction performance. Guided by this analysis, we propose a contrastive regularization method to improve the trade-off. We validate our analysis and method empirically with systematic experiments on real-world datasets and foundation models.
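
For readers unfamiliar with the setup, the sketch below illustrates linear probing on a frozen pre-trained encoder, with a contrastive term added to the downstream training loss. This is a minimal illustration under stated assumptions, not the paper's exact method: the stand-in encoder, the InfoNCE-style form of the regularizer, and the hyperparameters (`temperature`, `reg_weight`) are all assumptions made for the example.

```python
# Minimal sketch (assumptions, not the paper's implementation): linear probing
# on a frozen pre-trained encoder, with an assumed InfoNCE-style contrastive
# regularization term added to the downstream classification loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """Assumed contrastive term: pulls two views of the same input together
    and pushes apart views of different inputs (InfoNCE)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(z1.size(0))          # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Stand-in encoder; in practice this is a pre-trained foundation model.
encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
for p in encoder.parameters():
    p.requires_grad = False                     # frozen for linear probing

probe = nn.Linear(32, 10)                       # linear predictor on top
optimizer = torch.optim.SGD(probe.parameters(), lr=0.1)
reg_weight = 0.1                                # assumed trade-off weight

# Toy labeled batch; small noise stands in for real data augmentation.
x = torch.randn(16, 128)
y = torch.randint(0, 10, (16,))
x1 = x + 0.01 * torch.randn_like(x)
x2 = x + 0.01 * torch.randn_like(x)

with torch.no_grad():                           # encoder stays fixed
    feats, f1, f2 = encoder(x), encoder(x1), encoder(x2)

loss = F.cross_entropy(probe(feats), y) + reg_weight * info_nce(probe(f1), probe(f2))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Only the linear probe is updated here; how and where the paper applies its contrastive regularization is specified in the full text, not in the abstract.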
