

Poster in Workshop: Mathematical and Empirical Understanding of Foundation Models (ME-FoMo)

Improving Foundation Models for Few-Shot Learning via Multitask Finetuning

Zhuoyan Xu · Zhenmei Shi · Junyi Wei · Yin Li · Yingyu Liang

Keywords: [ foundation model ] [ few-shot learning ] [ multitask finetuning ] [ contrastive learning ]


Abstract:

Foundation models have become essential tools for AI. In this paper, we study the problem of adapting foundation models, pre-trained using contrastive learning, to downstream tasks with limited labels. We explore the paradigm of finetuning a foundation model on a set of related tasks, each with a few labeled samples, before adapting it to a target task. We show both theoretically and empirically that, with a diverse set of related tasks, this finetuning leads to reduced error on the target task compared with directly adapting the same pre-trained model, e.g., target accuracy improvements of at least 6% on miniImageNet.
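
To make the finetune-then-adapt paradigm concrete, here is a minimal PyTorch sketch. The abstract does not specify the finetuning objective, so this assumes a prototype-based (ProtoNet-style) loss over sampled few-shot tasks; the names `encoder`, `sample_task`, and `prototype_logits`, along with the synthetic Gaussian tasks, are hypothetical stand-ins for a real contrastively pre-trained model and real related tasks, not the authors' code.

```python
# Illustrative sketch only: the finetuning objective is assumed
# (prototype-based loss), and all data here is synthetic.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy network standing in for a contrastively pre-trained encoder.
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))

def sample_task(n_way=5, k_shot=5, n_query=5, dim=32):
    """Synthesize one few-shot task: n_way Gaussian clusters, with a
    k_shot support set and an n_query query set per class."""
    centers = 3.0 * torch.randn(n_way, dim)
    def draw(k):
        x = centers.repeat_interleave(k, dim=0) + torch.randn(n_way * k, dim)
        y = torch.arange(n_way).repeat_interleave(k)
        return x, y
    return draw(k_shot), draw(n_query)

def prototype_logits(z_support, y_support, z_query, n_way):
    """Score queries by negative distance to class prototypes
    (mean support embeddings)."""
    protos = torch.stack([z_support[y_support == c].mean(0)
                          for c in range(n_way)])
    return -torch.cdist(z_query, protos)

# Multitask finetuning: update the shared encoder across many related
# tasks, each contributing only a few labeled samples.
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for step in range(200):
    (xs, ys), (xq, yq) = sample_task()
    logits = prototype_logits(encoder(xs), ys, encoder(xq), n_way=5)
    loss = F.cross_entropy(logits, yq)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Adaptation to an unseen target task: classify queries by nearest
# prototype, with no further gradient updates.
with torch.no_grad():
    (xs, ys), (xq, yq) = sample_task()
    pred = prototype_logits(encoder(xs), ys, encoder(xq), n_way=5).argmax(1)
    print(f"target-task accuracy: {(pred == yq).float().mean().item():.2f}")
```

The point the abstract argues is visible in this structure: the encoder is updated across a diverse set of related tasks before the target task is ever seen, so adaptation itself needs only the target task's few labeled support samples.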
