Poster
in
Workshop: Mathematical and Empirical Understanding of Foundation Models (ME-FoMo)

Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners

Seonghyeon Ye · Doyoung Kim · Joel Jang · Joongbo Shin · Minjoon Seo

Keywords: [ large language models ] [ natural language processing ] [ zero-shot language models ]


Abstract:

Instruction-tuning, which fine-tunes a language model (LM) on various downstream tasks with task instructions, has improved zero-shot task generalization. However, instruction-tuned LMs still struggle to generalize to challenging unseen tasks containing novel labels. In this paper, we propose Flipped Learning, an alternative instruction-tuning method that trains the LM to generate the task instruction given the input instance and label. During inference, the LM trained with Flipped Learning, referred to as FLIPPED, selects the label option that is most likely to generate the task instruction. On 14 tasks of the BIG-bench benchmark, the 11B-sized FLIPPED outperforms zero-shot T0-11B and even the 16 times larger GPT-3 (175B) in the 3-shot setting, by 8.4 and 9.7 percentage points on average, respectively. Flipped Learning gives particularly large improvements on tasks with unseen labels, outperforming T0-11B by up to +20% average F1 score. This indicates that the strong task generalization of Flipped Learning comes from improved generalization to novel labels.
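The inference rule described in the abstract (pick the label whose pairing with the input makes the task instruction most likely) can be sketched as an argmax over candidate labels. The sketch below is a minimal, hypothetical illustration: `flipped_predict` and `toy_scorer` are not from the paper, and the toy scorer is a stand-in for a real seq2seq LM (the paper uses a T0-style model) scoring log P(instruction | input, label).

```python
def flipped_predict(instruction, input_text, label_options, log_likelihood_fn):
    """FLIPPED-style inference sketch: score log P(instruction | input, label)
    for each candidate label and return the label that best 'explains'
    the task instruction."""
    scores = {label: log_likelihood_fn(instruction, input_text, label)
              for label in label_options}
    return max(scores, key=scores.get)


def toy_scorer(instruction, input_text, label):
    # Hypothetical stand-in for an LM scorer. A real implementation would
    # feed (input_text, label) to a seq2seq LM and sum the log-probabilities
    # of the instruction tokens. Here we just pretend the instruction is
    # more likely when the label matches a sentiment cue in the input.
    positive_cues = {"great", "wonderful", "love", "excellent"}
    cue_positive = any(w in positive_cues for w in input_text.lower().split())
    matches = (label == "positive") == cue_positive
    return 0.0 if matches else -5.0


print(flipped_predict("Is this review positive or negative?",
                      "A wonderful film.",
                      ["positive", "negative"],
                      toy_scorer))
```

The key difference from standard instruction-tuned inference is the direction of conditioning: rather than scoring labels given the instruction and input, each label is scored by how well it helps reconstruct the instruction, which is what gives FLIPPED its robustness to novel label strings.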
