

Poster

#InsTag: Instruction Tagging for Analyzing Supervised Fine-tuning of Large Language Models

Keming Lu · Hongyi Yuan · Zheng Yuan · Runji Lin · Junyang Lin · Chuanqi Tan · Chang Zhou · Jingren Zhou

Halle B #141
Wed 8 May 1:45 a.m. PDT — 3:45 a.m. PDT

Abstract:

Pre-trained large language models (LLMs) can understand and align with human instructions through supervised fine-tuning (SFT). It is commonly believed that diverse and complex SFT data are essential for good instruction-following abilities. However, such diversity and complexity remain obscure and lack quantitative analyses. In this work, we propose InsTag, an open-set instruction tagging method that identifies the semantics and intentions of human instructions via tags, enabling definitions and quantitative analyses of instruction diversity and complexity. We obtain 6.6K fine-grained tags that comprehensively describe instructions from popular open-sourced SFT datasets. We find that the abilities of aligned LLMs benefit from more diverse and complex instructions in SFT data. Based on this observation, we propose a data sampling procedure based on InsTag and select 6K diverse and complex samples from open-source datasets for SFT. The resulting models, TagLM, outperform open-source models trained on considerably larger SFT datasets, as evaluated by MT-Bench, echoing the importance of instruction diversity and complexity and the effectiveness of InsTag. InsTag has robust potential to be extended to applications beyond data selection, as it provides an effective way to analyze the distribution of instructions.
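The tag-based sampling procedure described above can be sketched as a simple greedy selection. This is a minimal illustration, not the authors' implementation: it assumes per-sample complexity is measured by tag count and diversity by coverage of previously unseen tags, and the tagged records below are toy examples.

```python
# Hedged sketch of tag-based data selection in the spirit of InsTag.
# Assumptions (not stated in the abstract): complexity = number of tags
# on a sample; diversity gain = number of tags not yet covered.

def select_samples(tagged_data, k):
    """Greedily pick k samples that maximize new-tag coverage,
    breaking ties by per-sample tag count (complexity)."""
    selected, covered = [], set()
    pool = list(tagged_data)
    for _ in range(min(k, len(pool))):
        best = max(
            pool,
            key=lambda s: (len(set(s["tags"]) - covered), len(s["tags"])),
        )
        selected.append(best)
        covered |= set(best["tags"])
        pool.remove(best)
    return selected, covered

# Toy tagged instructions (hypothetical tags).
data = [
    {"id": 0, "tags": ["math", "reasoning"]},
    {"id": 1, "tags": ["coding", "python", "debugging"]},
    {"id": 2, "tags": ["math"]},
    {"id": 3, "tags": ["translation", "coding"]},
]
picked, tags = select_samples(data, 2)
# picked favors samples that add the most uncovered tags first.
```

In this toy run, the three-tag coding sample is chosen first (largest coverage gain), then the math/reasoning sample, covering five distinct tags with two samples.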
