

Virtual presentation / poster accept

Understanding Train-Validation Split in Meta-Learning with Neural Networks

Xinzhe Zuo · Zixiang Chen · Huaxiu Yao · Yuan Cao · Quanquan Gu

Keywords: [ Deep Learning and representation learning ] [ deep learning ] [ convolutional neural network ] [ neural networks ] [ train-validation split ] [ meta-learning ]


Abstract:

The goal of meta-learning is to learn a good prior model from a collection of tasks such that the learned prior can adapt quickly to new tasks without requiring much data from them. A common practice in meta-learning is to perform a train-validation split on each task, where the training set is used to adapt the model parameters to that specific task and the validation set is used to learn a prior model that is shared across all tasks. Despite its success and popularity in multitask learning and few-shot learning, the understanding of the train-validation split is still limited, especially when neural network models are used. In this paper, we study the benefit of the train-validation split for classification problems with neural network models trained by gradient descent. We prove that when the noise in the training samples is large, the train-validation split is necessary to learn a good prior model, whereas the train-train method fails. We validate our theory by conducting experiments on both synthetic and real datasets. To the best of our knowledge, this is the first work toward a theoretical understanding of the train-validation split in meta-learning with neural networks.
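To make the two protocols concrete, below is a minimal NumPy sketch of a first-order, MAML-style meta-learning loop that contrasts the train-validation split with the train-train alternative. The linear model, squared loss, synthetic tasks, and all hyperparameters here are illustrative assumptions for exposition, not the paper's actual construction or experimental setup.

```python
import numpy as np

# Illustrative sketch: train-validation split vs. train-train in a
# first-order, MAML-style meta-learning loop (linear model, squared loss).

rng = np.random.default_rng(0)
d, n_tasks, inner_lr, outer_lr = 5, 20, 0.1, 0.01

def loss_grad(w, X, y):
    # Gradient of the mean squared error (1/m)*||Xw - y||^2 w.r.t. w.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def meta_step(w_prior, tasks, use_split):
    grad = np.zeros_like(w_prior)
    for X, y in tasks:
        if use_split:
            # Train-validation split: adapt on the first half of each task,
            # evaluate the adapted model on the held-out second half.
            m = len(y) // 2
            X_tr, y_tr, X_val, y_val = X[:m], y[:m], X[m:], y[m:]
        else:
            # Train-train: adapt and evaluate on the same task data.
            X_tr, y_tr, X_val, y_val = X, y, X, y
        # Inner step: one gradient step of task-specific adaptation.
        w_task = w_prior - inner_lr * loss_grad(w_prior, X_tr, y_tr)
        # Outer gradient (first-order approximation: second-order terms dropped).
        grad += loss_grad(w_task, X_val, y_val)
    return w_prior - outer_lr * grad / len(tasks)

# Synthetic tasks: labels share a common direction w_star plus label noise.
w_star = rng.standard_normal(d)
tasks = []
for _ in range(n_tasks):
    X = rng.standard_normal((10, d))
    y = X @ w_star + 0.5 * rng.standard_normal(10)  # noisy labels
    tasks.append((X, y))

w = np.zeros(d)
for _ in range(100):
    w = meta_step(w, tasks, use_split=True)  # set False for train-train
cos = w @ w_star / (np.linalg.norm(w) * np.linalg.norm(w_star) + 1e-12)
print("cosine similarity between learned prior and w*:", cos)
```

Toggling `use_split` switches between the two protocols discussed in the abstract; the split variant evaluates the adapted parameters on data not used for adaptation, which is the mechanism the paper analyzes under large label noise.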
