

In-Person Poster presentation / poster accept

Self-Supervised Set Representation Learning for Unsupervised Meta-Learning

Dong Bok Lee · Seanie Lee · Kenji Kawaguchi · Yunji Kim · Jihwan Bang · Jung-Woo Ha · Sung Ju Hwang

MH1-2-3-4 #165

Keywords: [ Unsupervised and Self-Supervised Learning ] [ Set Representation Learning ] [ Self-Supervised Learning ] [ Unsupervised Meta-Learning ]


Abstract:

Unsupervised meta-learning (UML) essentially shares the spirit of self-supervised learning (SSL), in that both aim to learn models without any human supervision so that the models can be adapted to downstream tasks. Further, the learning objective of self-supervised learning, which pulls positive pairs closer and repels negative pairs, resembles that of metric-based meta-learning. Metric-based meta-learning, one of the most successful families of meta-learning methods, learns to minimize the distance between representations from the same class. One notable aspect of metric-based meta-learning, however, is that it is widely interpreted as a set-level problem, since inferring discriminative class prototypes (or set representations) from a few examples is crucial for downstream task performance. Motivated by this, we propose Set-SimCLR, a novel self-supervised set representation learning framework targeting the UML problem. Specifically, Set-SimCLR learns a set encoder on top of instance representations to maximize the agreement between two sets of augmented samples, each generated by applying stochastic augmentations to a given image. We theoretically analyze how the proposed set representation learning can potentially improve generalization performance at meta-test time. We also empirically validate its effectiveness on various benchmark datasets, showing that Set-SimCLR substantially outperforms both UML and instance-level self-supervised learning baselines.
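The following is a minimal sketch of the objective the abstract describes: a SimCLR-style (NT-Xent) contrastive loss applied to set representations produced by a permutation-invariant set encoder. Everything here (the mean-pooling SetEncoder, the nt_xent helper, the random tensors standing in for backbone instance embeddings) is an illustrative assumption, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SetEncoder(nn.Module):
    """Toy permutation-invariant set encoder (assumed): per-element MLP + mean pooling."""
    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.phi = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, set_size, dim) -> pooled set representation (batch, dim)
        return self.phi(x).mean(dim=1)


def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Standard SimCLR NT-Xent loss between two batches of representations."""
    B = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)    # (2B, dim)
    sim = z @ z.t() / tau                                 # scaled cosine similarities
    mask = torch.eye(2 * B, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))            # exclude self-pairs
    # Row i's positive is the other augmented set of the same image.
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)]).to(z.device)
    return F.cross_entropy(sim, targets)


# Usage: each image yields two *sets* of stochastically augmented views;
# random tensors stand in for the instance embeddings a backbone would produce.
B, set_size, dim = 32, 5, 128
set_a = torch.randn(B, set_size, dim)
set_b = torch.randn(B, set_size, dim)
encoder = SetEncoder(dim)
loss = nt_xent(encoder(set_a), encoder(set_b))
loss.backward()
```

Mean pooling is only the simplest permutation-invariant aggregator; the paper's actual set encoder is presumably more expressive, and the backbone that maps augmented images to instance embeddings is omitted here.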
