

In-Person Poster presentation / top 25% paper

Task-customized Masked Autoencoder via Mixture of Cluster-conditional Experts

Zhili LIU · Kai Chen · Jianhua Han · Lanqing HONG · Hang Xu · Zhenguo Li · James Kwok

MH1-2-3-4 #143

Keywords: [ Unsupervised and Self-supervised learning ]


Abstract:

Masked Autoencoder (MAE) is a prevailing self-supervised learning method that achieves promising results in model pre-training. However, when the data distributions of the various downstream tasks differ from that of the pre-training data, semantically irrelevant pre-training information can cause negative transfer, impeding MAE's scalability. To address this issue, we propose a novel MAE-based pre-training paradigm, Mixture of Cluster-conditional Experts (MoCE), which is trained only once but provides customized pre-trained models for diverse downstream tasks. Unlike the standard Mixture of Experts (MoE), MoCE trains each expert only on semantically relevant images by using cluster-conditional gates. Each downstream task can thus be allocated a customized model pre-trained on the data most similar to the downstream data. Experiments on a collection of 11 downstream tasks show that MoCE outperforms the vanilla MAE by 2.45% on average. It also achieves new state-of-the-art self-supervised learning results on detection and segmentation.
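To make the cluster-conditional gating idea concrete, below is a minimal sketch (not the authors' released implementation) of how a MoCE-style layer could route an image's tokens to experts based on an offline cluster assignment. The names MoCELayer, Expert, num_clusters, and the hard top-1 routing are illustrative assumptions; cluster ids are assumed to come from, e.g., k-means over pre-trained MAE features.

import torch
import torch.nn as nn


class Expert(nn.Module):
    """A standard transformer feed-forward block used as one expert."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x):
        return self.net(x)


class MoCELayer(nn.Module):
    """Sketch of a cluster-conditional MoE layer: the gate is conditioned on the
    image's cluster id, so all tokens of images from the same cluster are routed
    to the same expert (illustrative, hypothetical implementation)."""
    def __init__(self, dim: int, hidden: int, num_experts: int, num_clusters: int):
        super().__init__()
        self.experts = nn.ModuleList(Expert(dim, hidden) for _ in range(num_experts))
        self.cluster_embed = nn.Embedding(num_clusters, dim)
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, tokens: torch.Tensor, cluster_id: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, dim); cluster_id: (batch,) integer cluster labels.
        logits = self.gate(self.cluster_embed(cluster_id))   # (batch, num_experts)
        top1 = logits.argmax(dim=-1)                         # hard top-1 routing per image
        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            mask = top1 == e
            if mask.any():
                out[mask] = expert(tokens[mask])             # each expert sees only its cluster's images
        return out


if __name__ == "__main__":
    layer = MoCELayer(dim=768, hidden=3072, num_experts=4, num_clusters=16)
    x = torch.randn(2, 196, 768)        # two images, 196 patch tokens each
    cid = torch.tensor([3, 7])          # precomputed (offline) cluster assignments
    print(layer(x, cid).shape)          # torch.Size([2, 196, 768])

In this sketch the routing decision depends only on the image-level cluster embedding rather than on individual tokens, which is what would let each expert accumulate semantically coherent pre-training data; details such as load balancing and how a downstream task picks its expert are omitted.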
