

Virtual presentation / poster accept

Your Contrastive Learning Is Secretly Doing Stochastic Neighbor Embedding

Tianyang Hu · Zhili LIU · Fengwei Zhou · Wenjia Wang · Weiran Huang

Keywords: [ Unsupervised and Self-supervised learning ] [ theoretical understanding ] [ stochastic neighbor embedding ] [ contrastive learning ]


Abstract: Contrastive learning, especially self-supervised contrastive learning (SSCL), has achieved great success in extracting powerful features from unlabeled data. In this work, we contribute to the theoretical understanding of SSCL and uncover its connection to the classic data visualization method, stochastic neighbor embedding (SNE), whose goal is to preserve pairwise distances. From the perspective of preserving neighboring information, SSCL can be viewed as a special case of SNE with the input-space pairwise similarities specified by data augmentation. The established correspondence facilitates a deeper theoretical understanding of the features learned by SSCL, as well as methodological guidelines for practical improvement. Specifically, through the lens of SNE, we provide novel analyses of domain-agnostic augmentations, the implicit bias, and the robustness of learned features. To illustrate the practical advantage, we demonstrate that the modifications from SNE to $t$-SNE can also be adopted in the SSCL setting, achieving significant improvements in both in-distribution and out-of-distribution generalization.
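As a rough illustration of the correspondence the abstract describes (a minimal sketch, not the authors' code; function names, the temperature value, and shapes are illustrative assumptions), the snippet below shows how an SNE-style objective $\sum_{ij} p_{ij} \log(p_{ij}/q_{ij})$, with input-space similarities $p_{ij}$ placed entirely on augmentation-generated positive pairs and embedding similarities $q_{ij}$ given by a softmax over pairwise similarities, reduces up to a constant to the familiar InfoNCE/NT-Xent contrastive loss.

```python
# Minimal sketch (assumption, not the paper's implementation): with p_ij
# uniform over the N augmented positive pairs and zero elsewhere, the SNE
# objective KL(p || q) equals the InfoNCE loss up to an additive constant.
import torch
import torch.nn.functional as F

def sne_contrastive_loss(z1, z2, temperature=0.5):
    """SNE-as-contrastive loss: q_ij is a softmax over scaled cosine
    similarities; p_ij puts all mass on each sample's augmented view."""
    z1 = F.normalize(z1, dim=1)        # embeddings of view 1, shape (N, d)
    z2 = F.normalize(z2, dim=1)        # embeddings of view 2, shape (N, d)
    z = torch.cat([z1, z2], dim=0)     # (2N, d)
    sim = z @ z.t() / temperature      # pairwise embedding similarities
    sim.fill_diagonal_(float("-inf"))  # exclude self-pairs from q
    n = z1.size(0)
    # index of each sample's positive pair (its other augmented view)
    pos = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    # cross-entropy against the positive index = KL(p || q) + const
    return F.cross_entropy(sim, pos)

# usage: z1, z2 are encoder outputs for two augmentations of one batch
z1, z2 = torch.randn(8, 32), torch.randn(8, 32)
loss = sne_contrastive_loss(z1, z2)
```

Under this reading, the SNE-to-$t$-SNE modification the abstract alludes to would amount to replacing the exponential (softmax) kernel defining $q_{ij}$ with a heavy-tailed Student-$t$ kernel; the exact form used in the paper is specified there, not here.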
