Virtual presentation / poster accept

Contrastive Learning Can Find An Optimal Basis For Approximately View-Invariant Functions

Daniel D Johnson · Ayoub El Hanchi · Chris Maddison

Keywords: [ Unsupervised and Self-supervised learning ] [ spectral clustering ] [ representation learning ] [ self-supervised learning ] [ kernel PCA ] [ eigenfunction ] [ minimax optimal ] [ Markov chain ] [ positive definite ] [ invariance ] [ contrastive learning ] [ kernel ]


Abstract:

Contrastive learning is a powerful framework for learning self-supervised representations that generalize well to downstream supervised tasks. We show that multiple existing contrastive learning methods can be reinterpreted as learning kernel functions that approximate a fixed positive-pair kernel. We then prove that a simple representation obtained by combining this kernel with PCA minimizes the worst-case approximation error of linear predictors, under a straightforward assumption that positive pairs have similar labels. Our analysis is based on a decomposition of the target function in terms of the eigenfunctions of a positive-pair Markov chain, and a surprising equivalence between these eigenfunctions and the output of Kernel PCA. We give generalization bounds for downstream linear prediction using our kernel PCA representation, and show empirically on a set of synthetic tasks that applying kernel PCA to contrastive learning models can indeed approximately recover the Markov chain eigenfunctions, although the accuracy depends on the kernel parameterization as well as on the augmentation strength.
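The abstract describes a pipeline of treating a trained contrastive model as a kernel function and then applying kernel PCA to obtain a representation. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes an InfoNCE-style exponentiated inner-product kernel (other parameterizations are possible), and names such as `encoder` and `temperature` are hypothetical placeholders.

```python
# Minimal sketch (assumed details, not the paper's code): use a trained
# contrastive encoder to define a kernel over inputs, then apply kernel PCA
# so that the resulting coordinates approximate eigenfunctions of the
# positive-pair Markov chain, as described in the abstract.
import numpy as np

def contrastive_kernel(encoder, X, temperature=1.0):
    """Kernel induced by a contrastive encoder: exp(<f(x), f(x')> / t).

    This mirrors one common InfoNCE-style parameterization; it is an
    assumption here, not the only form covered by the paper.
    """
    Z = encoder(X)                       # (n, d) feature matrix
    return np.exp(Z @ Z.T / temperature)

def kernel_pca(K, num_components):
    """Embed the training points using the top components of kernel matrix K."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    Kc = H @ K @ H                        # centered kernel matrix
    eigvals, eigvecs = np.linalg.eigh(Kc) # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:num_components]
    lam, V = eigvals[order], eigvecs[:, order]
    # Training-point projections onto the principal components: sqrt(lam_i) * v_i.
    return V * np.sqrt(np.maximum(lam, 0.0))

# Usage (hypothetical): fit a linear predictor for a downstream task on top
# of `features`, e.g.
# features = kernel_pca(contrastive_kernel(my_encoder, X), num_components=32)
```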
