Poster in Workshop: Mathematical and Empirical Understanding of Foundation Models (ME-FoMo)

The SSL Interplay: Augmentations, Inductive Bias, and Generalization

Vivien Cabannes · Bobak Kiani · Randall Balestriero · Yann LeCun · Alberto Bietti

Keywords: [ SSL ] [ theory for practitioners ]


Abstract:

Self-supervised learning (SSL) has emerged as a powerful framework for learning representations from raw data without supervision. Yet in practice, engineers face issues such as instability in tuning optimizers and collapse of representations during training. Such challenges motivate the need for a theory to shed light on the complex interplay between the choice of data augmentation, network architecture, and training algorithm. We study this interplay through a precise analysis of generalization performance on both pretraining and downstream tasks in a theory-friendly setup, and highlight several insights for SSL practitioners that arise from our theory.
