

Poster

Usable Information and Evolution of Optimal Representations During Training

Michael Kleinman · Alessandro Achille · Daksh Idnani · Jonathan Kao

Virtual

Keywords: [ learning dynamics ] [ usable information ] [ initialization ] [ representation learning ] [ SGD ]


Abstract:

We introduce a notion of usable information contained in the representation learned by a deep network, and use it to study how optimal representations for a task emerge during training. We show that the implicit regularization from training with Stochastic Gradient Descent with a high learning rate and a small batch size plays an important role in learning minimal sufficient representations for the task. In the process of arriving at a minimal sufficient representation, we find that the content of the representation changes dynamically during training: semantically meaningful but ultimately task-irrelevant information is encoded during the early transient dynamics of training and later discarded. In addition, we evaluate how perturbing the initial phase of training affects the learning dynamics and the resulting representations. We demonstrate these effects both on perceptual decision-making tasks inspired by the neuroscience literature and on standard image classification tasks.
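As a rough illustration of how a notion of usable information can be operationalized (this is a hedged sketch, not the authors' exact method): one common approach is to measure the information a restricted family of decoders, such as linear probes, can extract from a representation, i.e. the label entropy H(y) minus the cross-entropy achieved by the best probe. The function `usable_information`, the choice of a logistic-regression probe, and the synthetic data below are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss


def usable_information(Z, y, num_classes):
    """Estimate usable information (in nats) in representation Z about
    labels y, under a family of linear probes: H(y) - CE(probe(Z), y).
    Illustrative sketch; not the paper's exact estimator."""
    # Entropy of the label marginal, H(y).
    counts = np.bincount(y, minlength=num_classes).astype(float)
    p = counts / counts.sum()
    h_y = -np.sum(p[p > 0] * np.log(p[p > 0]))

    # Cross-entropy of a fitted linear probe, approximating the infimum
    # over the probe family. For a faithful estimate, fit on a train
    # split and evaluate on held-out data; we reuse Z here for brevity.
    probe = LogisticRegression(max_iter=1000).fit(Z, y)
    ce = log_loss(y, probe.predict_proba(Z), labels=np.arange(num_classes))

    return h_y - ce


# Hypothetical usage: compare representations at two training stages.
rng = np.random.default_rng(0)
y = rng.integers(0, 10, size=2000)
Z_early = rng.normal(size=(2000, 64))                        # uninformative
Z_late = np.eye(10)[y] + 0.1 * rng.normal(size=(2000, 10))   # label-aligned
print(usable_information(Z_early, y, 10))  # near 0 nats
print(usable_information(Z_late, y, 10))   # close to H(y) = log 10
```

Under this sketch, tracking the estimate across training checkpoints, for the task label and for nuisance attributes, would show task-relevant usable information rising while irrelevant information that appears early in training is later discarded.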
