

Poster

Grokking as a First Order Phase Transition in Two Layer Networks

Noa Rubin · Inbar Seroussi · Zohar Ringel

Halle B #213
Tue 7 May 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

A key property of deep neural networks (DNNs) is their ability to learn new features during training. This intriguing aspect of deep learning stands out most clearly in the recently reported Grokking phenomenon. While mainly reflected as a sudden increase in test accuracy, Grokking is also believed to be a phenomenon beyond lazy learning/Gaussian Process (GP) behavior, one that involves feature learning. Here we apply a recent development in the theory of feature learning, the adaptive kernel approach, to two teacher-student models with cubic-polynomial and modular-addition teachers. We provide analytical predictions on the feature learning and Grokking properties of these models and demonstrate a mapping between Grokking and the theory of phase transitions. We show that after Grokking, the state of the DNN is analogous to the mixed phase following a first-order phase transition. In this mixed phase, the DNN generates useful internal representations of the teacher that are sharply distinct from those before the transition.
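For readers who want to see the qualitative phenomenon, the sketch below sets up a modular-addition teacher-student experiment with a two-layer student network and tracks train/test accuracy over training. This is a minimal illustration, not the authors' code: the modulus, hidden width, activation, optimizer, and weight-decay strength are all illustrative assumptions.

```python
# Minimal sketch of a grokking-style experiment (assumptions, not the paper's setup):
# a two-layer network trained on modular addition with strong weight decay.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
p = 23            # modulus of the addition teacher (illustrative choice)
frac_train = 0.5  # fraction of the p*p input pairs used for training

# Full dataset of (a, b) -> (a + b) mod p, with one-hot encoded inputs.
pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))
x = torch.cat([F.one_hot(pairs[:, 0], p), F.one_hot(pairs[:, 1], p)], dim=1).float()
y = (pairs[:, 0] + pairs[:, 1]) % p

perm = torch.randperm(p * p)
n_train = int(frac_train * p * p)
train_idx, test_idx = perm[:n_train], perm[n_train:]

# Two-layer student: one hidden layer plus a linear readout.
# ReLU and width 256 are assumptions; the paper analyzes specific activations.
model = nn.Sequential(nn.Linear(2 * p, 256), nn.ReLU(), nn.Linear(256, p))

# Weight decay is the regularizer commonly associated with grokking dynamics.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)

for step in range(20001):
    opt.zero_grad()
    loss = F.cross_entropy(model(x[train_idx]), y[train_idx])
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            acc = lambda idx: (model(x[idx]).argmax(-1) == y[idx]).float().mean().item()
            print(f"step {step:6d}  train acc {acc(train_idx):.3f}  test acc {acc(test_idx):.3f}")
```

In grokking-style runs, the train accuracy typically saturates long before the test accuracy rises; the paper's claim is that this delayed jump corresponds to a first-order phase transition into a mixed phase with sharply different internal representations.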
