Poster in Workshop: World Models: Understanding, Modelling and Scaling

Emergent Stack Representations in Modeling Counter Languages Using Transformers

Utkarsh Tiwari · Aviral Gupta · Michael Hahn

Keywords: [ Transformers ] [ Mechanistic Interpretability ] [ AI Safety and Alignment ] [ Formal Languages ] [ Algorithmic Learning ] [ Probing Classifiers ] [ Counter Languages ]


Abstract:

Transformer architectures are the backbone of most modern language models, yet understanding their inner workings largely remains an open problem. One way prior research has tackled this problem is by isolating the learning capabilities of these architectures, training them on well-understood classes of formal languages. We extend this line of work by analyzing models trained on counter languages, which can be modeled using counter variables. We train transformer models on four counter languages and equivalently formulate these languages using stacks, whose depths correspond to the counter values. We then probe the models' internal representations for the stack depth at each input token and show that, when trained as next-token predictors, they learn stack-like representations. This brings us closer to understanding the algorithmic details of how transformers learn formal languages and aids circuit discovery.
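To make the probing setup concrete, the sketch below illustrates one way such an experiment can be shaped: a Dyck-1-style counter language, per-token stack-depth labels, and a linear regression probe on per-token hidden states. The language choice, the tiny randomly initialized encoder, and the scikit-learn probe are illustrative assumptions, not the authors' exact configuration; in the actual study the probe would be run on a transformer trained as a next-token predictor on the four counter languages.

```python
# Minimal sketch of a stack-depth probing experiment (illustrative only).
# TinyEncoder is a randomly initialized stand-in for the trained model.
import random
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LinearRegression

VOCAB = {"(": 0, ")": 1}

def sample_dyck1(max_len=20):
    """Sample a balanced-bracket string (a Dyck-1 / one-counter language example)."""
    s, depth = [], 0
    while len(s) < max_len:
        if depth == 0 or (random.random() < 0.5 and len(s) + depth < max_len):
            s.append("("); depth += 1
        else:
            s.append(")"); depth -= 1
    s.extend(")" * depth)  # close any remaining open brackets
    return "".join(s)

def stack_depths(s):
    """Per-token stack depth: +1 after '(', -1 after ')'; equals the counter value."""
    d, out = 0, []
    for ch in s:
        d += 1 if ch == "(" else -1
        out.append(d)
    return out

class TinyEncoder(nn.Module):
    """Stand-in transformer; replace with the trained next-token predictor."""
    def __init__(self, d_model=32):
        super().__init__()
        self.emb = nn.Embedding(len(VOCAB), d_model)
        self.pos = nn.Embedding(512, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.enc = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, ids):
        pos = torch.arange(ids.size(1)).unsqueeze(0)
        return self.enc(self.emb(ids) + self.pos(pos))  # (1, T, d_model)

model = TinyEncoder().eval()

feats, labels = [], []
for _ in range(200):
    s = sample_dyck1()
    ids = torch.tensor([[VOCAB[c] for c in s]])
    with torch.no_grad():
        h = model(ids)[0].numpy()  # per-token hidden states, shape (T, d_model)
    feats.append(h)
    labels.extend(stack_depths(s))

X, y = np.concatenate(feats), np.array(labels)
probe = LinearRegression().fit(X, y)  # linear probe: hidden state -> stack depth
print("probe R^2:", probe.score(X, y))
```

Under this setup, a high probe R^2 on held-out strings would indicate that stack depth (equivalently, the counter value) is linearly recoverable from the model's per-token representations, which is the kind of evidence the abstract refers to.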
