

Poster
in
Workshop: World Models: Understanding, Modelling and Scaling

Transformers Use Causal World Models in Maze-Solving Tasks

Alexander Spies · William Edwards · Michael Ivanitskiy · Adrians Skapars · Tilman Räuker · Katsumi Inoue · Alessandra Russo · Murray Shanahan

Keywords: [ positional encodings ] [ feature activation ] [ transformers ] [ sparse autoencoders (SAEs) ] [ circuit analysis ] [ residual stream ] [ mechanistic interpretability ] [ model generalization ] [ world models ]


Abstract:

Recent studies in interpretability have explored the inner workings of transformer models trained on tasks across various domains, often discovering that these networks naturally develop highly structured representations. When such representations comprehensively reflect the task domain's structure, they are commonly referred to as "World Models" (WMs). In this work, we identify WMs in transformers trained on maze-solving tasks. By using Sparse Autoencoders (SAEs) and analyzing attention patterns, we examine the construction of WMs and demonstrate consistency between SAE feature-based and circuit-based analyses. By subsequently intervening on isolated features to confirm their causal role, we find that it is easier to activate features than to suppress them. Furthermore, we find that models can reason about mazes involving more simultaneously active features than they encountered during training; however, when these same mazes (with greater numbers of connections) are provided to models via input tokens instead, the models fail. Finally, we demonstrate that positional encoding schemes appear to influence how World Models are structured within the model's residual stream.
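To make the feature-intervention idea in the abstract concrete, here is a minimal PyTorch sketch of one common way such interventions are done in mechanistic interpretability: adding an SAE feature's decoder direction into the residual stream via a forward hook. This is not the authors' code; the block, `feature_direction`, and `scale` are illustrative placeholders standing in for a real transformer layer, a trained SAE decoder column, and a chosen intervention strength.

```python
# Sketch (assumed setup, not the authors' implementation): activate a single
# SAE feature by injecting its decoder direction into the residual stream.
import torch
import torch.nn as nn

d_model = 64

# Stand-in for one transformer block whose output is a residual-stream state.
block = nn.Sequential(
    nn.Linear(d_model, d_model),
    nn.GELU(),
    nn.Linear(d_model, d_model),
)

# Hypothetical SAE decoder direction for the feature being activated.
feature_direction = torch.randn(d_model)
feature_direction = feature_direction / feature_direction.norm()
scale = 5.0  # intervention strength (assumed)

def activate_feature_hook(module, inputs, output):
    # Returning a value from a forward hook replaces the module's output;
    # here we add the feature direction at every token position.
    return output + scale * feature_direction

handle = block.register_forward_hook(activate_feature_hook)

resid = torch.randn(1, 16, d_model)   # (batch, seq, d_model) residual stream
steered = block(resid)                # output now carries the injected feature
handle.remove()
```

Suppressing a feature would instead subtract (or project out) the same direction; as the abstract notes, the authors found activation to be easier than suppression.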
