

Spotlight Poster

Linearity of Relation Decoding in Transformer Language Models

Evan Hernandez · Arnab Sen Sharma · Tal Haklay · Kevin Meng · Martin Wattenberg · Jacob Andreas · Yonatan Belinkov · David Bau

Halle B #131
[ Project Page ]
Thu 9 May 1:45 a.m. PDT — 3:45 a.m. PDT

Abstract:

Much of the knowledge encoded in transformer language models (LMs) may be expressed in terms of relations: relations between words and their synonyms, entities and their attributes, etc. We show that, for a subset of relations, this relation-decoding computation is well-approximated by a single linear transformation on the subject representation. Linear relation representations may be obtained by constructing a first-order approximation to the LM from a single prompt, and they exist for a variety of factual, commonsense, and linguistic relations. However, we also identify many cases in which LM predictions capture relational knowledge accurately, but this knowledge is not linearly encoded in their representations. Our results thus reveal a simple, interpretable, but heterogeneously deployed knowledge representation strategy in transformer LMs.
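To make the abstract's central idea concrete, the sketch below illustrates one way to build such a first-order approximation: treat the LM's computation from a subject's intermediate hidden state s to its final-layer output as a function F, linearize it as F(s) ≈ Ws + b with W the Jacobian of F at s and b = F(s) − Ws, and then apply the same (W, b) to a new subject. This is a hypothetical illustration, not the authors' released code: the model choice (gpt2), the layer index, the single-token subjects, and the prompts are all assumptions, and the paper's additional steps (averaging the Jacobian over several prompts and rescaling W) are omitted.

```python
# Hypothetical sketch of a linear relation operator via a first-order
# approximation of a transformer LM, in the spirit of the abstract.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

LAYER = 8  # assumed intermediate layer holding the subject representation


def subject_rep_and_runner(prompt: str, subject: str):
    """Return the subject's hidden state at LAYER and a function F that
    re-runs the remaining layers with that hidden state substituted."""
    enc = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        hs = model.transformer(**enc, output_hidden_states=True).hidden_states
    # position of the subject's token (assumed to be a single token)
    subj_id = tok(subject)["input_ids"][0]
    idx = enc["input_ids"][0].tolist().index(subj_id)
    s = hs[LAYER][0, idx].detach()

    def F(s_vec):
        h = hs[LAYER].clone()
        h[0, idx] = s_vec  # substitute the subject representation
        for block in model.transformer.h[LAYER:]:
            h = block(h)[0]  # GPT-2 blocks apply the causal mask themselves
        # final hidden state at the last token position, after final LayerNorm
        return model.transformer.ln_f(h)[0, -1]

    return s, F


# Linearize F around the subject "France" in a single prompt.
s, F = subject_rep_and_runner("The capital city of France is", " France")
W = torch.autograd.functional.jacobian(F, s)  # (d, d) weight of the operator
b = F(s).detach() - W @ s                     # bias so that F(s) = W s + b

# Apply the same linear operator to a *new* subject's representation and
# decode it with the LM head; if the relation is linearly encoded, the
# top token should be close to " Warsaw".
s2, _ = subject_rep_and_runner("The capital city of Poland is", " Poland")
logits = model.lm_head(W @ s2 + b)
print(tok.decode(logits.argmax().item()))
```

If the linear approximation is faithful for this relation, the decoded token for the new subject matches the LM's own prediction; the abstract's negative result is that for many relations no such (W, b) reproduces the model's behavior, even when the model answers correctly.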
