

Spotlight Poster

Learning the greatest common divisor: explaining transformer predictions

François Charton

Halle B #262
Thu 9 May 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract: The predictions of small transformers, trained to calculate the greatest common divisor (GCD) of two positive integers, can be fully characterized by looking at model inputs and outputs. As training proceeds, the model learns a list $\mathcal D$ of integers, products of divisors of the base used to represent integers and small primes, and predicts the largest element of $\mathcal D$ that divides both inputs. Training distributions impact performance. Models trained from uniform operands only learn a handful of GCDs (up to $38$ GCDs $\leq 100$). Log-uniform operands boost performance to $73$ GCDs $\leq 100$, and a log-uniform distribution of outcomes (i.e. of GCDs) to $91$. However, training from uniform (balanced) GCDs breaks explainability.
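The prediction rule described in the abstract can be sketched in a few lines of Python. The snippet below is illustrative only: the candidate list `D` is built here as a hypothetical example (base $10$, small primes $2$, $3$, $5$, and a cap of $100$), whereas the list an actual model learns depends on the representation base and the training run. The function and parameter names (`candidate_list`, `predicted_gcd`, `small_primes`, `limit`) are not from the paper.

```python
from math import gcd

def candidate_list(base=10, small_primes=(2, 3, 5), limit=100):
    """Illustrative list D: products of divisors of the base and small primes.
    Hypothetical construction for demonstration; the list a trained model
    learns depends on the base and on training."""
    base_divisors = [d for d in range(1, base + 1) if base % d == 0]
    candidates = set()
    for d in base_divisors:
        candidates.add(d)
        for p in small_primes:
            if d * p <= limit:
                candidates.add(d * p)
    return sorted(candidates)

def predicted_gcd(a, b, D):
    """Return the largest element of D dividing both inputs, mimicking the
    behaviour attributed to the trained transformer."""
    return max(d for d in D if a % d == 0 and b % d == 0)

D = candidate_list()
print(predicted_gcd(12, 18, D))  # 6: the true GCD, since 6 = 2*3 is in D
print(predicted_gcd(7, 14, D))   # 1: the true GCD is 7, but 7 is not in D
print(gcd(12, 18), gcd(7, 14))   # reference values from math.gcd
```

The second example shows why the characterization matters: whenever the true GCD is not in $\mathcal D$, the model (under this rule) falls back to the largest divisor it does represent.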
