

Poster in Workshop: Deep Learning for Code

Show Your Work: Scratchpads for Intermediate Computation with Language Models

Maxwell Nye · Anders J Andreassen · Guy Gur-Ari · Henryk Michalewski · Jacob Austin · David Bieber · David Dohan · Aitor Lewkowycz · Maarten Bosma · David Luan · Charles Sutton · Augustus Odena


Abstract:

Large pre-trained language models perform remarkably well on tasks that can be done "in one pass", such as generating realistic text or synthesizing computer programs. However, they struggle with tasks that require unbounded multi-step computation, such as adding integers or executing programs. Surprisingly, we find that these same models are able to perform complex multi-step computations - even in the few-shot regime - when asked to perform the operation "step by step", showing the results of intermediate computations. In particular, we train Transformers to perform multi-step computations by asking them to emit intermediate computation steps into a "scratchpad". We hypothesize that by providing supervision on the intermediate computation steps, the model gains additional learning signal on how to systematically generalize from small computations to larger ones. On a series of increasingly complex tasks ranging from long addition to the execution of arbitrary programs, we show that scratchpads dramatically improve the ability of language models to perform multi-step computations, even when we care only about the final result. Even though the model is required to predict many more tokens, it is still better at predicting the final results, because the individual prediction steps are easier. We believe that this result provides an early indication of the potential power of intermediate computation within language models.
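To make the scratchpad idea concrete, the sketch below builds scratchpad-style training or few-shot examples for long addition, in which the intermediate digit sums and carries are written out before the final answer. This is only an illustration of the general idea described in the abstract: the exact scratchpad format used in the paper may differ, and the helper name addition_scratchpad is ours, not from the paper.

# Minimal sketch (assumed format): a scratchpad transcript for long addition
# that spells out digit sums and carries before emitting the final answer.

def addition_scratchpad(a: int, b: int) -> str:
    """Return a scratchpad-style transcript that adds a and b digit by digit."""
    xs, ys = str(a)[::-1], str(b)[::-1]          # least-significant digit first
    lines = [f"Input: {a} + {b}", "Scratchpad:"]
    carry, digits = 0, []
    for i in range(max(len(xs), len(ys))):
        da = int(xs[i]) if i < len(xs) else 0
        db = int(ys[i]) if i < len(ys) else 0
        carry_in = carry
        total = da + db + carry_in
        carry, digit = divmod(total, 10)
        digits.append(str(digit))
        lines.append(f"  {da} + {db} + {carry_in} (carry) = {total} -> write {digit}, carry {carry}")
    if carry:
        digits.append(str(carry))
    lines.append(f"Answer: {''.join(reversed(digits))}")
    return "\n".join(lines)


if __name__ == "__main__":
    # A few-shot prompt: two worked scratchpad examples, then a new query
    # left open for the model to complete step by step.
    prompt = "\n\n".join([
        addition_scratchpad(128, 367),
        addition_scratchpad(95, 48),
        "Input: 2981 + 519\nScratchpad:",
    ])
    print(prompt)

The point of formatting targets this way is that each emitted line depends only on a small local computation (one digit sum plus a carry), so the model is supervised on easy intermediate steps rather than being asked to produce the final sum in a single prediction.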
