

In-Person Poster Presentation (Poster Accept)

Deep Transformers without Shortcuts: Modifying Self-attention for Faithful Signal Propagation

Bobby He · James Martens · Guodong Zhang · Aleksandar Botev · Andrew Brock · Samuel L Smith · Yee Whye Teh

MH1-2-3-4 #80

Keywords: Deep Learning and representational learning · self-attention · signal propagation · rank collapse · layer normalisation · residual connections · deep transformers · neural networks and kernels · positional encoding


Abstract:

Skip connections and normalisation layers form two standard architectural components that are ubiquitous for the training of Deep Neural Networks (DNNs), but whose precise roles are poorly understood. Recent approaches such as Deep Kernel Shaping have made progress towards reducing our reliance on them, using insights from wide NN kernel theory to improve signal propagation in vanilla DNNs (which we define as networks without skips or normalisation). However, these approaches are incompatible with the self-attention layers present in transformers, whose kernels are intrinsically more complicated to analyse and control. And so the question remains: \emph{is it possible to train deep vanilla transformers?} We answer this question in the affirmative by designing several approaches that use combinations of parameter initialisations, bias matrices and location-dependent rescaling to achieve faithful signal propagation in vanilla transformers. Our methods address various intricacies specific to signal propagation in transformers, including the interaction with positional encoding and causal masking. In experiments on WikiText-103 and C4, our approaches enable deep transformers without normalisation to train at speeds matching their standard counterparts, and deep vanilla transformers to reach the same performance as standard ones after about 5 times more iterations.
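To make the high-level description concrete, the sketch below illustrates the general flavour of the idea in plain NumPy: a single-head causal self-attention layer, used without skip connections or normalisation, whose post-softmax attention matrix is mixed with the identity and whose logits receive a location-dependent bias. The mixing weights `alpha`/`beta`, the distance-based bias `B`, and the specific values are hypothetical choices for illustration only, not the paper's exact construction.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def modified_causal_attention(X, Wq, Wk, Wv, alpha=0.9, beta=0.1):
    """Single-head causal self-attention with an identity-mixed attention matrix.

    X: (seq_len, d_model) input activations.
    Returns: (seq_len, d_model) attention output (no skip connection, no LayerNorm).
    """
    T, d = X.shape
    Q, K, V = X @ Wq, X @ Wk, X @ Wv

    # Causal mask: position i may only attend to positions j <= i.
    mask = np.tril(np.ones((T, T), dtype=bool))

    # Hypothetical location-dependent bias on the logits (here: decay with distance).
    dist = np.abs(np.arange(T)[:, None] - np.arange(T)[None, :])
    B = -0.1 * dist

    logits = (Q @ K.T) / np.sqrt(d) + B
    logits = np.where(mask, logits, -np.inf)
    A = softmax(logits, axis=-1)

    # Mix the attention matrix with the identity so that, at initialisation,
    # the layer stays close to an identity map and signal does not collapse
    # with depth even though there is no residual branch.
    A_mod = alpha * np.eye(T) + beta * A
    return A_mod @ V

# Tiny usage example with random weights.
rng = np.random.default_rng(0)
d_model = 16
X = rng.standard_normal((8, d_model))
Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) / np.sqrt(d_model) for _ in range(3))
Y = modified_causal_attention(X, Wq, Wk, Wv)
print(Y.shape)  # (8, 16)
```

The point of the sketch is only to show where such modifications enter a vanilla attention layer (initialisation-time identity mixing, a bias matrix on the logits, interaction with the causal mask); the paper's actual schemes and parameter choices are given in the full text.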
