Poster

Tensor Programs VI: Feature Learning in Infinite Depth Neural Networks

Greg Yang · Dingli Yu · Chen Zhu · Soufiane Hayou

Halle B #169
Wed 8 May 1:45 a.m. PDT — 3:45 a.m. PDT

Abstract: Empirical studies have consistently demonstrated that increasing the size of neural networks often yields superior performance in practical applications. However, there is a lack of consensus regarding the appropriate scaling strategy, particularly when it comes to increasing the depth of neural networks. In practice, excessively large depths can lead to model performance degradation. In this paper, we introduce Depth-$\mu$P, a principled approach for depth scaling, allowing for the training of arbitrarily deep architectures while maximizing feature learning and diversity among nearby layers. Our method involves dividing the contribution of each residual block and the parameter update by the square root of the depth. Through the use of Tensor Programs, we rigorously establish the existence of a limit for infinitely deep neural networks under the proposed scaling scheme. This scaling strategy ensures more stable training for deep neural networks and guarantees the transferability of hyperparameters from shallow to deep models. To substantiate the efficacy of our scaling method, we conduct empirical validation on neural networks with depths up to $2^{10}$.
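The abstract's scaling rule can be illustrated concretely: each residual branch's contribution and each block's parameter update are divided by the square root of the depth. Below is a minimal, hedged PyTorch sketch of that rule; the width, base learning rate, choice of SGD, and block architecture are illustrative assumptions, not the paper's exact training setup.

```python
import math
import torch
import torch.nn as nn

# Sketch of the Depth-muP scaling described in the abstract:
# divide each residual block's contribution and its parameter
# update by sqrt(depth). Width, initialization, and optimizer
# constants here are assumptions for illustration only.

class ResidualBlock(nn.Module):
    def __init__(self, width: int):
        super().__init__()
        self.linear = nn.Linear(width, width)

    def forward(self, x):
        return torch.relu(self.linear(x))

class DepthMuPNet(nn.Module):
    def __init__(self, width: int = 256, depth: int = 64):
        super().__init__()
        self.depth = depth
        self.blocks = nn.ModuleList([ResidualBlock(width) for _ in range(depth)])
        self.readout = nn.Linear(width, 1)

    def forward(self, x):
        scale = 1.0 / math.sqrt(self.depth)  # branch contribution divided by sqrt(L)
        for block in self.blocks:
            x = x + scale * block(x)
        return self.readout(x)

model = DepthMuPNet(width=256, depth=64)

# Divide the parameter update of the residual blocks by sqrt(depth)
# by scaling their learning rate; base_lr = 1e-2 is an assumed value.
base_lr = 1e-2
optimizer = torch.optim.SGD(
    [
        {"params": [p for b in model.blocks for p in b.parameters()],
         "lr": base_lr / math.sqrt(model.depth)},
        {"params": list(model.readout.parameters()), "lr": base_lr},
    ]
)
```

Because both the branch multiplier and the block learning rate carry the 1/sqrt(depth) factor, the same base hyperparameters can in principle be reused as `depth` grows, which is the hyperparameter-transfer property the abstract claims.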
