

Poster

A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks

Sanjeev Arora · Nadav Cohen · Noah Golowich · Wei Hu

Great Hall BC #75

Keywords: [ non-convex optimization ] [ learning theory ] [ deep learning ]


Abstract:

We analyze the speed of convergence to global optimum for gradient descent training a deep linear neural network by minimizing the L2 loss over whitened data. Convergence at a linear rate is guaranteed when the following hold: (i) the dimensions of the hidden layers are at least the minimum of the input and output dimensions; (ii) the weight matrices at initialization are approximately balanced; and (iii) the initial loss is smaller than the loss of any rank-deficient solution. The assumptions on initialization (conditions (ii) and (iii)) are necessary, in the sense that violating either one of them may lead to convergence failure. Moreover, in the important case of output dimension 1, i.e., scalar regression, they are met, and thus convergence to global optimum holds, with constant probability under a random initialization scheme. Our results significantly extend previous analyses, e.g., of deep linear residual networks (Bartlett et al., 2018).
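To make the setting concrete, here is a minimal sketch (not the authors' code) of gradient descent on a depth-N linear network for scalar regression, written in NumPy. Over whitened data the L2 loss reduces to 0.5*||W_N···W_1 − Φ||² for a target matrix Φ. The SVD-based factorization below yields an exactly balanced initialization (condition (ii)), and resampling until the initial loss falls below that of the zero (rank-deficient) map enforces condition (iii), which per the abstract holds with constant probability when the output dimension is 1. All sizes, the initialization scale, and the step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem instance: scalar regression, so output dimension 1.
d_in, d_h, d_out, N = 10, 10, 1, 3        # N = depth; d_h >= min(d_in, d_out) (condition (i))
Phi = rng.standard_normal((d_out, d_in))  # target end-to-end matrix

def product(Ws):
    P = Ws[0]
    for W in Ws[1:]:
        P = W @ P
    return P

def loss(Ws):
    # With whitened data the L2 loss equals 0.5*||W_N...W_1 - Phi||_F^2 (up to a constant).
    return 0.5 * np.linalg.norm(product(Ws) - Phi) ** 2

def balanced_init(A, N, d_h):
    """Factor A into N layers satisfying W_{j+1}^T W_{j+1} = W_j W_j^T exactly."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r, f = len(s), np.diag(s ** (1.0 / N))           # each layer carries s^{1/N}
    W1 = np.zeros((d_h, A.shape[1])); W1[:r] = f @ Vt
    mids = []
    for _ in range(N - 2):
        M = np.zeros((d_h, d_h)); M[:r, :r] = f
        mids.append(M)
    WN = np.zeros((A.shape[0], d_h)); WN[:, :r] = U @ f
    return [W1] + mids + [WN]

# Condition (iii): initial loss below that of any rank-deficient solution.
# For d_out = 1 the best rank-deficient map is 0, with loss 0.5*||Phi||^2; a small
# random init satisfies this with constant probability, so we resample until it does.
deficient_loss = 0.5 * np.linalg.norm(Phi) ** 2
while True:
    A0 = 0.1 * rng.standard_normal((d_out, d_in))
    Ws = balanced_init(A0, N, d_h)
    if loss(Ws) < deficient_loss:
        break

# Plain gradient descent on the layer weights.
eta = 0.02
for step in range(3001):
    E = product(Ws) - Phi                            # dL/dP for P = W_N...W_1
    new = []
    for j in range(N):
        # dL/dW_j = (W_N...W_{j+1})^T E (W_{j-1}...W_1)^T
        before = product(Ws[:j]) if j > 0 else np.eye(d_in)
        after = product(Ws[j + 1:]) if j < N - 1 else np.eye(d_out)
        new.append(Ws[j] - eta * after.T @ E @ before.T)
    Ws = new
    if step % 500 == 0:
        print(f"step {step:5d}  loss {loss(Ws):.3e}")
```

Under these conditions the printed losses should decay geometrically, i.e., at the linear rate the result guarantees; breaking balancedness (e.g., scaling one layer up and another down) or starting above the rank-deficient loss can stall convergence.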
