

In-Person Poster presentation / poster accept

A view of mini-batch SGD via generating functions: conditions of convergence, phase transitions, benefit from negative momenta.

Maksim Velikanov · Denis Kuznedelev · Dmitry Yarotsky

MH1-2-3-4 #137

Keywords: [ Theory ] [ NTK ] [ analytic framework ] [ linear models ] [ optimization ] [ SGD ]


Abstract:

Mini-batch SGD with momentum is a fundamental algorithm for learning large predictive models. In this paper we develop a new analytic framework to analyze noise-averaged properties of mini-batch SGD for linear models at constant learning rates, momenta, and batch sizes. Our key idea is to consider the dynamics of the second moments of model parameters for a special family of "Spectrally Expressible" approximations. This allows us to obtain an explicit expression for the generating function of the sequence of loss values. By analyzing this generating function, we find, in particular, that 1) the SGD dynamics exhibits several convergent and divergent regimes depending on the spectral distributions of the problem; 2) the convergent regimes admit explicit stability conditions, and explicit loss asymptotics in the case of power-law spectral distributions; and 3) the optimal convergence rate can be achieved at negative momenta. We verify our theoretical predictions in extensive experiments on MNIST and synthetic problems, and find good quantitative agreement.
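To make the setting concrete, here is a minimal sketch (not the paper's analytic framework) of the algorithm being analyzed: mini-batch SGD with heavy-ball momentum on a synthetic linear least-squares problem, run at constant learning rate, momentum, and batch size. The hyperparameter values are arbitrary illustrations; note that the momentum coefficient is allowed to be negative, as in the paper's third finding.

```python
# Illustrative sketch only: constant-hyperparameter mini-batch SGD with
# heavy-ball momentum on a linear model. All values below are assumptions
# chosen for demonstration, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

n, d = 1000, 50                      # dataset size, parameter dimension
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true

def sgd_momentum(lr=0.01, beta=0.0, batch=32, steps=2000):
    """Mini-batch SGD with momentum beta (possibly negative)."""
    w = np.zeros(d)
    v = np.zeros(d)                  # velocity (momentum buffer)
    losses = []
    for _ in range(steps):
        idx = rng.choice(n, size=batch, replace=False)
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch
        v = beta * v - lr * grad     # heavy-ball update
        w = w + v
        losses.append(0.5 * np.mean((X @ w - y) ** 2))
    return np.array(losses)

# Compare positive, zero, and negative momentum at the same learning rate.
for beta in (0.5, 0.0, -0.3):
    print(f"beta={beta:+.1f}  final loss={sgd_momentum(beta=beta)[-1]:.3e}")
```

The paper studies the noise-averaged loss sequence produced by iterations like these through its generating function; the sketch above only reproduces the single-run dynamics that the framework averages over.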
