In-Person Poster presentation / top 25% paper

Depth Separation with Multilayer Mean-Field Networks

Yunwei Ren · Mo Zhou · Rong Ge

MH1-2-3-4 #134

Keywords: [ Theory ] [ mean-field ] [ depth separation ] [ nonconvex optimization ]


Abstract:

Depth separation, the question of why a deeper network is more powerful than a shallow one, has been a major problem in deep learning theory. Previous results often focus on representation power; for example, Safran et al. (2019) constructed a function that is easy to approximate using a 3-layer network but not approximable by any 2-layer network. In this paper, we show that this separation is in fact algorithmic: one can efficiently learn the function constructed by Safran et al. (2019) using an overparametrized network with polynomially many neurons. Our result relies on a new way of extending the mean-field limit to multilayer networks, and a decomposition of the loss that factors out the error introduced by the discretization of infinite-width mean-field networks.
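For readers unfamiliar with the mean-field view, the LaTeX sketch below illustrates the general idea behind the abstract: an infinite-width (mean-field) network defined by an expectation over a neuron distribution, its finite-width discretization, and the kind of loss decomposition that factors out the discretization error. The parameterization, the distributions $\mu$ and $\rho$, and the activation $\sigma$ are illustrative assumptions for a standard two- and three-layer mean-field formulation, not the specific construction developed in the paper.

    \documentclass{article}
    \usepackage{amsmath,amssymb}
    \begin{document}
    % A mean-field two-layer network averages over a distribution \mu of neurons.
    \[
      f_\mu(x) \;=\; \mathbb{E}_{(a,w)\sim\mu}\bigl[\, a\,\sigma(w^\top x)\,\bigr].
    \]
    % One common three-layer analogue lets each outer neuron carry its own
    % inner neuron distribution \rho (again, an assumed parameterization).
    \[
      f_\mu(x) \;=\; \mathbb{E}_{(a,\rho)\sim\mu}
        \Bigl[\, a\,\sigma\bigl(\mathbb{E}_{(b,w)\sim\rho}[\, b\,\sigma(w^\top x)\,]\bigr)\Bigr].
    \]
    % Sampling m outer and n inner neurons gives a finite-width discretization.
    \[
      \hat{f}(x) \;=\; \frac{1}{m}\sum_{i=1}^{m} a_i\,
        \sigma\!\Bigl(\frac{1}{n}\sum_{j=1}^{n} b_{ij}\,\sigma(w_{ij}^\top x)\Bigr).
    \]
    % The loss of the finite network can then be bounded by the loss of the
    % mean-field network plus a discretization term.
    \[
      L(\hat{f}) \;\le\; L(f_\mu)
        \;+\; \underbrace{\bigl|L(\hat{f}) - L(f_\mu)\bigr|}_{\text{discretization error}}.
    \]
    \end{document}

The paper's contribution is to control both terms for a multilayer network trained on the Safran et al. (2019) target; the sketch above only fixes notation for what "discretization of infinite-width mean-field networks" means.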
