

Poster

ProxSGD: Training Structured Neural Networks under Regularization and Constraints

Yang Yang · Yaxiong Yuan · Avraam Chatzimichailidis · Lei Lei · Symeon Chatzinotas · Ruud Van Sloun


Abstract:

In this paper, we consider the problem of training neural networks (NN). To promote an NN with a specific structure, we explicitly take into account nonsmooth regularization (such as the L1-norm) and constraints (such as interval constraints). This is formulated as a constrained nonsmooth nonconvex optimization problem, for which we propose a convergent proximal-type stochastic gradient descent (Prox-SGD) algorithm. We show that, under properly selected learning rates, the momentum term eventually tracks the unknown true gradient, which is crucial in the convergence analysis. We establish that, with probability 1, every limit point of the sequence generated by the proposed Prox-SGD is a stationary point. Prox-SGD is then tailored to train a sparse neural network and a binary neural network, and the theoretical analysis is supported by extensive numerical tests.
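For intuition, the sketch below shows a generic proximal stochastic-gradient step with momentum applied to an L1-regularized weight vector. It is a minimal illustration of the general technique the abstract describes, not the authors' reference implementation: the momentum rule, step sizes, and regularization weight are illustrative assumptions.

```python
# Minimal sketch (not the authors' reference code): one proximal-SGD step with
# momentum for an L1-regularized weight vector. Step sizes, the momentum rule,
# and the L1 weight are illustrative assumptions, not values from the paper.
import numpy as np

def prox_l1(x, threshold):
    """Proximal operator of threshold * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - threshold, 0.0)

def prox_sgd_step(w, v, stochastic_grad, lr, beta, l1_weight):
    """One update: momentum-averaged gradient, then a proximal step.

    w               -- current weights
    v               -- momentum buffer (running average of stochastic gradients)
    stochastic_grad -- gradient of the loss on the current mini-batch
    lr              -- learning rate (step size)
    beta            -- momentum averaging weight in (0, 1)
    l1_weight       -- coefficient of the L1 regularizer
    """
    # Momentum as a running average of stochastic gradients; under suitable
    # step sizes this average tracks the true (unknown) gradient.
    v = (1.0 - beta) * v + beta * stochastic_grad
    # Gradient step on the smooth part, then the proximal map of the nonsmooth
    # L1 term (soft-thresholding), which drives small weights exactly to zero.
    w = prox_l1(w - lr * v, lr * l1_weight)
    return w, v

# Toy usage: sparsify a random weight vector using noisy gradients of a quadratic loss.
rng = np.random.default_rng(0)
w = rng.normal(size=10)
v = np.zeros_like(w)
for t in range(200):
    grad = w + 0.1 * rng.normal(size=10)   # noisy gradient of ||w||^2 / 2
    w, v = prox_sgd_step(w, v, grad, lr=0.1, beta=0.5, l1_weight=0.05)
print(w)  # many entries are driven exactly to zero by the proximal step
```

Replacing the soft-thresholding operator with a projection onto an interval (clipping) would instead enforce the box constraints mentioned in the abstract, e.g. for training binary or quantized networks.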
