Poster

Combining Axes Preconditioners through Kronecker Approximation for Deep Learning

Venkata Sai Surya Subramanyam Duvvuri · Fnu Devvrit · Rohan Anil · Cho-Jui Hsieh · Inderjit Dhillon

Halle B #164
Tue 7 May 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Adaptive regularization-based optimization methods such as full-matrix Adagrad, which use gradient second-moment information, hold significant potential for fast convergence in deep neural network (DNN) training, but are memory-intensive and computationally demanding for large neural nets. We develop a technique called Combining AxeS PReconditioners (CASPR), which optimizes matrix-shaped DNN parameters by finding a separate preconditioner for each mode/axis of the parameter and combining them using a Kronecker-sum-based approximation. We show tighter convergence guarantees in stochastic optimization compared to a Kronecker-product-based preconditioner, Shampoo, which arises as a special case of CASPR. Furthermore, our experiments demonstrate that CASPR approximates the gradient second-moment matrix of full-matrix Adagrad more accurately, and shows significant improvements in training and generalization performance compared to existing practical adaptive regularization-based methods such as Shampoo and Adam across a variety of tasks, including a graph neural network on OGBG-molpcba, a Transformer on a universal dependencies dataset, and auto-regressive large language modeling on the C4 dataset.
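To make the idea concrete, below is a minimal NumPy sketch of a single preconditioned step for one matrix-shaped parameter. It accumulates per-axis second-moment statistics (as in Shampoo) and then combines the two axis preconditioners through a Kronecker-sum-style expansion instead of a Kronecker product. The statistic names, exponents, and combination weights here are illustrative assumptions for exposition, not the paper's exact, tuned algorithm.

```python
import numpy as np

def matrix_power(mat, power, eps=1e-6):
    """Symmetric (PSD) matrix power via eigendecomposition, damped for stability."""
    w, v = np.linalg.eigh(mat + eps * np.eye(mat.shape[0]))
    return (v * np.maximum(w, eps) ** power) @ v.T

def kronecker_sum_update(G, L, R, lr=1e-3):
    """One preconditioned step for a matrix parameter W of shape (m, n).

    L and R are running per-axis statistics (L accumulates G @ G.T,
    R accumulates G.T @ G).  The gradient is preconditioned by a
    Kronecker-sum-style combination of the two axis preconditioners:
    (L^{-1/4} (x) I + I (x) R^{-1/4})^2 applied to vec(G), reshaped back.
    A pure Kronecker product of the axis preconditioners would instead
    give the Shampoo-style update L^{-1/4} @ G @ R^{-1/4}.
    """
    L_q = matrix_power(L, -0.25)   # left-axis (row-space) preconditioner
    R_q = matrix_power(R, -0.25)   # right-axis (column-space) preconditioner
    precond_G = L_q @ L_q @ G + 2.0 * (L_q @ G @ R_q) + G @ R_q @ R_q
    return -lr * precond_G

# Toy usage: accumulate statistics from one gradient and take one step.
m, n = 8, 4
W = np.random.randn(m, n)
L, R = np.zeros((m, m)), np.zeros((n, n))
G = np.random.randn(m, n)   # stand-in for a minibatch gradient of W
L += G @ G.T                # left (row-space) second-moment statistic
R += G.T @ G                # right (column-space) second-moment statistic
W += kronecker_sum_update(G, L, R)
```

The per-axis statistics are only m x m and n x n, so this retains the memory footprint of Shampoo-style methods while the combination step changes only how the two axis preconditioners are applied to the gradient.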
