

Poster

On the Variance of the Adaptive Learning Rate and Beyond

Liyuan Liu · Jianfeng Gao · Xz W · Weizhu Chen · Xiaodong Liu · Haoming Jiang · Heng Ji


Abstract:

The learning rate warmup heuristic achieves remarkable success in stabilizing training, accelerating convergence, and improving generalization for adaptive stochastic optimization algorithms such as RMSprop and Adam. Pursuing the theory behind warmup, we identify a problem with the adaptive learning rate: its variance is problematically large in the early stage of training, and we presume that warmup works as a variance reduction technique. We provide both empirical and theoretical evidence to verify this hypothesis. We further propose Rectified Adam (RAdam), a variant of Adam that introduces a term to rectify the variance of the adaptive learning rate. Experimental results on image classification, language modeling, and neural machine translation verify our intuition and demonstrate the efficacy and robustness of RAdam.
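The abstract does not spell out the rectification term itself, so the sketch below is only an illustration of the idea, following the update rule published in the RAdam paper: the quantities `rho_inf`, `rho_t`, and `r_t` and the `rho_t > 4` threshold come from that algorithm, while the `radam_step` helper and the toy quadratic objective are our own assumptions for demonstration, not the authors' reference implementation.

```python
import numpy as np

def radam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One RAdam-style update (illustrative sketch, not the reference code)."""
    # Exponential moving averages of the gradient and squared gradient, as in Adam.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    m_hat = m / (1 - beta1**t)  # bias-corrected first moment

    # Approximated length of the simple moving average behind the adaptive term.
    rho_inf = 2.0 / (1.0 - beta2) - 1.0
    rho_t = rho_inf - 2.0 * t * beta2**t / (1.0 - beta2**t)

    if rho_t > 4.0:
        # Variance of the adaptive learning rate is tractable here:
        # apply the rectification term r_t on top of the usual adaptive step.
        v_hat = np.sqrt(v / (1 - beta2**t)) + eps
        r_t = np.sqrt(((rho_t - 4) * (rho_t - 2) * rho_inf) /
                      ((rho_inf - 4) * (rho_inf - 2) * rho_t))
        theta = theta - lr * r_t * m_hat / v_hat
    else:
        # Early steps: the adaptive term's variance is too large, so fall back
        # to a momentum-only (SGD-like) update instead of warming up the learning rate.
        theta = theta - lr * m_hat
    return theta, m, v

# Toy usage on f(theta) = ||theta||^2 / 2, whose gradient is theta itself.
theta = np.ones(3)
m, v = np.zeros_like(theta), np.zeros_like(theta)
for t in range(1, 201):
    theta, m, v = radam_step(theta, theta, m, v, t, lr=0.1)
print(theta)  # theta should have shrunk toward zero
```

Note how the first few iterations take the momentum-only branch, which plays the role that a hand-tuned warmup schedule would otherwise play for plain Adam.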
