

Poster

One Forward is Enough for Neural Network Training via Likelihood Ratio Method

Jinyang Jiang · Zeliang Zhang · Chenliang Xu · Zhaofei Yu · Yijie Peng

Halle B #117
Fri 10 May 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

While backpropagation (BP) is the mainstream approach for gradient computation in neural network training, its heavy reliance on the chain rule of differentiation constrains the design flexibility of network architectures and training pipelines. We avoid the recursive computation in BP and develop a unified likelihood ratio (ULR) method for gradient estimation with only one forward propagation. Not only can ULR be extended to train a wide variety of neural network architectures, but the computation flow of BP can also be rearranged by ULR for better device adaptation. Moreover, we propose several variance reduction techniques to further accelerate the training process. Our experiments report numerical results across diverse settings, including various neural network training scenarios, computation flow rearrangement, and fine-tuning of pre-trained models. All findings demonstrate that ULR effectively enhances the flexibility of neural network training by permitting localized module training without compromising the global objective, and significantly boosts network robustness.
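To make the forward-only idea concrete, below is a minimal sketch of a likelihood ratio (score-function) gradient estimator in the spirit the abstract describes: Gaussian noise is injected into a layer's pre-activation, and the gradient of the smoothed loss with respect to the weights is estimated from forward evaluations alone. This is not the paper's exact ULR algorithm; the toy model, the names (W, sigma, n_samples), and the antithetic-pair variance reduction are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy setup: a single layer y = tanh(Wx) trained on a squared loss,
# with Gaussian noise added to the pre-activation. No backward pass is used.
rng = np.random.default_rng(0)

def forward_loss(W, x, target, noise):
    """One forward pass with additive Gaussian noise on the pre-activation."""
    z = W @ x + noise          # perturbed pre-activation
    y = np.tanh(z)             # elementwise nonlinearity
    return 0.5 * np.sum((y - target) ** 2)

def lr_gradient(W, x, target, sigma=0.1, n_samples=64):
    """Likelihood ratio (score-function) estimate of d E[loss] / dW.

    With eps ~ N(0, sigma^2 I) added to z = Wx, the score of z with respect to W
    is (eps / sigma^2) x^T, so loss * outer(eps / sigma^2, x) is an unbiased
    estimate of the gradient of the noise-smoothed objective. Antithetic pairs
    (+eps, -eps) are one simple variance reduction choice, used here for illustration.
    """
    grad = np.zeros_like(W)
    for _ in range(n_samples):
        eps = sigma * rng.standard_normal(W.shape[0])
        for e in (eps, -eps):                     # antithetic pair
            loss = forward_loss(W, x, target, e)  # forward evaluation only
            grad += loss * np.outer(e / sigma**2, x)
    return grad / (2 * n_samples)

# Usage: one gradient step driven purely by forward evaluations.
W = 0.1 * rng.standard_normal((3, 5))
x = rng.standard_normal(5)
target = np.zeros(3)
W -= 0.01 * lr_gradient(W, x, target)
```

Because the estimate for each layer depends only on that layer's local noise and the scalar loss, modules can be updated without propagating errors backward through the rest of the network, which is what allows the computation flow to be rearranged across devices.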
