

Poster

DynaTune: Dynamic Tensor Program Optimization in Deep Neural Network Compilation

Minjia Zhang · Menghao Li · Chi Wang · Mingqin Li

Keywords: [ Bayesian Inference ] [ Code Compilation ] [ Scalability ] [ Efficient Deep Learning Inference ]


Abstract:

Recently, DL compilers, together with Learning to Compile approaches, have proven to be a powerful technique for optimizing deep learning models. However, existing methods focus on accelerating the convergence speed of individual tensor operators rather than the convergence speed of the entire model, which results in long optimization times to reach a desired latency.

In this paper, we present a new method called DynaTune, which offers significantly faster convergence when optimizing a DNN model. In particular, we formulate the tensor program optimization problem as a Multi-Armed Bandit (MAB) problem. We use the Upper Confidence Bound (UCB) algorithm to handle the decision-making of time-slot-based optimization, and we devise a Bayesian belief model that predicts the potential performance gain of each operator with uncertainty quantification, which guides the optimization process. We evaluate and compare DynaTune with a state-of-the-art DL compiler. The experimental results show that DynaTune is 1.2 to 2.4 times faster at reaching the same optimization quality for a range of models across different hardware architectures.
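To make the time-slot-based MAB formulation concrete, the Python sketch below illustrates the general idea of UCB-style slot allocation across tensor operators: at each time slot, a scheduler picks one operator to tune and treats the observed latency improvement as the reward. This is an illustrative sketch only, not the paper's implementation; the class, reward model, and tuning stub are hypothetical, and a plain UCB1 running mean stands in for the paper's Bayesian belief model over potential gains.

import math
import random

class UCBOperatorScheduler:
    """UCB1-style scheduler that decides, per time slot, which operator to tune."""

    def __init__(self, num_operators, exploration_weight=2.0):
        self.n = num_operators
        self.c = exploration_weight
        self.counts = [0] * num_operators           # times each operator was tuned
        self.mean_rewards = [0.0] * num_operators   # running mean latency improvement

    def select(self, t):
        # Tune every operator at least once before applying the UCB rule.
        for i in range(self.n):
            if self.counts[i] == 0:
                return i
        # UCB1: mean reward plus an exploration bonus for rarely tuned operators.
        def ucb(i):
            bonus = math.sqrt(self.c * math.log(t) / self.counts[i])
            return self.mean_rewards[i] + bonus
        return max(range(self.n), key=ucb)

    def update(self, i, reward):
        self.counts[i] += 1
        self.mean_rewards[i] += (reward - self.mean_rewards[i]) / self.counts[i]


def tune_operator(op_index):
    # Placeholder for one time slot of tuning a single operator; a real compiler
    # would run a search step here and report the measured latency improvement.
    return max(0.0, random.gauss(1.0 / (op_index + 1), 0.3))


if __name__ == "__main__":
    scheduler = UCBOperatorScheduler(num_operators=5)
    for t in range(1, 101):                 # 100 time slots of model-level tuning
        op = scheduler.select(t)
        improvement = tune_operator(op)     # simulated latency gain for this slot
        scheduler.update(op, improvement)
    print("slots per operator:", scheduler.counts)

The point of the sketch is the scheduling logic: operators whose tuning keeps yielding large latency gains receive more slots, which is how a model-level scheduler can converge faster than tuning each operator to completion in isolation.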
