

Poster in Workshop: The 4th Workshop on practical ML for Developing Countries: learning under limited/low resource settings

Model Compression Beyond Size Reduction

Mubarek Mohammed


Abstract:

With the current set-up, the success of Deep Neural Network models is highly tied to their size. Although this property might help them improve their performance, it makes them difficult to train, to deploy on resource-constrained machines, and to iterate on in experiments. There is also a growing concern about their environmental and economic impacts. Model Compression is a set of techniques applied to reduce the size of models without a significant loss in performance. Their use is increasing as models grow over time. However, these techniques alter the behavior of the network beyond reducing its size. This paper aims to draw attention to this matter by highlighting existing work with regard to Explainability, Neural Architecture Search, and Fairness before concluding with suggestions for future research directions.
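As a concrete illustration of the kind of technique the abstract refers to, the sketch below (an illustrative example only, not code from the paper) applies magnitude pruning, a common compression method, to a single weight matrix: the smallest-magnitude entries are zeroed out, shrinking the effective model while leaving most of its capacity intact.

```python
# Illustrative sketch (not from the paper): magnitude pruning of one
# weight matrix, a simple example of a model-compression technique.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of entries with the smallest magnitude."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # Threshold = k-th smallest absolute value across the whole matrix.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))          # stand-in for a layer's weights
W_pruned = magnitude_prune(W, sparsity=0.9)
print(f"remaining nonzero weights: {np.count_nonzero(W_pruned) / W.size:.1%}")
```

The paper's point is that such an operation changes more than the parameter count: the pruned network may differ from the original in explainability, searchability, and fairness properties, even when accuracy is preserved.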
