

In-Person Poster presentation / poster accept

Learning to Linearize Deep Neural Networks for Secure and Efficient Private Inference

Souvik Kundu · Shunlin Lu · Yuke Zhang · Jacqueline Liu · Peter Beerel

MH1-2-3-4 #95

Keywords: [ Social Aspects of Machine Learning ] [ machine learning as a service ] [ efficient private inference ] [ cryptographic inference ] [ automated ReLU reduction ] [ efficient cryptographic inference ]


Abstract:

The large number of ReLU non-linearity operations in existing deep neural networks makes them ill-suited for latency-efficient private inference (PI). Existing techniques to reduce ReLU operations often require manual effort and sacrifice significant accuracy. In this paper, we first present a novel measure of a non-linearity layer's ReLU sensitivity, which removes the time-consuming manual effort of identifying sensitive layers. Based on this sensitivity, we then present SENet, a three-stage training method that, for a given ReLU budget, automatically assigns per-layer ReLU counts, selects the ReLU locations within each layer's activation map, and trains a model with significantly fewer ReLUs, potentially yielding latency- and communication-efficient PI. Experimental evaluations with multiple models on various datasets show SENet's superior performance in terms of both reduced ReLU count and improved classification accuracy compared to existing alternatives. In particular, SENet can yield models that require up to ∼2× fewer ReLUs while maintaining similar accuracy, and for a similar ReLU budget it can improve classification accuracy by ∼2.32%, evaluated on CIFAR-100.
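
The abstract only outlines the method, but its first two ingredients, a per-layer ReLU sensitivity measure and a budget-driven per-layer ReLU allocation, can be illustrated concretely. Below is a minimal PyTorch sketch assuming sensitivity is estimated as the validation-metric drop when a single ReLU layer is linearized (replaced by the identity) and the global ReLU budget is then split in proportion to those scores; the function names (relu_sensitivity, allocate_relus) and the proportional split are illustrative assumptions, not the paper's exact SENet procedure.

import copy
import torch
import torch.nn as nn

def replace_module(model, name, new_module):
    # Replace the submodule at dotted path `name` in-place.
    parent = model
    *path, leaf = name.split(".")
    for p in path:
        parent = getattr(parent, p)
    setattr(parent, leaf, new_module)

@torch.no_grad()
def relu_sensitivity(model, relu_names, eval_fn):
    # Assumed sensitivity probe: score each ReLU layer by the metric drop
    # when it alone is linearized; a larger drop means a more sensitive layer.
    baseline = eval_fn(model)
    scores = {}
    for name in relu_names:
        probe = copy.deepcopy(model)
        replace_module(probe, name, nn.Identity())
        scores[name] = baseline - eval_fn(probe)
    return scores

def allocate_relus(scores, budget):
    # Assumed allocation rule: split a global ReLU budget across layers
    # in proportion to their (non-negative) sensitivity scores.
    total = sum(max(s, 0.0) for s in scores.values())
    if total == 0:
        return {n: budget // len(scores) for n in scores}
    return {n: round(budget * max(s, 0.0) / total) for n, s in scores.items()}

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(
        nn.Linear(32, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 10),
    )
    x, y = torch.randn(256, 32), torch.randint(0, 10, (256,))

    def eval_fn(m):
        # Stand-in validation metric: accuracy on a fixed random batch
        # (a real use would run an actual validation loop here).
        return (m(x).argmax(dim=1) == y).float().mean().item()

    relu_names = [n for n, m in model.named_modules() if isinstance(m, nn.ReLU)]
    scores = relu_sensitivity(model, relu_names, eval_fn)
    print(allocate_relus(scores, budget=1000))

Per the abstract, SENet's remaining stages then decide which positions in each layer's activation map keep their ReLUs under the per-layer counts and train the resulting partially linearized model; the sketch stops at the allocation step.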
