Poster in Workshop: Machine Learning for IoT: Datasets, Perception, and Understanding

A NEW FRAMEWORK FOR TRAINING IN-NETWORK LEARNING MODELS OVER DISCRETE CHANNELS

Abdellatif Zaidi · Matei Moldoveanu · Abderrezak Rachedi


Abstract:

In-network learning (INL) has emerged as a new paradigm in machine learning (ML) that allows multiple nodes to train a joint ML model without sharing their raw data. In INL, the nodes jointly construct a hyper-ML model formed of ML sub-models, one located at each node. These sub-models are trained jointly, without sharing the raw data, using a distributed version of the classical backpropagation technique. A disadvantage of such backpropagation techniques is that when the communication between nodes takes place over discrete channels, the quantisation step has zero gradient almost everywhere, so the parameters of the sub-models are not updated. In this paper, we present a new framework for training INL models over discrete channels. The framework builds on the straight-through gradient estimator, adapting the quantisation points, or codebooks, to the optimisation problem at hand while also compensating for the error introduced by the gradient estimation. Our experiments show that the proposed framework achieves performance similar to that of models trained over continuous channels, while significantly reducing the amount of data communicated between nodes.
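The abstract does not include an implementation, but the core mechanism it names, a straight-through gradient estimator paired with a learnable codebook, can be sketched as follows. This is a hypothetical PyTorch illustration, not the authors' code: the `StraightThroughQuantizer` class and the VQ-VAE-style auxiliary losses used here to adapt the codebook are assumptions standing in for whatever codebook-adaptation and error-compensation scheme the paper actually proposes.

```python
import torch
import torch.nn as nn

class StraightThroughQuantizer(nn.Module):
    """Quantises activations to the nearest entry of a learnable codebook.

    Hypothetical sketch: the forward pass outputs discrete codebook entries
    (what would be sent over the discrete channel), while the backward pass
    uses the straight-through estimator, treating quantisation as identity.
    """

    def __init__(self, codebook_size: int, dim: int):
        super().__init__()
        # Learnable quantisation points ("codebook"), adapted during training.
        self.codebook = nn.Parameter(torch.randn(codebook_size, dim))

    def forward(self, z: torch.Tensor):
        # z: (batch, dim). Pick the nearest codebook entry for each row.
        dists = torch.cdist(z, self.codebook)   # (batch, codebook_size)
        idx = dists.argmin(dim=1)               # discrete symbols for the channel
        z_q = self.codebook[idx]                # (batch, dim)
        # Straight-through estimator: forward value is z_q, but the task
        # gradient flows to the encoder as if quantisation were the identity.
        z_st = z + (z_q - z).detach()
        # VQ-VAE-style auxiliary losses (an assumption here) pull the codebook
        # toward the encoder outputs and vice versa, so the quantisation
        # points adapt to the optimisation problem at hand.
        codebook_loss = (z_q - z.detach()).pow(2).mean()
        commit_loss = (z_q.detach() - z).pow(2).mean()
        return z_st, codebook_loss + 0.25 * commit_loss

# Toy usage: a quantised "channel" between an encoder node and a decoder node.
enc, dec = nn.Linear(8, 4), nn.Linear(4, 2)
quant = StraightThroughQuantizer(codebook_size=16, dim=4)
opt = torch.optim.Adam(
    [*enc.parameters(), *dec.parameters(), *quant.parameters()], lr=1e-3
)

x, y = torch.randn(32, 8), torch.randn(32, 2)
z_st, aux = quant(enc(x))
loss = (dec(z_st) - y).pow(2).mean() + aux
loss.backward()  # gradients reach enc through the straight-through pass
opt.step()
```

Because the straight-through pass ignores the quantisation error in the backward direction, a practical scheme would also need the error-compensation term the abstract mentions; the auxiliary losses above merely keep the codebook trainable despite the `detach()`.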
