

Poster

Federated Learning Based on Dynamic Regularization

Durmus Alp Emre Acar · Yue Zhao · Ramon Matas · Matthew Mattina · Paul Whatmough · Venkatesh Saligrama

Keywords: [ distributed optimization ] [ deep neural networks ] [ federated learning ]


Abstract:

We propose a novel federated learning method for distributively training neural network models, in which the server orchestrates cooperation among a subset of randomly chosen devices in each round. We view the federated learning problem primarily from a communication perspective and allow more device-level computation to save transmission costs. We point out a fundamental dilemma: the minima of the device-level empirical losses are inconsistent with those of the global empirical loss. Unlike recent prior works, which either attempt inexact minimization or use devices to parallelize gradient computation, we propose a dynamic regularizer for each device at each round, so that in the limit the global and device solutions are aligned. We demonstrate, through both analytical results and empirical results on real and synthetic data, that our scheme leads to efficient training in both convex and non-convex settings, while being fully agnostic to device heterogeneity and robust to a large number of devices, partial participation, and unbalanced data.
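The abstract describes the dynamic-regularization mechanism only at a high level. As an illustration, the following is a minimal NumPy sketch of one way such a per-device dynamic regularizer can be instantiated for a federated least-squares problem: each active device minimizes its local loss plus a linear correction term and a quadratic proximal term toward the server model, then folds the resulting drift back into its correction state, while the server maintains its own correction. The variable names (alpha, grad_state, h_server) and the exact update rules are assumptions made for illustration, not the paper's published algorithm.

```python
# Minimal sketch of dynamic regularization for federated least squares.
# NOT the authors' code; alpha, grad_state, h_server, and the server-side
# correction are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
m, d, n_k = 10, 5, 20                          # devices, dimension, samples per device
A = [rng.normal(size=(n_k, d)) for _ in range(m)]
b = [A_k @ rng.normal(size=d) + 0.1 * rng.normal(size=n_k) for A_k in A]

def local_solve(A_k, b_k, g_k, theta_srv, alpha):
    """Exactly minimize L_k(t) - <g_k, t> + (alpha/2)||t - theta_srv||^2
    for the quadratic local loss L_k(t) = 0.5 * ||A_k t - b_k||^2."""
    H = A_k.T @ A_k + alpha * np.eye(A_k.shape[1])
    return np.linalg.solve(H, A_k.T @ b_k + g_k + alpha * theta_srv)

alpha = 1.0
theta = np.zeros(d)                             # server model
h_server = np.zeros(d)                          # server correction state (assumed form)
grad_state = [np.zeros(d) for _ in range(m)]    # per-device dynamic regularizer state

for rnd in range(300):
    active = rng.choice(m, size=3, replace=False)       # partial participation
    local_models = []
    for k in active:
        theta_k = local_solve(A[k], b[k], grad_state[k], theta, alpha)
        # Fold the drift into the device state so that, at a fixed point,
        # the local first-order condition matches the global one.
        grad_state[k] = grad_state[k] - alpha * (theta_k - theta)
        local_models.append(theta_k)
    # Assumed server-side correction: track accumulated drift and subtract it.
    h_server = h_server - (alpha / m) * sum(t_k - theta for t_k in local_models)
    theta = np.mean(local_models, axis=0) - h_server / alpha

# Compare against the centralized least-squares solution.
A_all, b_all = np.vstack(A), np.concatenate(b)
theta_star = np.linalg.lstsq(A_all, b_all, rcond=None)[0]
print("distance to centralized solution:", np.linalg.norm(theta - theta_star))
```

The key design choice this sketch tries to capture is that the linear correction term changes over rounds: at a fixed point the per-device correction cancels each device's gradient mismatch, so a stationary point of the regularized local objectives is also a stationary point of the global empirical loss.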
