

Invited Talk
Workshop on Distributed and Private Machine Learning

Biased Client Selection for Improved Convergence of Federated Learning

Gauri Joshi


Abstract:

Federated learning is a distributed optimization paradigm that enables a large number of resource-limited client nodes to cooperatively train a model without data sharing. Previous works have analyzed the convergence of federated learning by assuming unbiased client participation, where clients are selected such that the aggregated model update is unbiased. In this paper, we present the first convergence analysis of federated optimization for biased client selection and quantify how the selection skew affects convergence speed. We discover from the convergence analysis that biasing client selection towards clients with higher local loss yields faster error convergence. Using this insight, we propose the power-of-choice client selection framework that can flexibly span the trade-off between convergence speed and solution bias. Extensive experiments demonstrate that power-of-choice strategies can converge up to 3x faster and give 10% higher test accuracy than the baseline random selection.
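For illustration, below is a minimal Python sketch of a power-of-choice-style selection rule, following the abstract's description of biasing selection toward clients with higher local loss. The function name, parameter names, the candidate-set size d, and the data-size-proportional candidate sampling are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def power_of_choice_selection(client_losses, client_weights, m, d, rng=None):
    """Select m clients per round, biased toward high local loss.

    Sketch of a power-of-choice-style rule: sample a candidate set of
    d >= m clients (here, proportionally to their data fractions), then
    keep the m candidates with the highest current local loss. Larger d
    increases the selection skew toward high-loss clients.
    """
    rng = rng or np.random.default_rng()
    n = len(client_losses)
    probs = np.asarray(client_weights, dtype=float)
    probs /= probs.sum()
    # Step 1: draw a candidate set of d clients (without replacement),
    # with probability proportional to each client's data fraction.
    candidates = rng.choice(n, size=d, replace=False, p=probs)
    # Step 2: keep the m candidates with the largest local loss.
    losses = np.asarray(client_losses)[candidates]
    return candidates[np.argsort(losses)[-m:]]

# Example: 100 clients, pick m=10 per round from d=30 candidates.
rng = np.random.default_rng(0)
losses = rng.exponential(1.0, size=100)    # stand-in local losses
weights = rng.integers(50, 500, size=100)  # stand-in dataset sizes
print(power_of_choice_selection(losses, weights, m=10, d=30, rng=rng))
```

In this sketch, d = m reduces to plain weighted sampling with no loss-based bias, while larger d sharpens the bias toward high-loss clients, mirroring the trade-off between convergence speed and solution bias that the abstract describes.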

Bio: Gauri Joshi has been an assistant professor in the ECE department at Carnegie Mellon University since September 2017. Previously, she worked as a Research Staff Member at the IBM T. J. Watson Research Center. Gauri completed her Ph.D. at MIT EECS in June 2016, advised by Prof. Gregory Wornell. She received her B.Tech and M.Tech in Electrical Engineering from the Indian Institute of Technology (IIT) Bombay in 2010. Her awards and honors include the NSF CAREER Award (2021), ACM Sigmetrics Best Paper Award (2020), NSF CRII Award (2018), IBM Faculty Research Award (2017), Best Thesis Prize in Computer Science at MIT (2012), and the Institute Gold Medal of IIT Bombay (2010).