In-Person Poster presentation / poster accept

Bias Propagation in Federated Learning

Hongyan Chang · Reza Shokri

MH1-2-3-4 #72

Keywords: [ Algorithmic Bias ] [ Fairness ] [ Federated Learning ]


Abstract:

We show that participating in federated learning can be detrimental to group fairness. In fact, the bias of a few parties against under-represented groups (identified by sensitive attributes such as gender or race) can propagate through the network to all parties. We analyze and explain bias propagation in federated learning on naturally partitioned real-world datasets. Our analysis reveals that biased parties unintentionally yet stealthily encode their bias in a small number of model parameters and, throughout training, steadily increase the global model's dependence on sensitive attributes. Notably, the bias that parties experience in federated learning is higher than what they would encounter in centralized training on the union of all their data, which indicates that the bias is introduced by the algorithm itself. Our work calls for auditing group fairness in federated learning and for designing learning algorithms that are robust to bias propagation.
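The mechanism the abstract describes, in which standard federated averaging lets one party's bias surface in the global model that every party then receives, can be illustrated with a small simulation. The sketch below is illustrative only and is not the authors' experimental setup: the synthetic data, the logistic-regression model, the demographic parity gap as the audit metric, and all names (local_update, fedavg, demographic_parity_gap, make_party) are assumptions introduced here for the example.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One party's local logistic-regression training (plain gradient descent)."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))         # predicted probabilities
        w = w - lr * X.T @ (p - y) / len(y)      # gradient step on log-loss
    return w

def fedavg(local_weights, sizes):
    """Server-side FedAvg: average party models weighted by local data size."""
    sizes = np.asarray(sizes, dtype=float)
    return np.average(local_weights, axis=0, weights=sizes / sizes.sum())

def demographic_parity_gap(w, X, s):
    """Audit metric: |P(yhat=1 | s=1) - P(yhat=1 | s=0)| for sensitive attribute s."""
    yhat = (X @ w > 0).astype(int)
    return abs(yhat[s == 1].mean() - yhat[s == 0].mean())

rng = np.random.default_rng(0)
d, n = 5, 2000

def make_party(bias):
    """Hypothetical party data; `bias` couples the label to the sensitive attribute."""
    X = rng.normal(size=(n, d))
    s = (rng.random(n) < 0.5).astype(int)        # sensitive attribute
    logits = X[:, 0] + bias * (2 * s - 1)        # biased label-generating process
    y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)
    X = np.column_stack([X, s])                  # attribute visible as a feature
    return X, y, s

# Party 0 holds biased data; party 1 holds unbiased data.
parties = [make_party(bias=2.0), make_party(bias=0.0)]

w_global = np.zeros(d + 1)
for rnd in range(20):                            # federated training rounds
    locals_ = [local_update(w_global.copy(), X, y) for X, y, _ in parties]
    w_global = fedavg(locals_, [len(y) for _, y, _ in parties])

# Each party audits the *global* model on its own data: the unbiased party
# observes a parity gap it would not have if it trained on its data alone.
for i, (X, y, s) in enumerate(parties):
    print(f"party {i}: demographic parity gap = "
          f"{demographic_parity_gap(w_global, X, s):.3f}")
```

In this toy setting the gap that party 1 measures on the shared global model is the propagated bias: it originates entirely in party 0's data but is carried into every party's model through the aggregation step.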
