

In-Person Poster Presentation / Poster Accept

Fairness and Accuracy under Domain Generalization

Thai-Hoang Pham · Xueru Zhang · Ping Zhang

MH1-2-3-4 #137

Keywords: [ Social Aspects of Machine Learning ] [ equal opportunity ] [ equalized odds ] [ JS-divergence ] [ regularization ] [ invariant representation ] [ domain generalization ] [ fairness ] [ accuracy ]


Abstract:

As machine learning (ML) algorithms are increasingly used in high-stakes applications, concerns have arisen that they may be biased against certain social groups. Although many approaches have been proposed to make ML models fair, they typically rely on the assumption that the data distributions at training and deployment time are identical. Unfortunately, this assumption is commonly violated in practice, and a model that is fair during training may lead to unexpected outcomes when deployed. Although the problem of designing ML models that are robust to dataset shift has been widely studied, most existing work focuses only on the transfer of accuracy. In this paper, we study the transfer of both fairness and accuracy under domain generalization, where the data at test time may be sampled from never-before-seen domains. We first develop theoretical bounds on the unfairness and expected loss at deployment, and then derive sufficient conditions under which fairness and accuracy can be perfectly transferred via invariant representation learning. Guided by these results, we design a learning algorithm such that fair ML models learned on training data retain high fairness and accuracy when deployment environments change. Experiments on real-world data validate the proposed algorithm.
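To make the abstract's recipe concrete, below is a minimal sketch of fairness-aware invariant representation learning across multiple training domains, combining a task loss, a JS-divergence penalty that aligns representations across domains, and an equalized-odds regularizer (all three ingredients appear in the keywords). This is not the authors' algorithm: the architectures, the softmax-based surrogate for the representation distribution, the pairwise alignment scheme, and the group-rate-gap penalty are illustrative assumptions.

```python
# Hedged sketch: multi-domain training with (i) task loss, (ii) a JS-divergence
# surrogate encouraging domain-invariant representations, and (iii) an
# equalized-odds-style fairness penalty. All modeling choices are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

def js_divergence(p, q, eps=1e-8):
    """Jensen-Shannon divergence between two categorical distributions (rows sum to 1)."""
    m = 0.5 * (p + q)
    kl_pm = (p * (torch.log(p + eps) - torch.log(m + eps))).sum(dim=1)
    kl_qm = (q * (torch.log(q + eps) - torch.log(m + eps))).sum(dim=1)
    return 0.5 * (kl_pm + kl_qm).mean()

def equalized_odds_penalty(logits, y, a):
    """Squared gap in group-conditional positive prediction rates, per true label.
    A smooth surrogate for equalized odds; binary y and sensitive attribute a assumed."""
    p = torch.sigmoid(logits).squeeze(-1)
    penalty = 0.0
    for label in (0, 1):
        mask0 = (y == label) & (a == 0)
        mask1 = (y == label) & (a == 1)
        if mask0.any() and mask1.any():
            penalty = penalty + (p[mask0].mean() - p[mask1].mean()) ** 2
    return penalty

# Hypothetical setup: d-dimensional inputs, binary label, binary sensitive attribute.
d, h = 16, 32
featurizer = nn.Sequential(nn.Linear(d, h), nn.ReLU())
classifier = nn.Linear(h, 1)
opt = torch.optim.Adam(list(featurizer.parameters()) + list(classifier.parameters()), lr=1e-3)
lambda_inv, lambda_fair = 1.0, 1.0  # illustrative regularization weights

def train_step(domains):
    """domains: list of (x, y, a) tensors, one triple per training domain."""
    opt.zero_grad()
    task_loss, fair_loss, feats = 0.0, 0.0, []
    for x, y, a in domains:
        z = featurizer(x)
        logits = classifier(z)
        task_loss = task_loss + F.binary_cross_entropy_with_logits(logits.squeeze(-1), y.float())
        fair_loss = fair_loss + equalized_odds_penalty(logits, y, a)
        # Crude per-domain summary of the representation distribution.
        feats.append(F.softmax(z, dim=1).mean(0, keepdim=True))
    # Align every pair of domain-level representation summaries.
    inv_loss = 0.0
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            inv_loss = inv_loss + js_divergence(feats[i], feats[j])
    loss = task_loss + lambda_inv * inv_loss + lambda_fair * fair_loss
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with two synthetic training domains.
doms = [(torch.randn(64, d), torch.randint(0, 2, (64,)), torch.randint(0, 2, (64,)))
        for _ in range(2)]
print(train_step(doms))
```

The intent of the invariance term mirrors the abstract's sufficient conditions: if the learned representation distribution is (approximately) the same across training domains, the fairness and accuracy achieved there are more likely to carry over to unseen deployment domains.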
