

Virtual presentation / poster accept

Malign Overfitting: Interpolation and Invariance are Fundamentally at Odds

Yoav Wald · Gal Yona · Uri Shalit · Yair Carmon

Keywords: [ Theory ] [ benign overfitting ] [ invariance ] [ overparameterization ] [ robustness ] [ fairness ]


Abstract:

Learned classifiers should often possess certain invariance properties meant to encourage fairness, robustness, or out-of-distribution generalization. However, multiple recent works empirically demonstrate that common invariance-inducing regularizers are ineffective in the over-parameterized regime, in which classifiers perfectly fit (i.e., interpolate) the training data. This suggests that the phenomenon of "benign overfitting," in which models generalize well despite interpolating, might not favorably extend to settings in which robustness or fairness are desirable. In this work, we provide a theoretical justification for these observations. We prove that---even in the simplest of settings---any interpolating learning rule (with an arbitrarily small margin) will not satisfy these invariance properties. We then propose and analyze an algorithm that---in the same setting---successfully learns a non-interpolating classifier that is provably invariant. We validate our theoretical observations on simulated data and the Waterbirds dataset.
