In-Person Poster Presentation / Poster Accept

Diversify and Disambiguate: Out-of-Distribution Robustness via Disagreement

Yoonho Lee · Huaxiu Yao · Chelsea Finn

MH1-2-3-4 #63

Keywords: [ Deep Learning and representational learning ] [ underspecification ] [ ensembles ] [ spurious correlations ] [ ambiguity ] [ out-of-distribution robustness ]


Abstract:

Real-world machine learning problems often exhibit shifts between the source and target distributions, in which source data does not fully convey the desired behavior on target inputs. Different functions that achieve near-perfect source accuracy can make differing predictions on test inputs, and such ambiguity makes robustness to distribution shifts challenging. We propose DivDis, a simple two-stage framework for identifying and resolving ambiguity in data. DivDis first learns a diverse set of hypotheses that achieve low source loss but make differing predictions on target inputs. We then disambiguate by selecting one of the discovered functions using additional information, for example, a small number of target labels. Our experimental evaluation shows improved performance in subpopulation shift and domain generalization settings, demonstrating that DivDis can scalably adapt to distribution shifts in image and text classification benchmarks.
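The two-stage recipe described in the abstract can be made concrete. Below is a minimal PyTorch sketch, not the authors' released implementation: a shared backbone with several classification heads is trained to fit labeled source data while a pairwise penalty pushes the heads toward statistically independent (hence differing) predictions on unlabeled target inputs (Diversify); a handful of labeled target examples then selects the best head (Disambiguate). The mutual-information penalty is one natural instantiation of the disagreement objective, and all names here (DivDisHeads, divdis_loss, diversity_weight, select_head) are illustrative assumptions.

```python
# Minimal sketch of the two-stage DivDis recipe; names and the exact
# diversity term are assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DivDisHeads(nn.Module):
    """A shared backbone with several classification heads (hypotheses)."""

    def __init__(self, backbone: nn.Module, feat_dim: int, n_classes: int, n_heads: int):
        super().__init__()
        self.backbone = backbone
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, n_classes) for _ in range(n_heads)]
        )

    def forward(self, x):
        z = self.backbone(x)  # (batch, feat_dim) features shared by all heads
        return [head(z) for head in self.heads]  # one logit tensor per head


def mutual_info_pair(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """Empirical mutual information between two heads' predicted labels on a
    batch, computed as KL(joint || product of marginals)."""
    joint = torch.einsum("bi,bj->ij", p, q) / p.shape[0]  # (C, C), sums to 1
    marg = joint.sum(1, keepdim=True) @ joint.sum(0, keepdim=True)
    return (joint * (joint.clamp_min(1e-8) / marg.clamp_min(1e-8)).log()).sum()


def divdis_loss(model, xs, ys, xt, diversity_weight=1.0):
    """Stage 1 (Diversify): low source loss plus disagreement on target data."""
    src_logits = model(xs)
    tgt_probs = [F.softmax(l, dim=-1) for l in model(xt)]
    # Every head must fit the labeled source data...
    task = sum(F.cross_entropy(l, ys) for l in src_logits) / len(src_logits)
    # ...while each pair of heads is pushed toward independent, differing
    # predictions on unlabeled target inputs by minimizing their MI.
    div = sum(
        mutual_info_pair(tgt_probs[i], tgt_probs[j])
        for i in range(len(tgt_probs))
        for j in range(i + 1, len(tgt_probs))
    )
    return task + diversity_weight * div


@torch.no_grad()
def select_head(model, x_labeled, y_labeled) -> int:
    """Stage 2 (Disambiguate): pick the head that best fits a small number
    of labeled target examples."""
    accs = [(l.argmax(-1) == y_labeled).float().mean() for l in model(x_labeled)]
    return int(torch.stack(accs).argmax())
```

For image benchmarks, a torchvision ResNet with its final fc layer replaced by nn.Identity() would serve as the backbone (feat_dim 512 for resnet18); at test time, only the head index returned by select_head is used for prediction.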
