
In-Person Poster Presentation / Poster Accept

Interpretable Debiasing of Vectorized Language Representations with Iterative Orthogonalization

Prince Aboagye · Yan Zheng · Jack Shunn · Chin-Chia Michael Yeh · Junpeng Wang · Zhongfang Zhuang · Huiyuan Chen · Liang Wang · Wei Zhang · Jeff Phillips

MH1-2-3-4 #72

Keywords: [ Deep Learning and representational learning ] [ pre-trained contextualized embeddings ] [ ethics ] [ static embeddings ] [ natural language processing ] [ bias ] [ debiasing ] [ fairness ]


Abstract:

We propose a new mechanism to augment a word vector embedding representation that offers improved bias removal while retaining the key information, resulting in improved interpretability of the representation. Rather than removing the information associated with a concept that may induce bias, our proposed method identifies two concept subspaces and makes them orthogonal, so that the two concepts are uncorrelated in the resulting representation. Moreover, because the subspaces are orthogonal, one can simply apply a rotation to the basis of the representation so that each subspace aligns with a set of explicit coordinates. This explicit encoding of concepts to coordinates works because the subspaces have been made fully orthogonal, which previous approaches do not achieve. Furthermore, we show that this can be extended to multiple subspaces. As a result, one can choose a subset of concepts to be represented transparently and explicitly, while the others are retained in the mixed but extremely expressive format of the representation.
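To make the core idea concrete, below is a minimal sketch of one orthogonalization step: given two concept directions `v1` and `v2` estimated from an embedding matrix, it applies a linear correction in their span that leaves `v1` fixed and maps `v2` onto the component orthogonal to `v1`, while leaving the rest of the space untouched. This is an illustrative single-step, one-dimensional version only, not the authors' exact iterative algorithm, which also handles multi-dimensional subspaces; the variable names and the seed-word choices in the comments are hypothetical.

```python
import numpy as np

def orthogonalize_subspaces(X, v1, v2):
    """Map embeddings X (n, d) so concept directions v1 and v2 become
    exactly orthogonal, leaving the complement of span(v1, v2) unchanged.
    A minimal single-step sketch of the orthogonalization idea."""
    u1 = v1 / np.linalg.norm(v1)
    # Component of v2 orthogonal to v1 (Gram-Schmidt), normalized.
    w = v2 - (v2 @ u1) * u1
    u2 = w / np.linalg.norm(w)

    cos_phi = (v2 / np.linalg.norm(v2)) @ u1   # cosine of the concept angle
    sin_phi = np.sqrt(max(1.0 - cos_phi**2, 1e-12))

    # 2x2 map in the (u1, u2) plane: fixes v1, sends v2 to the u2 axis.
    A = np.array([[1.0, -cos_phi / sin_phi],
                  [0.0,  1.0 / sin_phi]])

    B = np.stack([u1, u2])           # (2, d) orthonormal basis of the plane
    coords = X @ B.T                 # (n, 2) in-plane coordinates
    residual = X - coords @ B        # component outside the plane, kept as-is
    return residual + (coords @ A.T) @ B

# Hypothetical usage: v1 as a gender direction from a seed word pair,
# v2 as an occupation direction; emb is an (n_words, d) embedding matrix.
# v1 = emb[idx["he"]] - emb[idx["she"]]
# v2 = emb[idx["doctor"]] - emb[idx["nurse"]]
# emb_ortho = orthogonalize_subspaces(emb, v1, v2)
```

After this step the two directions are exactly orthogonal, so a change of basis (a rotation built from `u1` and `u2`) can align each concept with an explicit coordinate of the representation, which is what enables the interpretable, coordinate-level encoding described in the abstract.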
