

In-Person Poster presentation / poster accept

Concept Gradient: Concept-based Interpretation Without Linear Assumption

Andrew Bai · Chih-Kuan Yeh · Neil Lin · Pradeep K Ravikumar · Cho-Jui Hsieh

MH1-2-3-4 #153

Keywords: [ Social Aspects of Machine Learning ] [ Concept-based interpretation ] [ XAI ] [ interpretability ]


Abstract:

Concept-based interpretations of black-box models are often more intuitive for humans to understand. The most widely adopted approach for concept-based gradient interpretation is the Concept Activation Vector (CAV). CAV relies on learning a linear relation between some latent representation of a given model and concepts. The premise that meaningful concepts lie in a linear subspace of model layers is usually implicitly assumed but does not hold in general. In this work we propose Concept Gradient (CG), which extends concept-based gradient interpretation methods to non-linear concept functions. We show that for a general (potentially non-linear) concept, we can mathematically measure how a small change in a concept affects the model's prediction, extending gradient-based interpretation to the concept space. We demonstrate empirically that CG outperforms CAV in attributing concept importance on real-world datasets and perform a case study on a medical dataset. The code is available at github.com/jybai/concept-gradients.
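The abstract does not spell out the computation, but one common way to realize a gradient in concept space for a non-linear concept function is to chain the model's gradient through the (pseudo-inverse of the) concept predictor's Jacobian. The sketch below is a minimal, hypothetical PyTorch illustration of that idea; the function names, the shared latent representation z, and the pseudo-inverse formulation are assumptions made for illustration, not the authors' released implementation (see github.com/jybai/concept-gradients for that).

```python
# Hypothetical sketch of a concept-space gradient (not the authors' exact code).
# Assumes: a model head f mapping a latent z to a prediction, and a (possibly
# non-linear) concept predictor g mapping the same z to concept scores. The
# concept attribution is obtained by chaining the model gradient through the
# pseudo-inverse of the concept Jacobian, so no linear relation between z and
# the concepts is required.

import torch

def concept_gradient(f, g, z):
    """Attribute f's prediction at latent z to the concepts predicted by g."""
    z = z.detach().requires_grad_(True)

    # Gradient of the model output w.r.t. the latent representation, shape (d,)
    grad_f = torch.autograd.grad(f(z).sum(), z)[0]

    # Jacobian of the concept predictor w.r.t. the latent, shape (k, d)
    jac_g = torch.autograd.functional.jacobian(lambda v: g(v), z)

    # Chain rule in concept space: df/dc ~ (J_g^T)^+ grad_f, shape (k,)
    return torch.linalg.pinv(jac_g.T) @ grad_f

# Toy usage: a linear scalar "model" and a small non-linear concept head.
if __name__ == "__main__":
    d, k = 16, 3
    torch.manual_seed(0)
    w = torch.randn(d)
    f = lambda z: z @ w                                  # scalar model output
    g = torch.nn.Sequential(torch.nn.Linear(d, 8),
                            torch.nn.Tanh(),
                            torch.nn.Linear(8, k))       # non-linear concepts
    z = torch.randn(d)
    print(concept_gradient(f, g, z))                     # one score per concept
```

When g is linear, the pseudo-inverse reduces the expression to the usual CAV-style directional derivative, which is consistent with CG being described as a generalization of linear concept attribution.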
