Virtual presentation / poster accept

Can We Faithfully Represent Absence States to Compute Shapley Values on a DNN?

Jie Ren · Zhanpeng Zhou · Qirui Chen · Quanshi Zhang

Keywords: [ Social Aspects of Machine Learning ] [ attribution methods ] [ deep neural networks ] [ explainable ai ]


Abstract:

Masking some input variables of a deep neural network (DNN) and computing the resulting change in the output is a typical way to compute attributions of the input variables in a sample. People usually mask an input variable by setting it to its baseline value. However, there is no theory to examine whether a baseline value faithfully represents the absence of an input variable, i.e., the removal of all signals carried by that variable. Fortunately, recent studies (Ren et al., 2023a; Deng et al., 2022a) show that the inference score of a DNN can be strictly disentangled into a set of causal patterns (or concepts) encoded by the DNN. Therefore, we propose to use causal patterns to examine the faithfulness of baseline values. More crucially, we prove that causal patterns can be explained as the elementary rationale of the Shapley value. Furthermore, we propose a method to learn optimal baseline values, and experimental results demonstrate its effectiveness.
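
To make the masking-based setup concrete, below is a minimal sketch (not the authors' implementation) of permutation-based Monte Carlo estimation of Shapley values, where the "absence" of a variable is represented by substituting its baseline value. The model `f`, the zero baseline, and the sample counts are illustrative assumptions; the paper's point is precisely that whether such a substitution faithfully removes the variable's signal needs to be examined.

```python
import numpy as np

def shapley_values(f, x, baseline, n_perms=200, rng=None):
    """Estimate Shapley values phi_i of the scalar output f(x).

    Absence of variable i is represented by setting x[i] = baseline[i];
    whether this masking faithfully encodes the absence state is exactly
    the question the paper studies.
    """
    rng = np.random.default_rng(rng)
    n = x.shape[0]
    phi = np.zeros(n)
    for _ in range(n_perms):
        order = rng.permutation(n)
        masked = baseline.copy()      # start from the all-absent state
        prev = f(masked)
        for i in order:
            masked[i] = x[i]          # let variable i be "present"
            cur = f(masked)
            phi[i] += cur - prev      # marginal contribution of i
            prev = cur
    return phi / n_perms

# Toy usage: a tiny stand-in "network" with a zero baseline.
f = lambda v: float(v[0] * v[1] + 2.0 * v[2])
x = np.array([1.0, 2.0, 3.0])
b = np.zeros_like(x)
print(shapley_values(f, x, b, n_perms=500))
```

For each sampled permutation, the per-variable increments sum to f(x) - f(baseline), so the estimated attributions satisfy the efficiency axiom in expectation; the choice of `baseline` is the free parameter the paper proposes to learn.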
