

Virtual presentation / poster accept

CoRTX: Contrastive Framework for Real-time Explanation

Yu-Neng Chuang · Guanchu Wang · Fan Yang · Quan Zhou · Pushkar Tripathi · Xuanting Cai · Xia Hu

Keywords: [ Social Aspects of Machine Learning ] [ real-time explanation ] [ feature importance ranking ] [ interpretability ] [ explainability ] [ Feature Attribution ]


Abstract:

Recent advancements in explainable machine learning provide effective and faithful solutions for interpreting model behaviors. However, many explanation methods encounter efficiency issues, which largely limit their deployment in practical scenarios. Real-time explainer (RTX) frameworks have thus been proposed to accelerate the model explanation process by learning a one-feed-forward explainer. Existing RTX frameworks typically build the explainer under the supervised learning paradigm, which requires large amounts of explanation labels as the ground truth. Considering that accurate explanation labels are usually hard to obtain, due to constrained computational resources and limited human effort, effective explainer training remains challenging in practice. In this work, we propose a COntrastive Real-Time eXplanation (CoRTX) framework to learn explanation-oriented representations and relieve the intensive dependence of explainer training on explanation labels. Specifically, we design a synthetic strategy to select positive and negative instances for explanation representation learning. Theoretical analysis shows that our selection strategy can benefit the contrastive learning process on explanation tasks. Experimental results on three real-world datasets further demonstrate the efficiency and efficacy of the proposed CoRTX framework.
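
To make the contrastive idea concrete, below is a minimal PyTorch sketch of learning an explanation-oriented representation with an InfoNCE-style objective. The ExplanationEncoder network, the perturbation-based positive selection, and the in-batch negatives are illustrative assumptions for this sketch; they are not the exact architecture or synthetic selection strategy proposed in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExplanationEncoder(nn.Module):
    """Maps an input instance to a unit-norm, explanation-oriented representation."""
    def __init__(self, in_dim, rep_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, rep_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Standard InfoNCE: pull each anchor toward its positive, push it from negatives."""
    pos_sim = (anchor * positive).sum(-1, keepdim=True) / temperature  # (B, 1)
    neg_sim = anchor @ negatives.t() / temperature                     # (B, K)
    logits = torch.cat([pos_sim, neg_sim], dim=1)
    labels = torch.zeros(anchor.size(0), dtype=torch.long)             # positive sits at index 0
    return F.cross_entropy(logits, labels)

# Toy training step on random data; a real setup would use inputs of the model being explained.
torch.manual_seed(0)
B, K, D = 32, 16, 20
x = torch.randn(B, D)

encoder = ExplanationEncoder(D)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Hypothetical positive selection: lightly mask a few features of each anchor instance.
mask = (torch.rand(B, D) > 0.2).float()
x_pos = x * mask
# Hypothetical negatives: unrelated instances.
x_neg = torch.randn(K, D)

z, z_pos, z_neg = encoder(x), encoder(x_pos), encoder(x_neg)
loss = info_nce_loss(z, z_pos, z_neg)
opt.zero_grad()
loss.backward()
opt.step()
print(f"contrastive loss: {loss.item():.4f}")
```

In a full RTX pipeline, the learned representation would then be mapped by a lightweight head to feature-importance scores, so that explanations require only a single forward pass at inference time.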
