Short Oral in Workshop: Trustworthy Machine Learning for Healthcare

Post-hoc Saliency Methods Fail to Capture Latent Feature Importance in Time Series Data

Maresa Schröder · Alireza Zamanian · Narges Ahmidi


Abstract:

Saliency methods provide visual explainability for deep image-processing models by highlighting informative regions in the input image based on feature-wise (pixel-level) importance scores. These methods have been adapted to the time series domain with the aim of highlighting important temporal regions in a sequence. This paper identifies, for the first time, the systematic failure of such methods in the time series domain when the underlying patterns (e.g., dominant frequency or trend) are based on latent information rather than temporal regions. The assumption of latent feature importance is highly relevant for the medical domain, as many medical signals, such as EEG signals or sensor data for gait analysis, are commonly assumed to be characterized in the frequency domain. To the best of our knowledge, no existing post-hoc explainability method can highlight influential latent information for a classification problem. Hence, in this paper, we frame and analyze the problem of latent feature saliency detection. We first assess the explanation quality of multiple state-of-the-art saliency methods (Integrated Gradients, DeepLIFT, Kernel SHAP, LIME) applied to various classification models (LSTM, CNN, and LSTM and CNN trained via saliency-guided training) on simulated time series data with underlying temporal or latent-space patterns. We conclude that Integrated Gradients and DeepLIFT, if redesigned, could be potential candidates for producing latent saliency scores.
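
The evaluation setup described above can be illustrated with a short sketch. The following is a minimal, hypothetical example, not the authors' code: it simulates a binary classification task in which the label depends on a latent feature (the dominant frequency) rather than on any temporal region, trains a small CNN classifier, and computes per-time-step Integrated Gradients attributions with the Captum library. The architecture, signal parameters, and training settings are illustrative assumptions.

```python
# Hypothetical sketch of the experimental setup in the abstract; all
# architecture and data-generation choices here are assumptions.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients


def simulate_signals(n, length=128, seed=0):
    """Class 0: low-frequency sinusoids; class 1: high-frequency sinusoids.
    The discriminative information is the frequency (a latent feature),
    not any particular time step."""
    g = torch.Generator().manual_seed(seed)
    t = torch.linspace(0, 1, length)
    labels = torch.randint(0, 2, (n,), generator=g)
    freqs = torch.where(labels == 0,
                        2.0 + torch.rand(n, generator=g),   # ~2-3 Hz
                        8.0 + torch.rand(n, generator=g))   # ~8-9 Hz
    phases = 2 * torch.pi * torch.rand(n, generator=g)
    x = torch.sin(2 * torch.pi * freqs[:, None] * t + phases[:, None])
    x = x + 0.1 * torch.randn(n, length, generator=g)       # additive noise
    return x.unsqueeze(1), labels                           # (n, 1, length)


class SmallCNN(nn.Module):
    """A small 1D-CNN time series classifier (illustrative architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 2))

    def forward(self, x):
        return self.net(x)


x, y = simulate_signals(256)
model = SmallCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(50):                                         # brief training
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# Post-hoc saliency: per-time-step attributions from Integrated Gradients.
model.eval()
ig = IntegratedGradients(model)
sample = x[:1]
attr = ig.attribute(sample, baselines=torch.zeros_like(sample),
                    target=int(y[0]))
print(attr.shape)  # (1, 1, length): one importance score per time step
```

Because the class-relevant information here is the frequency, the temporal attributions tend to oscillate across the whole sequence instead of concentrating on a coherent region, which is the failure mode the abstract analyzes.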
