

Poster
in
Workshop: I Can't Believe It's Not Better: Challenges in Applied Deep Learning

Rethinking Evaluation for Temporal Link Prediction through Counterfactual Analysis

Aniq Ur Rahman · Alexander Modell · Justin Coon


Abstract:

In response to critiques of existing evaluation methods for temporal link prediction (TLP) models, we propose a novel approach to verify whether these models truly capture temporal patterns in the data. Our method involves a sanity check formulated as a counterfactual question: "What if a TLP model is tested on a temporally distorted version of the data instead of the real data?" Ideally, a TLP model that effectively learns temporal patterns should perform worse on temporally distorted data than on real data. We analyse this hypothesis and introduce two temporal distortion techniques to assess six well-known TLP models.
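A minimal sketch of the counterfactual sanity check, assuming one simple (hypothetical) distortion variant: randomly permuting edge timestamps, which destroys temporal order while preserving the static graph. The `score_fn`, event format, and function names are illustrative assumptions, not the paper's actual interface.

```python
import random

def distort_timestamps(events, seed=0):
    # Assumed distortion: shuffle timestamps across edges.
    # `events` is a list of (u, v, t) tuples; the edge set and the
    # multiset of timestamps are preserved, but temporal order is not.
    rng = random.Random(seed)
    times = [t for (_, _, t) in events]
    rng.shuffle(times)
    return [(u, v, t) for (u, v, _), t in zip(events, times)]

def counterfactual_sanity_check(score_fn, events, seed=0):
    # score_fn: evaluates a trained TLP model on an event stream and
    # returns a scalar metric (higher is better), e.g. test AUC.
    # A model that truly exploits temporal patterns should score
    # lower on the distorted stream than on the real one.
    real = score_fn(events)
    distorted = score_fn(distort_timestamps(events, seed))
    return real, distorted, real > distorted
```

In practice the check would be run over several random seeds, since a single permutation can leave much of the temporal structure intact by chance.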
