

Poster

"What Data Benefits My Classifier?" Enhancing Model Performance and Interpretability through Influence-Based Data Selection

Anshuman Chhabra · Peizhao Li · Prasant Mohapatra · Hongfu Liu

Halle B #204
Fri 10 May 1:45 a.m. PDT — 3:45 a.m. PDT
 
Oral presentation: Oral 7B
Fri 10 May 1 a.m. PDT — 1:45 a.m. PDT

Abstract:

Classification models are ubiquitously deployed in society and must deliver high utility, fairness, and robustness. Current research mainly focuses on improving model architectures and learning algorithms on fixed datasets to achieve this goal. In this paper, we instead address an orthogonal yet crucial problem: given a fixed convex learning model (or a convex surrogate for a non-convex model) and a function of interest, we assess what data benefits the model by interpreting the feature space, and then aim to improve performance as measured by this function. To this end, we propose the use of influence estimation models for interpreting the classifier's performance from the perspective of the data feature space. We further propose influence-based data selection approaches that enhance model utility, fairness, and robustness. Through extensive experiments on synthetic and real-world datasets, we validate the effectiveness of our approaches not only in conventional classification settings, but also under more challenging scenarios such as distribution shift, fairness poisoning attacks, utility evasion attacks, online learning, and active learning.
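To make the idea concrete, here is a minimal sketch of influence-based data selection for one convex case (L2-regularized logistic regression), using the classic influence-function estimate I_i = -g_val^T H^{-1} g_i. All function names and the toy data are illustrative assumptions, not the authors' code; the paper's actual estimators and selection rules may differ.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, lam=0.1, iters=500, lr=0.5):
    """Fit L2-regularized logistic regression by gradient descent (convex)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = sigmoid(X @ w)
        w -= lr * (X.T @ (p - y) / n + lam * w)
    return w

def influence_on_val_loss(X_tr, y_tr, X_va, y_va, w, lam=0.1):
    """Estimated effect of up-weighting each training point on validation loss.

    I_i = -g_val^T H^{-1} g_i: a negative score means up-weighting point i
    is estimated to LOWER validation loss (a beneficial point); a positive
    score flags the point as harmful for the chosen function of interest.
    """
    n, d = X_tr.shape
    p_tr = sigmoid(X_tr @ w)
    # Hessian of the regularized training risk at w (PD thanks to lam).
    s = p_tr * (1.0 - p_tr)
    H = (X_tr * s[:, None]).T @ X_tr / n + lam * np.eye(d)
    # Gradient of the mean validation loss at w.
    p_va = sigmoid(X_va @ w)
    g_val = X_va.T @ (p_va - y_va) / len(y_va)
    H_inv_g = np.linalg.solve(H, g_val)
    # Per-sample training-loss gradients (one row per point).
    G = X_tr * (p_tr - y_tr)[:, None]
    return -G @ H_inv_g

# Toy demo: two Gaussian classes with a few flipped training labels
# standing in for harmful (e.g. poisoned) points.
rng = np.random.default_rng(0)
n = 200
X_tr = np.vstack([rng.normal(-1.5, 1.0, (n // 2, 2)),
                  rng.normal(+1.5, 1.0, (n // 2, 2))])
y_tr = np.r_[np.zeros(n // 2), np.ones(n // 2)]
flipped = rng.choice(n, size=10, replace=False)
y_tr[flipped] = 1.0 - y_tr[flipped]  # inject harmful points
X_va = np.vstack([rng.normal(-1.5, 1.0, (50, 2)),
                  rng.normal(+1.5, 1.0, (50, 2))])
y_va = np.r_[np.zeros(50), np.ones(50)]

w = fit_logreg(X_tr, y_tr)
infl = influence_on_val_loss(X_tr, y_tr, X_va, y_va, w)
# Data selection: keep only points not estimated to be harmful.
keep = infl <= 0.0
```

On this toy setup the flipped points tend to receive positive (harmful) influence scores, so dropping positive-influence points before retraining acts as the kind of influence-based data selection the abstract describes; swapping the validation loss for a fairness or robustness metric changes which points are flagged.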
