Poster in Workshop: Setting up ML Evaluation Standards to Accelerate Progress

Rethinking Machine Learning Model Evaluation in Pathology

Aaditya Prakash · Dinkar Juyal · Syed Javed · Zahil Shanis · Shreya Chakraborty · Harsha Pokkalla


Abstract:

Machine learning has been applied to pathology images in research and clinical practice with promising outcomes. However, standard ML models often lack the quality and rigor of evaluation required for clinical decisions. Moreover, models trained on natural images are ill-equipped to deal with pathology images, which are extremely large and noisy, require expensive labeling, are hard to interpret, and are susceptible to spurious correlations. We propose a set of practical and highly relevant guidelines for ML evaluation in pathology that address these concerns. The paper includes measures for setting up the evaluation framework and for dealing efficiently with variability in labels, as well as a recommended suite of tests that address domain shift, robustness, and confounding variables. We hope that the proposed framework will bridge the gap between ML researchers and domain experts, leading to wider adoption of ML techniques in pathology and improved patient outcomes.