

Oral (Contributed Talk) in Workshop: Setting up ML Evaluation Standards to Accelerate Progress

A Siren Song of Open Source Reproducibility

Edward Raff · Andrew Farris


Abstract:

As reproducibility becomes a greater concern, conferences have largely converged on a strategy of asking reviewers to indicate whether code was attached to a submission. This is part of a larger trend of taking action based on assumed ideals without studying whether those actions will yield the desired outcome. We argue that this focus on code for replication is misguided if we want to improve the state of reproducible research, and can even be harmful: we should not force code to be submitted. Given the lack of evidence behind the actions taken today, conferences must do more to encourage and reward the study of reproducibility itself, so that we can learn which actions should be taken.
