

Poster

To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination

Manley Roberts · Himanshu Thakur · Christine Herlihy · Colin White · Samuel Dooley

Halle B #71
[ Project Page ]
Fri 10 May 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks. Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data. Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities. In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time. Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination. By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmarks in the age of LLMs that train on web-scale data.
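To make the abstract's methodology concrete, below is a minimal sketch (not the authors' released framework) of the kind of longitudinal contamination test it describes: compare an LLM's per-problem pass rate on benchmark problems released before vs. after the model's training cutoff. The column names, cutoff date, and synthetic data are illustrative assumptions.

```python
# Sketch of a pre-/post-cutoff pass-rate comparison, assuming per-problem
# evaluation results with a release date and a binary pass outcome.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for per-problem evaluation results (hypothetical schema).
n = 500
df = pd.DataFrame({
    "release_date": pd.to_datetime("2018-01-01")
                    + pd.to_timedelta(rng.integers(0, 6 * 365, n), unit="D"),
    "passed": rng.integers(0, 2, n),  # 1 if the LLM solved the problem
})

cutoff = pd.Timestamp("2021-09-01")  # assumed training-cutoff date
pre = df.loc[df.release_date < cutoff, "passed"]
post = df.loc[df.release_date >= cutoff, "passed"]

# A significantly higher pre-cutoff pass rate (problems the model could have
# seen during training) is evidence consistent with contamination.
t, p = stats.ttest_ind(pre, post, equal_var=False)
print(f"pre-cutoff pass rate:  {pre.mean():.3f}")
print(f"post-cutoff pass rate: {post.mean():.3f}")
print(f"Welch t = {t:.2f}, p = {p:.3g}")
```

The paper's analysis additionally regresses pass rate against problem popularity (e.g., GitHub presence) and release date; the same dataframe could feed a logistic regression with those covariates.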
