Agent Data Protocol
Yueqi Song · Ketan Ramaneti · Zaid Sheikh · Ziru Chen · Boyu Gou · Tianbao Xie · Yiheng Xu · Danyang Zhang · Apurva Gandhi · Fan Yang · Joseph Liu · Tianyue Ou · Zhihao Yuan · Frank F Xu · Shuyan Zhou · Xingyao Wang · Xiang Yue · Tao Yu · Huan Sun · Yu Su · Graham Neubig
Abstract
Public research results on large-scale supervised finetuning of AI agents remain relatively rare, since the collection of agent training data presents unique challenges. In this work, we argue that the bottleneck is not a lack of underlying data sources, but that a large variety of data is fragmented across heterogeneous formats, tools, and interfaces. To address this, we introduce the Agent Data Protocol (ADP), a lightweight representation language that serves as an "interlingua" between agent datasets in diverse formats and unified agent training pipelines downstream. ADP is expressive enough to capture a wide variety of tasks, including API/tool use, browsing, coding, software engineering, and general agentic workflows, while remaining simple to parse and train on without per-dataset engineering. In experiments, we unified a broad collection of 13 existing agent training datasets into the ADP format and converted the standardized ADP data into training-ready formats for multiple agent frameworks. Supervised finetuning on the unified data yields an average performance gain of $\sim$20\% over the corresponding base models and delivers state-of-the-art or near-SOTA performance on standard coding, browsing, tool-use, and research benchmarks, without domain-specific tuning. All code and data are released publicly, in the hope that ADP can help lower the barrier to standardized, scalable, and reproducible agent training.
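To make the "interlingua" idea concrete, below is a minimal sketch of what a unified trajectory representation and its conversion to a training-ready format could look like. All class and field names here are illustrative assumptions, not the actual ADP schema (which is defined in the released code); the point is that heterogeneous actions (API calls, code steps) and environment observations can reduce to one simple structure that downstream SFT pipelines flatten without per-dataset engineering.

```python
# Hypothetical sketch of an ADP-style unified trajectory. Names are
# illustrative assumptions, not the real ADP schema.
from dataclasses import dataclass, field
from typing import Dict, List, Union


@dataclass
class APIAction:
    """A tool/API call made by the agent."""
    function: str
    arguments: Dict[str, str] = field(default_factory=dict)


@dataclass
class CodeAction:
    """A code-writing or code-execution step."""
    language: str
    content: str


@dataclass
class Observation:
    """Environment feedback returned after an action."""
    source: str   # e.g., "browser", "terminal", "api"
    content: str


Step = Union[APIAction, CodeAction, Observation]


@dataclass
class Trajectory:
    """One training episode: a task plus its action/observation log."""
    task: str
    steps: List[Step] = field(default_factory=list)


def to_chat_messages(traj: Trajectory) -> List[dict]:
    """Flatten a trajectory into chat-style messages for SFT pipelines."""
    messages = [{"role": "user", "content": traj.task}]
    for step in traj.steps:
        if isinstance(step, Observation):
            messages.append({"role": "tool", "content": step.content})
        else:
            messages.append({"role": "assistant", "content": repr(step)})
    return messages


if __name__ == "__main__":
    traj = Trajectory(
        task="Find the latest release of the `requests` package.",
        steps=[
            APIAction(function="web_search",
                      arguments={"query": "requests pypi"}),
            Observation(source="browser", content="requests - PyPI"),
        ],
    )
    for message in to_chat_messages(traj):
        print(message)
```

Under this framing, each source dataset needs only a one-time converter into the shared representation, after which a single flattening step serves every downstream training framework.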