

Poster
in
Workshop: Self-Improving Foundation Models Without Human Supervision

Aviary: Training Language Agents on Challenging Scientific Tasks

Siddharth Narayanan · James Braza · Ryan-Rhys Griffiths · Manvitha Ponnapati · Albert Bou · Jon Laurent · Ori Kabeli · Geemi Wellawatte · Sam Cox · Samuel Rodriques · Andrew White

Keywords: [ AI4Science ] [ language agents ] [ self-improving foundation models ]


Abstract:

Solving complex real-world tasks requires cycles of actions and observations. This is particularly true in science, where tasks require many cycles of hypothesis, experimentation, and analysis. Language agents hold promise for automating intellectual tasks in science because they can interact with tools via natural language or code. However, their flexibility creates conceptual and practical challenges for software implementations, since agents may comprise non-standard components such as internal reasoning, planning, and tool usage, and must also accommodate the inherent stochasticity of temperature-sampled language models. Here, we introduce Aviary, an extensible gymnasium for language agents. We formalize agents as policies solving language-grounded partially observable Markov decision processes, which we term language decision processes. We then implement five environments, including three challenging scientific environments: (1) manipulating DNA constructs for molecular cloning, (2) answering research questions by accessing scientific literature, and (3) engineering protein stability. These environments were selected for their focus on multi-step reasoning and their relevance to contemporary biology research. Finally, with online training and inference-time compute scaling, we show that language agents based on open-source, non-frontier LLMs can match and exceed both frontier LLM agents and human experts on multiple tasks at up to 100x lower inference cost.
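The "language decision process" framing above can be made concrete with a minimal sketch: a gymnasium-style environment whose observations and actions are natural-language messages, and a policy that maps the message history to the next action. All names here (`Message`, `LanguageEnv`, `rollout`, the toy task) are illustrative assumptions, not the actual Aviary API.

```python
# Illustrative sketch of a language decision process (LDP): a POMDP
# whose observations and actions are natural-language messages.
# NOTE: these class and function names are hypothetical, not Aviary's API.
from dataclasses import dataclass


@dataclass
class Message:
    role: str     # "agent" or "environment"
    content: str  # natural-language text (or a tool call rendered as text)


class LanguageEnv:
    """Toy environment: the episode ends when the agent replies 'DONE'."""

    def reset(self) -> list[Message]:
        # Initial observation: the task description, as a message.
        return [Message("environment", "Reply with DONE to finish the task.")]

    def step(self, action: Message) -> tuple[list[Message], float, bool]:
        # Reward and termination are computed from the agent's text action.
        done = action.content.strip() == "DONE"
        reward = 1.0 if done else 0.0
        obs = [Message("environment", "Task complete." if done else "Try again.")]
        return obs, reward, done


def toy_policy(history: list[Message]) -> Message:
    # Stand-in for an LLM policy: in practice this would sample from a
    # (stochastic, temperature-sampled) language model given the history.
    return Message("agent", "DONE")


def rollout(env: LanguageEnv, policy, max_steps: int = 5) -> float:
    """Run one episode and return the total reward."""
    history = env.reset()
    total = 0.0
    for _ in range(max_steps):
        action = policy(history)
        obs, reward, done = env.step(action)
        history += [action, *obs]
        total += reward
        if done:
            break
    return total
```

In this framing, "online training" amounts to updating the policy from the `(history, action, reward)` trajectories that `rollout` produces, while inference-time compute scaling corresponds to sampling multiple rollouts per task and selecting among them.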
