

In-Person Poster Presentation / Poster Accept

Planning with Sequence Models through Iterative Energy Minimization

Hongyi Chen · Yilun Du · Yiye Chen · Joshua B Tenenbaum · Patricio Vela

MH1-2-3-4 #61

Keywords: [ Reinforcement Learning ] [ decision transformer ] [ planning ] [ language model ]


Abstract:

Recent work has shown that language modeling can be used effectively to train reinforcement learning (RL) policies. However, applying existing language models to planning, in which we wish to obtain a trajectory of actions that reaches some goal, is less straightforward: the typical autoregressive generation procedure of language models precludes refining earlier steps of a plan once later ones have been generated, which limits the effectiveness of the predicted plan. In this paper, we propose an approach to integrating planning with language models based on iterative energy minimization, and illustrate how such a procedure leads to improved RL performance across different tasks. We train a masked language model to capture an implicit energy function over trajectories of actions, and formulate planning as finding a trajectory of actions with minimum energy. We show that this procedure improves performance over recent approaches in BabyAI and Atari environments. We further demonstrate unique benefits of the iterative optimization procedure: generalization to new tasks, adaptation to test-time constraints, and the ability to compose plans together. Project webpage: https://hychen-naza.github.io/projects/LEAP/index.html
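The procedure the abstract describes, a masked model defining an implicit energy over action trajectories that is minimized by repeatedly masking and resampling individual steps, is compact enough to illustrate in code. Below is a minimal, hypothetical Python sketch, not the authors' released implementation (see the project webpage for that). Every name here is an assumption introduced for illustration: `masked_logits` stands in for a trained masked sequence model, `energy` scores a trajectory by its pseudo-likelihood under that model, and `plan` performs a greedy Gibbs-style refinement, keeping a resampled action only when it lowers the energy.

```python
# Hypothetical sketch of planning by iterative energy minimization with a
# masked sequence model. All names are illustrative stand-ins, not the
# authors' actual API.
import numpy as np

rng = np.random.default_rng(0)
NUM_ACTIONS, HORIZON = 4, 8  # toy sizes for a discrete action space

def masked_logits(traj, pos):
    """Stand-in for a trained masked language model over action tokens.

    Given a trajectory with position `pos` masked out, return logits over
    the NUM_ACTIONS possible tokens at that position. A real model would
    condition on observed states and the goal; here a fixed random table
    (keyed on the surrounding context) lets the sketch run end to end.
    """
    ctx = tuple(int(t) for i, t in enumerate(traj) if i != pos)
    local_rng = np.random.default_rng(hash((ctx, int(pos))) % (2**32))
    return local_rng.normal(size=NUM_ACTIONS)

def energy(traj):
    """Implicit energy: sum of per-position negative log-likelihoods
    (a pseudo-likelihood), so low energy means the masked model finds
    the whole trajectory jointly plausible."""
    total = 0.0
    for pos in range(HORIZON):
        logits = masked_logits(traj, pos)
        log_probs = logits - np.log(np.exp(logits).sum())
        total -= log_probs[traj[pos]]
    return total

def plan(iters=200):
    """Iteratively refine a random trajectory toward minimum energy."""
    traj = [int(a) for a in rng.integers(NUM_ACTIONS, size=HORIZON)]
    e = energy(traj)
    for _ in range(iters):
        pos = int(rng.integers(HORIZON))           # mask a random step
        logits = masked_logits(traj, pos)
        probs = np.exp(logits) / np.exp(logits).sum()
        proposal = traj.copy()
        proposal[pos] = int(rng.choice(NUM_ACTIONS, p=probs))  # resample it
        e_new = energy(proposal)
        if e_new < e:                              # keep only if energy drops
            traj, e = proposal, e_new
    return traj, e

if __name__ == "__main__":
    traj, e = plan()
    print("planned actions:", traj, "energy:", round(e, 3))
```

A real implementation would condition the masked model on observed states and the goal, and might sample with an annealed temperature rather than accepting greedily; the mask, resample, re-score loop, which revises earlier steps in a way autoregressive decoding cannot, is the part the sketch is meant to convey.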
