Poster

Planning from Pixels using Inverse Dynamics Models

Keiran Paster · Sheila McIlraith · Jimmy Ba

Keywords: [ goal-conditioned reinforcement learning ] [ model-based reinforcement learning ] [ multi-task learning ] [ deep reinforcement learning ] [ deep learning ]


Abstract:

Learning dynamics models in high-dimensional observation spaces can be challenging for model-based RL agents. We propose a novel way to learn models in a latent space: rather than predicting future observations, the model learns to predict the sequences of future actions conditioned on task completion. These models track task-relevant environment dynamics across a distribution of tasks while simultaneously serving as an effective heuristic for planning with sparse rewards. We evaluate our method on challenging visual goal-completion tasks and show a substantial increase in performance compared to prior model-free approaches.
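
To make the abstract's idea concrete, below is a minimal sketch of one way such a model could look: a network that, conditioned on the current observation and a goal, predicts the sequence of actions that would complete the task, trained with hindsight relabeling of reached states as goals. Everything here (the class and function names, the MLP encoder over precomputed features, the factorized per-step action head) is an illustrative assumption, not the authors' implementation, which operates on raw pixels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class InverseDynamicsPolicy(nn.Module):
    """Predicts a length-`horizon` sequence of discrete actions intended to
    take the agent from `obs` to `goal`. (Hypothetical sketch, not the
    paper's architecture.)"""

    def __init__(self, obs_dim: int, n_actions: int, horizon: int, hidden: int = 256):
        super().__init__()
        self.horizon = horizon
        self.n_actions = n_actions
        # Shared encoder maps the (observation, goal) pair to a latent code.
        self.encoder = nn.Sequential(
            nn.Linear(2 * obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One logit vector per future action step (factorized for simplicity).
        self.head = nn.Linear(hidden, horizon * n_actions)

    def forward(self, obs: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        z = self.encoder(torch.cat([obs, goal], dim=-1))
        return self.head(z).view(-1, self.horizon, self.n_actions)  # (B, H, A)


def hindsight_loss(model, obs, future_actions, reached_obs):
    # Hindsight relabeling: treat the observation actually reached after
    # `horizon` steps as the goal, so every trajectory segment supplies a
    # supervised example of "which action sequence completes this task".
    logits = model(obs, reached_obs)  # (B, H, A)
    return F.cross_entropy(
        logits.reshape(-1, model.n_actions),
        future_actions.reshape(-1),
    )


if __name__ == "__main__":
    model = InverseDynamicsPolicy(obs_dim=64, n_actions=6, horizon=8)
    obs, reached = torch.randn(32, 64), torch.randn(32, 64)
    actions = torch.randint(0, 6, (32, 8))  # actions taken between obs and reached
    print(hindsight_loss(model, obs, actions, reached).item())
```

At test time a model like this supports receding-horizon control: condition on the desired goal, decode an action sequence, execute the first action, and re-plan from the new observation.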
