In-Person Poster presentation / poster accept

Human-level Atari 200x faster

Steven Kapturowski · Víctor Campos · Ray Jiang · Nemanja Rakicevic · Hado van Hasselt · Charles Blundell · Adria Puigdomenech Badia

MH1-2-3-4 #147

Keywords: [ reinforcement learning ] [ exploration ] [ data-efficiency ] [ off-policy ]


Abstract:

The task of building general agents that perform well over a wide range of tasks has been an important goal in reinforcement learning since its inception. The problem has been the subject of a large body of research, with performance frequently measured by observing scores over the wide range of environments contained in the Atari 57 benchmark. Agent57 was the first agent to surpass the human benchmark on all 57 games, but this came at the cost of poor data-efficiency, requiring nearly 80 billion frames of experience. Taking Agent57 as a starting point, we employ a diverse set of strategies in our novel agent MEME to achieve a 200-fold reduction in the experience needed to outperform the human baseline. We investigate a range of instabilities and bottlenecks we encountered while reducing the data regime, and propose effective solutions to build a more robust and efficient agent. We also demonstrate performance competitive with high-performing methods such as Muesli and MuZero. Our contributions aim to achieve faster propagation of learning signals related to rare events, stabilize learning under differing value scales, improve the neural network architecture, and make updates more robust under a rapidly-changing policy.
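One well-known way to stabilize learning when value scales differ wildly across games is to keep value-loss targets in a roughly unit scale via running statistics, as in PopArt (van Hasselt et al.). The abstract does not specify MEME's exact mechanism, so the sketch below is illustrative only: the class name and hyperparameters are assumptions, not the paper's method.

```python
import numpy as np

class TargetNormalizer:
    """PopArt-style running normalizer for value targets.

    Illustrative sketch only: MEME's actual stabilization scheme may
    differ. The idea is that the value loss always sees targets in a
    fixed scale, regardless of the raw return magnitude of the game.
    """

    def __init__(self, beta=0.01, eps=1e-6):
        self.beta = beta     # step size for the running moment estimates
        self.eps = eps       # numerical floor on the std estimate
        self.mean = 0.0      # running first moment of the targets
        self.mean_sq = 1.0   # running second moment of the targets

    def update(self, targets):
        # Exponential moving averages of the first and second moments.
        batch = np.asarray(targets, dtype=np.float64)
        self.mean = (1 - self.beta) * self.mean + self.beta * batch.mean()
        self.mean_sq = (1 - self.beta) * self.mean_sq + self.beta * (batch ** 2).mean()

    @property
    def std(self):
        # Std from the moment estimates, floored to avoid division by ~0.
        return max(float(np.sqrt(max(self.mean_sq - self.mean ** 2, 0.0))), self.eps)

    def normalize(self, targets):
        # Targets fed to the value loss live in roughly unit scale.
        return (np.asarray(targets, dtype=np.float64) - self.mean) / self.std

    def denormalize(self, values):
        # Map network outputs back to the environment's return scale.
        return np.asarray(values, dtype=np.float64) * self.std + self.mean
```

A normalize/denormalize round trip is exact by construction, so value predictions can be compared against raw returns at any point during training.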
