In-Person Poster presentation / poster accept

SpeedyZero: Mastering Atari with Limited Data and Time

Yixuan Mei · Jiaxuan Gao · Weirui Ye · Shaohuai Liu · Yang Gao · Yi Wu

MH1-2-3-4 #104

Keywords: [ Reinforcement Learning ] [ Distributed Training ] [ Model-Based Reinforcement Learning ] [ Reinforcement Learning System ]


Abstract:

Many recent breakthroughs in deep reinforcement learning (RL) are built upon large-scale distributed training of model-free methods using millions to billions of samples. In contrast, state-of-the-art model-based RL methods achieve human-level sample efficiency but often require a much longer overall training time than model-free methods. However, both high sample efficiency and fast training are important for many real-world applications. We develop SpeedyZero, a distributed RL system built upon a state-of-the-art model-based RL method, EfficientZero, with a dedicated system design for fast distributed computation. We also develop two novel algorithmic techniques, Priority Refresh and Clipped LARS, to stabilize training under massive parallelization and large batch sizes. SpeedyZero maintains sample efficiency on par with EfficientZero while achieving a 14.5x speedup in wall-clock time, reaching human-level performance on the Atari benchmark within 35 minutes using only 300k samples. In addition, we present an in-depth analysis of the fundamental challenges in further scaling our system, offering insights to the community.
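The abstract names the two stabilization techniques without detailing them; the sketches below illustrate one plausible reading of each and are not the paper's actual implementations.

Priority Refresh plausibly extends prioritized experience replay by periodically recomputing the priorities of all stored transitions with the latest model, so that priorities do not go stale while many training and data-collection processes run in parallel. In this sketch the class name and priority_fn (e.g., a wrapper around the current value-prediction error) are hypothetical:

```python
import numpy as np

class PriorityRefreshBuffer:
    """Prioritized replay ring buffer with a full-priority refresh pass."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # priority exponent, as in prioritized replay
        self.transitions = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0                # next write position in the ring

    def add(self, transition, priority):
        if len(self.transitions) < self.capacity:
            self.transitions.append(transition)
        else:
            self.transitions[self.pos] = transition
        self.priorities[self.pos] = priority ** self.alpha
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        n = len(self.transitions)
        probs = self.priorities[:n] / self.priorities[:n].sum()
        idx = np.random.choice(n, size=batch_size, p=probs)
        return idx, [self.transitions[i] for i in idx]

    def refresh_all(self, priority_fn):
        # Priority Refresh: recompute every stored priority with the
        # latest model, instead of updating only the sampled batches.
        for i, t in enumerate(self.transitions):
            self.priorities[i] = priority_fn(t) ** self.alpha
```

In a distributed system such as SpeedyZero, the refresh pass would presumably run in dedicated workers alongside training rather than blocking the learner.

Clipped LARS presumably combines the LARS large-batch optimizer (layer-wise adaptive rate scaling; You et al., 2017) with a clipping step on the per-layer trust ratio, so that no single layer takes an outsized update. The clamp threshold max_trust below is an assumed hyperparameter; the actual clipping rule is defined in the paper:

```python
import torch

class ClippedLARS(torch.optim.Optimizer):
    """LARS-style SGD whose layer-wise trust ratio is clamped (a guess at
    what "Clipped LARS" does, not the paper's implementation)."""

    def __init__(self, params, lr=1e-3, weight_decay=1e-4,
                 trust_coef=1e-3, max_trust=1.0, eps=1e-8):
        defaults = dict(lr=lr, weight_decay=weight_decay,
                        trust_coef=trust_coef, max_trust=max_trust, eps=eps)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                g = p.grad
                w_norm = torch.norm(p)
                g_norm = torch.norm(g)
                if w_norm > 0 and g_norm > 0:
                    # Layer-wise trust ratio from standard LARS.
                    trust = group["trust_coef"] * w_norm / (
                        g_norm + group["weight_decay"] * w_norm + group["eps"])
                    # Assumed clipping step: cap the ratio to keep
                    # large-batch updates bounded.
                    trust = min(float(trust), group["max_trust"])
                else:
                    trust = 1.0
                update = g + group["weight_decay"] * p
                p.add_(update, alpha=-group["lr"] * trust)
```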
