

Poster

The Ingredients of Real World Robotic Reinforcement Learning

Abhishek Gupta · Sergey Levine · Dhruv Shah · Kristian Hartikainen · Justin Yu · Henry Zhu · Avi Singh · Vikash Kumar


Abstract:

The success of reinforcement learning in the real world has been limited to instrumented laboratory scenarios, often requiring arduous human supervision to enable continuous learning. In this work, we discuss the required elements of a robotic system that can continually and autonomously improve with data collected in the real world, and propose a particular instantiation of such a system. Subsequently, we investigate a number of challenges of learning without instrumentation -- including the lack of episodic resets, state estimation, and hand-engineered rewards -- and propose simple, scalable solutions to these challenges. We demonstrate the efficacy of our proposed system on dexterous robotic manipulation tasks in simulation and the real world, and also provide an insightful analysis and ablation study of the challenges associated with this learning paradigm.
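As a purely illustrative aid (not the authors' implementation), the Python sketch below shows one way the setting described in the abstract could look in code: a reset-free interaction stream where the reward comes from a learned goal model rather than instrumentation, and "episodes" exist only as logical segments for the policy update. All names here (LearnedReward, reset_free_training, the toy dynamics and update) are hypothetical stand-ins, not taken from the paper.

    # Minimal sketch, assuming a learned success/goal model replaces a
    # hand-engineered reward and the environment is never physically reset.
    import numpy as np

    class LearnedReward:
        """Stand-in for a reward model learned from user-provided goal states."""
        def __init__(self, goal_state):
            self.goal_state = goal_state

        def __call__(self, state):
            # Placeholder score: negative distance to the goal state.
            # A real system would use a trained success classifier here.
            return -float(np.linalg.norm(state - self.goal_state))

    def reset_free_training(env_step, policy_update, policy, reward_fn,
                            total_steps=10_000, segment_len=200):
        """One long, uninterrupted interaction stream, segmented into pseudo-episodes."""
        state = np.zeros(2)                      # wherever the robot happens to start
        buffer = []
        for t in range(total_steps):
            action = policy(state)
            state = env_step(state, action)      # no environment reset, ever
            buffer.append((state, action, reward_fn(state)))
            if (t + 1) % segment_len == 0:       # logical episode boundary only
                policy_update(policy, buffer)
                buffer.clear()
        return policy

    if __name__ == "__main__":
        goal = np.array([1.0, 1.0])
        reward = LearnedReward(goal)
        dummy_step = lambda s, a: s + 0.01 * a           # toy point-mass dynamics
        dummy_policy = lambda s: np.clip(goal - s, -1, 1)
        dummy_update = lambda p, buf: None               # a real RL update would go here
        reset_free_training(dummy_step, dummy_update, dummy_policy, reward)

In a real system the toy dynamics, policy, and update above would be replaced by the robot, an off-policy RL algorithm, and a reward model trained from examples supplied by the user.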
