Poster
in
Workshop: A Roadmap to Never-Ending RL

RL for Autonomous Mobile Manipulation with Applications to Room Cleaning

Charles Sun · Coline Devin · Abhishek Gupta · Glen Berseth · Sergey Levine


Abstract:

In this work, we study how a robot can autonomously learn to clean a room by collecting objects off the ground and putting them into a basket. This task exemplifies the coordination needed between manipulation and navigation: the robot must navigate to objects in order to attempt to grasp them. Our goal is to enable a robot to learn this task autonomously under realistic settings, without any environment instrumentation, human intervention, or access to privileged information such as maps, object positions, or a global view of the environment. While reinforcement learning (RL) from images provides a general solution to learning tasks in theory, in practice most successful uses of RL rely on instrumented setups, hand-engineered state tracking, and/or human-provided resets. We propose a novel learning system, ReALMM, that avoids the need for these by separating the grasping and navigation policies at the architecture level for efficient learning, while still training them together from the same sparse grasp-success signal. ReALMM also avoids the need for externally provided resets by using an autonomous pseudo-resetting behavior. We show that with ReALMM, a robot can learn to navigate and clean up a room completely autonomously, without any external supervision.
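Below is a minimal, self-contained sketch of the training structure the abstract describes: two separate policies, one for navigation and one for grasping, both updated from the same sparse grasp-success reward, with an autonomous pseudo-reset in place of human-provided resets. All names here (Policy, Env, grasp_succeeded, pseudo_reset, train) are hypothetical stand-ins for illustration only; the authors' actual ReALMM system learns from images, and its real architecture and update rules are not shown here.

```python
# A hypothetical sketch of a ReALMM-style training loop, based only on
# the abstract. Stubbed classes stand in for the real robot and policies.

import random


class Policy:
    """Placeholder policy with a trivial replay buffer and update step."""

    def __init__(self):
        self.buffer = []

    def act(self, obs):
        # Hypothetical: the real policy maps camera images to actions.
        return random.random()

    def store(self, obs, action, reward):
        self.buffer.append((obs, action, reward))

    def update(self):
        # Hypothetical: the real update would run an off-policy RL step.
        pass


class Env:
    """Stub environment standing in for the physical robot and room."""

    def observe(self):
        return 0.0

    def step_base(self, action):
        pass

    def step_arm(self, action):
        pass


def grasp_succeeded(env):
    # Assumption: the environment exposes only a sparse binary
    # grasp-success signal, with no other instrumentation.
    return random.random() < 0.1


def pseudo_reset(env, nav_policy):
    # Autonomous pseudo-reset: instead of a human repositioning the
    # robot, it drives itself elsewhere to diversify initial states.
    for _ in range(5):
        env.step_base(nav_policy.act(env.observe()))


def train(num_episodes=100):
    env = Env()
    nav_policy, grasp_policy = Policy(), Policy()
    for _ in range(num_episodes):
        # Navigation phase: drive toward a candidate object.
        nav_obs = env.observe()
        nav_action = nav_policy.act(nav_obs)
        env.step_base(nav_action)

        # Grasping phase: attempt a grasp from the current base pose.
        grasp_obs = env.observe()
        grasp_action = grasp_policy.act(grasp_obs)
        env.step_arm(grasp_action)

        # Both policies are trained from the same sparse grasp reward.
        reward = 1.0 if grasp_succeeded(env) else 0.0
        nav_policy.store(nav_obs, nav_action, reward)
        grasp_policy.store(grasp_obs, grasp_action, reward)
        nav_policy.update()
        grasp_policy.update()

        # No external resets: rely on pseudo-resetting behavior instead.
        pseudo_reset(env, nav_policy)


if __name__ == "__main__":
    train()
```

One way to read the design: keeping the two policies architecturally separate lets each focus on a simpler sub-problem, while the shared grasp-success reward keeps navigation aligned with what ultimately makes grasps succeed.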
