

Virtual presentation / poster accept

Learning Simultaneous Navigation and Construction in Grid Worlds

Wenyu Han · Haoran Wu · Eisuke Hirota · Alexander Gao · Lerrel Pinto · Ludovic Righetti · Chen Feng

Keywords: [ Reinforcement Learning ] [ Localization ] [ Representation Learning ] [ Deep Reinforcement Learning ] [ Navigation ] [ Construction ]


Abstract:

We propose to study a new learning task, mobile construction, to enable an agent to build designed structures in 1/2/3D grid worlds while navigating in the same evolving environments. Unlike existing robot learning tasks such as visual navigation and object manipulation, this task is challenging because of the interdependence between accurate localization and strategic construction planning. In pursuit of generic and adaptive solutions to this partially observable Markov decision process (POMDP) based on deep reinforcement learning (RL), we design a Deep Recurrent Q-Network (DRQN) with explicit recurrent position estimation in this dynamic grid world. Our extensive experiments show that pre-training this position estimation module before Q-learning can significantly improve the construction performance measured by the intersection-over-union score, achieving the best results in our benchmark of various baselines including model-free and model-based RL, a handcrafted SLAM-based policy, and human players. Our code is available at: https://ai4ce.github.io/SNAC/.
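For readers who want a concrete picture of the architecture described above, the sketch below shows one plausible way to pair a recurrent Q-network with an explicit position-estimation head in PyTorch. The class name `DRQNWithPositionHead`, the GRU-based recurrence, the classification-over-grid-cells formulation of localization, and all dimensions are illustrative assumptions, not the authors' released implementation (see the project page for the official code).

```python
import torch
import torch.nn as nn

class DRQNWithPositionHead(nn.Module):
    """Recurrent Q-network with an auxiliary position-estimation head.

    An observation encoder feeds a GRU whose hidden state is shared by
    (i) a Q-value head over the discrete action space and
    (ii) a position head predicting the agent's cell in an H x W grid.
    """

    def __init__(self, obs_dim, num_actions, grid_h, grid_w, hidden_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden_dim), nn.ReLU())
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.q_head = nn.Linear(hidden_dim, num_actions)
        # Position estimation cast as classification over grid cells.
        self.pos_head = nn.Linear(hidden_dim, grid_h * grid_w)

    def forward(self, obs_seq, hidden=None):
        # obs_seq: (batch, time, obs_dim)
        z = self.encoder(obs_seq)
        out, hidden = self.gru(z, hidden)
        q_values = self.q_head(out)        # (batch, time, num_actions)
        pos_logits = self.pos_head(out)    # (batch, time, grid_h * grid_w)
        return q_values, pos_logits, hidden
```

Under this reading of the abstract, the position head would first be pre-trained with a cross-entropy loss against ground-truth agent positions collected from exploratory rollouts, after which the full network is trained with standard recurrent Q-learning.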
