Poster

RODE: Learning Roles to Decompose Multi-Agent Tasks

Tonghan Wang · Tarun Gupta · Anuj Mahajan · Bei Peng · Shimon Whiteson · Chongjie Zhang

Keywords: [ Multi-Agent Transfer Learning ] [ Hierarchical Multi-Agent Learning ] [ Role-Based Learning ] [ Multi-Agent Reinforcement Learning ]


Abstract:

Role-based learning holds the promise of achieving scalable multi-agent learning by decomposing complex tasks using roles. However, it is largely unclear how to efficiently discover such a set of roles. To solve this problem, we propose to first decompose joint action spaces into restricted role action spaces by clustering actions according to their effects on the environment and other agents. Learning a role selector based on action effects makes role discovery much easier because it forms a bi-level learning hierarchy: the role selector searches in a smaller role space and at a lower temporal resolution, while role policies learn in significantly reduced primitive action-observation spaces. We further integrate information about action effects into the role policies to boost learning efficiency and policy generalization. By virtue of these advances, our method (1) outperforms the current state-of-the-art MARL algorithms on 9 of the 14 scenarios that comprise the challenging StarCraft II micromanagement benchmark and (2) achieves rapid transfer to new environments with three times the number of agents. Demonstrative videos can be viewed at https://sites.google.com/view/rode-marl.
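The decomposition step described in the abstract — grouping primitive actions into restricted role action spaces by their effects — can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes action-effect embeddings have already been learned (the `action_effects` array below is a hypothetical stand-in for representations RODE learns from action effects on the environment and other agents) and uses plain k-means as the clustering step.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_actions_into_roles(action_effects: np.ndarray, n_roles: int, seed: int = 0):
    """Partition a discrete action space into role action spaces by
    clustering actions on their effect representations.

    action_effects: (n_actions, d) array; row i is a latent embedding of
    how action i affects the environment and other agents (assumed given
    here; in RODE such representations are learned).
    """
    km = KMeans(n_clusters=n_roles, n_init=10, random_state=seed)
    labels = km.fit_predict(action_effects)
    # Role j's restricted action space is the set of actions in cluster j;
    # each role policy then searches only within its own subset.
    role_action_spaces = [np.flatnonzero(labels == j) for j in range(n_roles)]
    return role_action_spaces, km.cluster_centers_

# Toy usage: 10 primitive actions with 4-dim effect embeddings, 3 roles.
effects = np.random.default_rng(0).normal(size=(10, 4))
roles, centers = cluster_actions_into_roles(effects, n_roles=3)
for j, actions in enumerate(roles):
    print(f"role {j}: actions {actions.tolist()}")
```

In this sketch the bi-level hierarchy follows naturally: a role selector would choose among the `n_roles` clusters at a coarse timescale, while each role policy acts only over its restricted action subset.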
