

Poster
in
Workshop: A Roadmap to Never-Ending RL

Fast Inference and Transfer of Compositional Task Structure for Few-shot Task Generalization

Sungryull Sohn · Hyunjae Woo · Jongwook Choi · Izzeddin Gur · Aleksandra Faust · Honglak Lee


Abstract:

We propose a novel method that learns a prior model of task structure from training tasks and transfers it to unseen tasks for fast adaptation. We formulate this as a few-shot reinforcement learning problem in which each task is characterized by a subtask graph that describes a set of subtasks and their dependencies, both of which are unknown to the agent. Instead of directly inferring an unstructured task embedding, our multi-task subtask graph inferencer (MTSGI) infers the common task structure, in the form of a subtask graph, from the training tasks and uses it as a prior to improve task inference at test time. To this end, we propose to model both the prior sampling and the posterior update for subtask graph inference. Our experimental results on 2D grid-world and complex web navigation domains show that the proposed method can learn and leverage the common underlying structure of the tasks, adapting to unseen tasks faster than various existing algorithms, including meta reinforcement learning, hierarchical reinforcement learning, and other heuristic agents.
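To make the prior-sampling and posterior-update idea concrete, here is a minimal toy sketch (not the paper's actual algorithm). It represents a subtask graph as a set of precedence edges and pools Beta pseudo-counts over edges across training tasks, so that a new task starts from the learned edge probabilities rather than a flat prior. All class and subtask names are illustrative assumptions.

```python
from collections import defaultdict


class SubtaskGraphPrior:
    """Toy prior over subtask dependencies, pooled across training tasks.

    A directed edge (a, b) means "subtask a must precede subtask b".
    Beliefs about each edge are kept as Beta pseudo-counts; this is an
    illustrative stand-in for MTSGI's prior/posterior modeling.
    """

    def __init__(self):
        # edge_counts[(a, b)] = [times observed, times not observed],
        # initialized to a uniform Beta(1, 1) prior.
        self.edge_counts = defaultdict(lambda: [1.0, 1.0])

    def update_from_training_task(self, edges, subtasks):
        """Accumulate dependency statistics from one training task's graph."""
        for a in subtasks:
            for b in subtasks:
                if a == b:
                    continue
                if (a, b) in edges:
                    self.edge_counts[(a, b)][0] += 1.0
                else:
                    self.edge_counts[(a, b)][1] += 1.0

    def edge_probability(self, a, b):
        """Posterior mean probability that subtask a precedes subtask b."""
        alpha, beta = self.edge_counts[(a, b)]
        return alpha / (alpha + beta)


# Two training tasks share the structure "get_wood -> make_plank".
prior = SubtaskGraphPrior()
for _ in range(2):
    prior.update_from_training_task({("get_wood", "make_plank")},
                                    ["get_wood", "make_plank"])

# The pooled prior now favors this edge when adapting to an unseen task.
p_forward = prior.edge_probability("get_wood", "make_plank")   # 0.75
p_reverse = prior.edge_probability("make_plank", "get_wood")   # 0.25
```

In a test task, such edge probabilities would seed the inference, and observed trajectories would update the same counts, which is the prior-to-posterior flow the abstract describes.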
