

Poster

Hierarchical Reinforcement Learning by Discovering Intrinsic Options

Jesse Zhang · Haonan Yu · Wei Xu

Virtual

Keywords: [ unsupervised skill discovery ] [ hierarchical reinforcement learning ] [ options ] [ exploration ] [ reinforcement learning ]


Abstract:

We propose a hierarchical reinforcement learning method, HIDIO, that can learn task-agnostic options in a self-supervised manner while jointly learning to utilize them to solve sparse-reward tasks. Unlike current hierarchical RL approaches that tend to formulate goal-reaching low-level tasks or pre-define ad hoc lower-level policies, HIDIO encourages lower-level option learning that is independent of the task at hand, requiring few assumptions and little knowledge about the task structure. These options are learned through an intrinsic entropy minimization objective conditioned on the option sub-trajectories. The learned options are diverse and task-agnostic. In experiments on sparse-reward robotic manipulation and navigation tasks, HIDIO achieves higher success rates with greater sample efficiency than regular RL baselines and two state-of-the-art hierarchical RL methods. Code available at: https://github.com/jesbu1/hidio.
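
The sketch below is not the authors' implementation; it only illustrates, in PyTorch, the kind of discriminator-style intrinsic reward the abstract alludes to: the lower-level policy is rewarded when the current option can be inferred from its own sub-trajectory, which corresponds to minimizing the entropy of the option conditioned on that sub-trajectory. The class name OptionDiscriminator, the toy dimensions, and the per-step (state, action) input are illustrative assumptions.

# Minimal sketch (assumed, not the authors' code) of an intrinsic reward of the
# form log q(option | sub-trajectory step), plus the discriminator's training loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, ACT_DIM, NUM_OPTIONS = 8, 2, 4  # assumed toy sizes


class OptionDiscriminator(nn.Module):
    """Predicts which option produced a (state, action) step of a sub-trajectory."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + ACT_DIM, 64),
            nn.ReLU(),
            nn.Linear(64, NUM_OPTIONS),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))  # option logits


def intrinsic_reward(disc, obs, act, option_idx):
    """log q(option | step): higher when the option is identifiable from behavior."""
    with torch.no_grad():
        log_probs = F.log_softmax(disc(obs, act), dim=-1)
    return log_probs.gather(-1, option_idx.unsqueeze(-1)).squeeze(-1)


# Toy usage: one batch of sub-trajectory steps, each labeled with its option.
disc = OptionDiscriminator()
obs = torch.randn(16, OBS_DIM)
act = torch.randn(16, ACT_DIM)
opt = torch.randint(0, NUM_OPTIONS, (16,))

r_int = intrinsic_reward(disc, obs, act, opt)      # reward signal for the low-level policy
disc_loss = F.cross_entropy(disc(obs, act), opt)   # discriminator learns to classify options
print(r_int.shape, disc_loss.item())

In a full agent, a higher-level policy would select the option at a fixed interval, the lower-level policy would maximize this intrinsic reward, and the discriminator would be trained jointly; see the linked repository for the actual method.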
