Poster in Workshop: Workshop on Agent Learning in Open-Endedness

An Empirical Investigation of Mutual Information Skill Learning

Faisal Mohamed · Benjamin Eysenbach · Ruslan Salakhutdinov


Abstract:

Unsupervised skill learning methods are a form of unsupervised pre-training for reinforcement learning (RL) that has the potential to improve the sample efficiency of solving downstream tasks. Prior work has proposed several methods for unsupervised skill discovery based on mutual information (MI) objectives, with methods differing in how this mutual information is estimated and optimized. This paper studies how different skill learning algorithms and their key design decisions affect the sample efficiency of solving downstream tasks. Our key findings are as follows: off-policy backbones adapt to downstream tasks more sample-efficiently than their on-policy counterparts, whereas on-policy backbones achieve better state coverage; regularizing the discriminator improves results; and a careful choice of the MI lower bound and discriminator architecture yields significant improvements on downstream tasks. We also show empirically that the representations learned during pre-training correspond to the controllable aspects of the environment.
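To make the discriminator-based MI objective concrete, below is a minimal sketch of one common instantiation of the variational lower bound, I(S; Z) >= E[log q(z|s)] - log p(z), in the style of DIAYN. The network sizes, number of skills, and hyperparameters here are illustrative assumptions, not the exact configurations evaluated in the paper.

```python
# Sketch of a DIAYN-style MI skill-discovery reward. Assumed setup: discrete
# skills drawn from a uniform prior p(z), and a discriminator q_phi(z|s)
# trained to classify which skill produced a visited state.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_SKILLS = 8    # assumed number of discrete skills, z ~ Uniform(N_SKILLS)
STATE_DIM = 4   # assumed state dimensionality

class Discriminator(nn.Module):
    """q_phi(z | s): predicts which skill generated a state."""
    def __init__(self, state_dim, n_skills):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_skills),
        )

    def forward(self, s):
        return self.net(s)  # logits over skills

disc = Discriminator(STATE_DIM, N_SKILLS)
opt = torch.optim.Adam(disc.parameters(), lr=3e-4)

def intrinsic_reward(states, skills):
    """r(s, z) = log q(z|s) - log p(z); maximizing it tightens the MI lower bound."""
    with torch.no_grad():
        log_q = F.log_softmax(disc(states), dim=-1)
    log_p_z = -torch.log(torch.tensor(float(N_SKILLS)))  # uniform skill prior
    return log_q.gather(-1, skills.unsqueeze(-1)).squeeze(-1) - log_p_z

def discriminator_update(states, skills):
    """Train q_phi with cross-entropy to classify skills from visited states."""
    loss = F.cross_entropy(disc(states), skills)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# During unsupervised pre-training, the RL backbone (on- or off-policy)
# maximizes intrinsic_reward while discriminator_update runs on fresh rollouts.
states = torch.randn(32, STATE_DIM)
skills = torch.randint(0, N_SKILLS, (32,))
print(intrinsic_reward(states, skills).mean().item())
print(discriminator_update(states, skills))
```

The design decisions the paper studies, such as the choice of MI lower bound, the discriminator architecture, and discriminator regularization, all enter through the `Discriminator` and the reward above; the RL backbone that consumes the reward is what distinguishes the on-policy and off-policy variants.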
