

Poster in Workshop: A Roadmap to Never-Ending RL

What is Going on Inside Recurrent Meta Reinforcement Learning Agents?

Safa Alver · Doina Precup


Abstract:

Recurrent meta reinforcement learning (meta-RL) agents employ a recurrent neural network (RNN) to "learn a learning algorithm". After training on a pre-specified task distribution, the learned weights of the agent's RNN implement, through the network's activity dynamics, an efficient learning algorithm that allows the agent to quickly solve new tasks sampled from the same distribution. However, due to the black-box nature of these agents, the way in which they work is not yet fully understood. In this study, we shed light on the internal working mechanisms of these agents by reformulating the meta-RL problem within the Partially Observable Markov Decision Process (POMDP) framework. We hypothesize that the learned activity dynamics act as belief states for such agents. Several illustrative experiments support this hypothesis and suggest that recurrent meta-RL agents can be viewed as agents that learn to act optimally in partially observable environments consisting of multiple related tasks. This view helps explain their failure cases, as well as some interesting model-based results recently reported in the literature.
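For reference, the exact Bayesian update that the RNN's hidden-state dynamics are hypothesized to approximate is the standard POMDP belief filter. With transition function T, observation function O, action a_t, and observation o_{t+1} (standard notation, used here as an assumption of this summary rather than quoted from the paper), the belief over states evolves as

b_{t+1}(s') \propto O(o_{t+1} \mid s', a_t) \sum_{s} T(s' \mid s, a_t)\, b_t(s).

Under the paper's hypothesis, the RNN's hidden state plays the role of b_t: a sufficient statistic of the interaction history for acting in the unknown task.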
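The sketch below illustrates the kind of agent the abstract describes: an RL^2-style recurrent policy whose hidden state is carried across time steps and is fed the previous action and reward alongside the current observation. It is a minimal sketch assuming a discrete action space; all names and sizes are illustrative, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentMetaRLAgent(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden_dim: int = 128):
        super().__init__()
        self.n_actions = n_actions
        # RNN input: current observation + previous action (one-hot) + previous reward,
        # as in standard RL^2-style recurrent meta-RL setups.
        self.rnn = nn.GRUCell(obs_dim + n_actions + 1, hidden_dim)
        self.policy_head = nn.Linear(hidden_dim, n_actions)

    def step(self, obs, prev_action, prev_reward, h):
        # h is carried across time steps (and across episodes of the same task);
        # the paper's hypothesis is that h comes to encode a belief state over
        # the hidden task/environment state (b_t in the update above).
        a_onehot = F.one_hot(prev_action, self.n_actions).float()
        x = torch.cat([obs, a_onehot, prev_reward.unsqueeze(-1)], dim=-1)
        h = self.rnn(x, h)
        return torch.distributions.Categorical(logits=self.policy_head(h)), h

Example usage (shapes and values are placeholders): initialize h = torch.zeros(1, 128), then call dist, h = agent.step(obs, prev_action, prev_reward, h) at each step and sample an action from dist, keeping h across steps so the agent can adapt within the task.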
