

Poster
in
Workshop: Modular, Collaborative and Decentralized Deep Learning

Hierarchical Subspaces of Policies for Continual Offline Reinforcement Learning

Anthony Kobanda · Rémy Portelas · Odalric-Ambrym Maillard · Ludovic Denoyer


Abstract:

We consider a Continual Reinforcement Learning setup in which a learning agent must continuously adapt to new tasks while retaining previously acquired skill sets, with a focus on avoiding the forgetting of previously gathered knowledge and on scaling with a growing number of tasks. Such issues prevail in autonomous robotics and video game simulations, notably for navigation tasks prone to topological or kinematic changes. To address them, we introduce HiSPO, a novel hierarchical framework designed specifically for continual learning in navigation settings from offline data. Our method leverages distinct policy subspaces of neural networks to enable flexible and efficient adaptation to new tasks while preserving existing knowledge. We demonstrate, through a careful experimental study, the effectiveness of our method in both classical MuJoCo maze environments and complex video game-like navigation simulations, showcasing competitive performance and strong adaptability with respect to classical continual learning metrics, in particular memory usage and efficiency.
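The abstract does not detail HiSPO's construction, but the "policy subspaces of neural networks" it mentions are commonly realized as convex combinations of a few anchor parameter vectors, so that a whole simplex of policies is stored at the cost of its anchors. The sketch below illustrates that general idea under stated assumptions; the function name `sample_policy_weights`, the anchor count, and the combination scheme are all illustrative, not the paper's actual method.

```python
import numpy as np

def sample_policy_weights(anchors, alpha):
    """Illustrative subspace policy: a convex combination of K anchor
    parameter vectors (an assumed construction, not HiSPO's exact one).

    anchors: array of shape (K, D), one flattened parameter vector per anchor.
    alpha:   K convex weights (non-negative, summing to 1) selecting a point
             in the simplex spanned by the anchors.
    """
    alpha = np.asarray(alpha, dtype=float)
    assert np.all(alpha >= 0) and np.isclose(alpha.sum(), 1.0), \
        "alpha must lie on the probability simplex"
    # Contract the K weights against the K anchors: result has shape (D,).
    return np.tensordot(alpha, anchors, axes=1)

# Toy usage: 3 anchors, each an 8-dimensional parameter vector.
rng = np.random.default_rng(0)
anchors = rng.normal(size=(3, 8))

# The simplex midpoint averages all anchors; vertices recover single anchors.
theta_mid = sample_policy_weights(anchors, [1 / 3, 1 / 3, 1 / 3])
theta_v0 = sample_policy_weights(anchors, [1.0, 0.0, 0.0])
print(theta_mid.shape)  # (8,)
```

Under this kind of construction, adapting to a new task can mean searching over `alpha` (cheap, no new parameters) or adding a new anchor (growing the subspace), which is one way such frameworks trade off memory against plasticity.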
