

Poster in Workshop: Reincarnating Reinforcement Learning

Revisiting Behavior Regularized Actor-Critic

Denis Tarasov · Vladislav Kurenkov · Alexander Nikulin · Sergey Kolesnikov


Abstract:

In recent years, significant advancements have been made in offline reinforcement learning, with a growing number of novel algorithms of varying degrees of complexity. Despite this progress, the significance of specific design choices and the application of common deep learning techniques remain underexplored. In this work, we demonstrate that it is possible to achieve state-of-the-art performance on the D4RL benchmark through a simple set of modifications to the minimalist offline RL approach combined with careful hyperparameter search. Furthermore, our ablations emphasize the importance of minor design choices and hyperparameter tuning, while highlighting the untapped potential of applying deep learning techniques to offline reinforcement learning.
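To make the "behavior regularized actor-critic" idea concrete: such methods constrain the learned policy to stay close to the actions in the offline dataset while still maximizing the critic's value estimate. Below is a minimal sketch of one such actor update in the style of the minimalist offline RL approach (TD3+BC). The network sizes, the penalty weight `bc_alpha`, and all names here are illustrative assumptions, not the exact configuration studied in this work.

```python
# Minimal sketch of a behavior-regularized actor step (TD3+BC style).
# Dimensions, hidden sizes, and `bc_alpha` are illustrative assumptions.
import torch
import torch.nn as nn

state_dim, action_dim, bc_alpha = 17, 6, 2.5  # assumed, not from the paper

actor = nn.Sequential(
    nn.Linear(state_dim, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, action_dim), nn.Tanh(),
)
critic = nn.Sequential(
    nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),
)
actor_opt = torch.optim.Adam(actor.parameters(), lr=3e-4)

def actor_update(states: torch.Tensor, dataset_actions: torch.Tensor) -> float:
    """One actor step: maximize Q while staying close (in MSE) to the
    actions observed in the offline dataset. The critic is assumed to be
    trained separately and is only queried here."""
    pi = actor(states)
    q = critic(torch.cat([states, pi], dim=-1))
    # Normalize the Q term so the behavior-cloning penalty has a
    # dataset-independent scale, as in minimalist offline RL (TD3+BC).
    lam = bc_alpha / q.abs().mean().detach()
    loss = -(lam * q).mean() + ((pi - dataset_actions) ** 2).mean()
    actor_opt.zero_grad()
    loss.backward()
    actor_opt.step()
    return loss.item()

# Example: one update on a random batch standing in for an offline buffer.
batch_s = torch.randn(256, state_dim)
batch_a = torch.rand(256, action_dim) * 2 - 1
print(actor_update(batch_s, batch_a))
```

The ablations described above concern exactly the kind of knobs visible in this sketch: the penalty weight, network depth and width, and other seemingly minor choices around the update.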
