

Virtual presentation / poster accept

In-sample Actor Critic for Offline Reinforcement Learning

Hongchang Zhang · Yixiu Mao · Boyuan Wang · Shuncheng He · Yi Xu · Xiangyang Ji

Keywords: [ Reinforcement Learning ] [ offline reinforcement learning ]


Abstract:

Offline reinforcement learning suffers from the out-of-distribution issue and extrapolation error. Most methods penalize out-of-distribution state-action pairs or regularize the learned policy towards the behavior policy, but cannot guarantee to eliminate extrapolation error. We propose In-sample Actor Critic (IAC), which utilizes sampling-importance resampling to perform in-sample policy evaluation. IAC uses only the target Q-values of actions in the dataset to evaluate the learned policy, thus avoiding extrapolation error. The proposed method performs unbiased policy evaluation and has lower variance than importance sampling in many cases. Empirical results show that IAC achieves performance competitive with state-of-the-art methods on the Gym-MuJoCo locomotion domains and the much more challenging AntMaze domains.
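Below is a minimal sketch, in PyTorch, of how a sampling-importance-resampling (SIR) critic update that queries Q only on dataset actions could look. It is not the authors' released code: the names `q_net`, `q_target`, `policy`, and `behavior_log_prob`, the batch layout, and the factorized-Gaussian policy assumption are all hypothetical stand-ins for whatever IAC uses in practice.

```python
# Hedged sketch of an in-sample SIR critic update (assumed interfaces, not IAC's code).
import torch
import torch.nn.functional as F

def in_sample_critic_update(q_net, q_target, policy, behavior_log_prob,
                            batch, optimizer, gamma=0.99):
    # All tensors come from the offline dataset, including the next action a'.
    s, a, r, s_next, a_next, done = batch

    with torch.no_grad():
        # Importance weights w_i proportional to pi(a'_i | s'_i) / beta(a'_i | s'_i),
        # computed only on next actions that actually appear in the dataset.
        log_pi = policy(s_next).log_prob(a_next).sum(-1)   # assumes factorized Gaussian policy
        log_beta = behavior_log_prob(s_next, a_next)        # assumed learned behavior density
        weights = torch.softmax(log_pi - log_beta, dim=0)

        # Sampling-importance resampling: draw transition indices in proportion
        # to the weights, so the resampled batch approximates on-policy data.
        idx = torch.multinomial(weights, num_samples=s.shape[0], replacement=True)

        # TD target evaluates Q only at in-sample next actions, avoiding
        # extrapolation to out-of-distribution state-action pairs.
        target = r[idx] + gamma * (1.0 - done[idx]) * q_target(s_next[idx], a_next[idx])

    loss = F.mse_loss(q_net(s[idx], a[idx]), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key design point the sketch illustrates is that the target network is never evaluated on actions sampled from the learned policy; the policy only enters through the resampling weights.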
