
Virtual presentation / poster accept

Mind the Gap: Offline Policy Optimization for Imperfect Rewards

Jianxiong Li · Xiao Hu · Haoran Xu · Jingjing Liu · Xianyuan Zhan · Qing-Shan Jia · Ya-Qin Zhang

Keywords: [ Reinforcement Learning ] [ offline policy optimization ] [ reward gap ] [ imperfect rewards ]


Abstract:

The reward function is essential in reinforcement learning (RL), serving as the guiding signal that incentivizes agents to solve given tasks; however, it is also notoriously difficult to design. In many cases, only imperfect rewards are available, which inflicts substantial performance losses on RL agents. In this study, we propose a unified offline policy optimization approach, RGM (Reward Gap Minimization), which can smartly handle diverse types of imperfect rewards. RGM is formulated as a bi-level optimization problem: the upper layer optimizes a reward correction term that performs visitation distribution matching w.r.t. some expert data; the lower layer solves a pessimistic RL problem with the corrected rewards. By exploiting the duality of the lower layer, we derive a tractable algorithm that enables sample-based learning without any online interactions. Comprehensive experiments demonstrate that RGM achieves superior performance to existing methods under diverse settings of imperfect rewards. Furthermore, RGM can effectively correct rewards that are wrong or inconsistent with expert preferences and retrieve useful information from biased rewards. Code is available at https://github.com/Facebear-ljx/RGM.
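To make the bi-level structure described in the abstract concrete, one illustrative way to write it is sketched below. The specific choices here (an f-divergence for distribution matching, a divergence-based pessimism term, and the symbols \Delta, d^{\pi}, d^{E}, d^{D}, \alpha) are assumptions for exposition only; the paper's exact formulation may differ.

\min_{\Delta}\; D_f\!\big(d^{\pi^\ast(\Delta)} \,\|\, d^{E}\big)
\quad \text{s.t.} \quad
\pi^\ast(\Delta) \in \arg\max_{\pi}\;
\mathbb{E}_{(s,a)\sim d^{\pi}}\!\big[r(s,a) + \Delta(s,a)\big]
\;-\; \alpha\, D_f\!\big(d^{\pi} \,\|\, d^{D}\big),

where r is the given (imperfect) reward, \Delta is the learned reward correction term, d^{\pi}, d^{E}, and d^{D} denote the state-action visitation distributions of the learned policy, the expert data, and the offline dataset, and \alpha weights the pessimism term that keeps the policy close to the offline data. The duality mentioned in the abstract would be applied to this lower-level problem to obtain a tractable, sample-based objective.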
