Virtual presentation / top 25% paper

Guarded Policy Optimization with Imperfect Online Demonstrations

Zhenghai Xue · Zhenghao Peng · Quanyi Li · Zhihan Liu · Bolei Zhou

Keywords: [ reinforcement learning ] [ imperfect demonstrations ] [ metadrive simulator ] [ guarded policy optimization ] [ shared control ]


Abstract:

The Teacher-Student Framework (TSF) is a reinforcement learning setting in which a teacher agent guards the training of a student agent by intervening and providing online demonstrations. Assuming the teacher policy is optimal, it intervenes in the student agent's learning process with perfect timing and capability, providing safety guarantees and exploration guidance. Nevertheless, in many real-world settings it is expensive or even impossible to obtain a well-performing teacher policy. In this work, we relax the assumption of a well-performing teacher and develop a new method that can incorporate arbitrary teacher policies with modest or inferior performance. We instantiate an off-policy reinforcement learning algorithm, termed Teacher-Student Shared Control (TS2C), which decides teacher intervention based on trajectory-based value estimation. Theoretical analysis shows that the proposed TS2C algorithm attains efficient exploration and a substantial safety guarantee regardless of the teacher's own performance. Experiments on various continuous control tasks show that our method can exploit teacher policies at different performance levels while maintaining a low training cost. Moreover, the student policy surpasses the imperfect teacher policy, achieving higher accumulated reward in held-out testing environments. Code is available at https://metadriverse.github.io/TS2C.
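The intervention rule described in the abstract — the teacher takes control only when a value estimate suggests the student's behavior would lead to a worse outcome — can be illustrated with a minimal sketch. The function names, the single-step value comparison, and the intervention margin below are illustrative assumptions, not the paper's exact trajectory-based TS2C formulation.

```python
# Hypothetical sketch of value-based teacher intervention in a
# shared-control loop. The threshold rule and all names here are
# simplified assumptions, not the TS2C algorithm as published.

def should_intervene(v_teacher, v_student, margin=0.1):
    """Intervene when the estimated return of the student's intended
    action falls below the teacher's by more than `margin`."""
    return v_teacher - v_student > margin

def shared_control_step(state, student_policy, teacher_policy,
                        value_fn, margin=0.1):
    """Choose which agent's action is actually executed.

    Returns (action, teacher_took_over). Unlike timing- or
    performance-based intervention, control is handed over only when
    the value gap is large, so an imperfect teacher is consulted
    rather than blindly trusted.
    """
    a_student = student_policy(state)
    a_teacher = teacher_policy(state)
    v_student = value_fn(state, a_student)
    v_teacher = value_fn(state, a_teacher)
    if should_intervene(v_teacher, v_student, margin):
        return a_teacher, True   # teacher guards the student
    return a_student, False      # student keeps control
```

With a toy value function `value_fn(state, action) = action`, a student proposing a low-value action is overridden, while a student whose action is within the margin of the teacher's keeps control.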