In-Person Poster presentation / poster accept

Adversarial Imitation Learning with Preferences

Aleksandar Taranovic · Andras Kupcsik · Niklas Freymuth · Gerhard Neumann

MH1-2-3-4 #99

Keywords: [ Reinforcement Learning ] [ adversarial imitation learning ] [ preference learning ] [ learning from demonstration ]


Abstract:

Designing an accurate and explainable reward function for many Reinforcement Learning tasks is a cumbersome and tedious process. Instead, learning policies directly from the feedback of human teachers naturally integrates human domain knowledge into the policy optimization process. However, different feedback modalities, such as demonstrations and preferences, offer distinct benefits and drawbacks. For example, demonstrations convey a lot of information about the task but are often hard or costly to obtain from real experts, while preferences typically contain less information but are usually cheap to generate. Existing methods centered around human feedback mostly focus on a single teaching modality, causing them to miss out on important training data and making them less intuitive to use.

In this paper we propose a novel method for policy learning that incorporates two different feedback types, namely demonstrations and preferences. To this end, we make use of the connection between discriminator training and density ratio estimation to incorporate preferences into the popular Adversarial Imitation Learning paradigm. This insight allows us to express loss functions over both demonstrations and preferences in a unified framework. Besides expert demonstrations, we are also able to learn from imperfect ones and combine them with preferences to achieve improved task performance. We experimentally validate the effectiveness of combining preferences and demonstrations on common benchmarks and show that our method can efficiently learn challenging robot manipulation tasks.
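Below is a minimal sketch of how a discriminator objective over both feedback types could look, assuming a standard GAIL-style binary cross-entropy term for demonstrations and a Bradley-Terry-style term for preference pairs scored by the same discriminator. All names (Discriminator, demonstration_loss, preference_loss, combined_loss, pref_weight) and the exact loss composition are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: one discriminator trained on demonstrations and preferences.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        # Logit D(s, a); sigmoid(D) estimates the probability that (s, a)
        # comes from the expert, i.e. an expert-vs-policy density ratio.
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def demonstration_loss(disc, expert_obs, expert_act, policy_obs, policy_act):
    # GAIL-style discriminator loss: expert samples labeled 1, policy samples 0.
    bce = nn.BCEWithLogitsLoss()
    expert_logits = disc(expert_obs, expert_act)
    policy_logits = disc(policy_obs, policy_act)
    return (bce(expert_logits, torch.ones_like(expert_logits)) +
            bce(policy_logits, torch.zeros_like(policy_logits)))

def preference_loss(disc, pref_obs, pref_act, nonpref_obs, nonpref_act):
    # Bradley-Terry-style loss: the preferred segment should receive a higher
    # cumulative discriminator score than the non-preferred one.
    score_pref = disc(pref_obs, pref_act).sum(dim=-1)        # sum over segment steps
    score_nonpref = disc(nonpref_obs, nonpref_act).sum(dim=-1)
    return -torch.log(torch.sigmoid(score_pref - score_nonpref) + 1e-8).mean()

def combined_loss(disc, demo_batch, pref_batch, pref_weight=1.0):
    # Unified objective over both feedback modalities (weighting is a guess).
    return (demonstration_loss(disc, *demo_batch) +
            pref_weight * preference_loss(disc, *pref_batch))
```

In such a setup, the discriminator (or a reward derived from it) would then be used to optimize the policy with an off-the-shelf RL algorithm, as is standard in adversarial imitation learning.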
