

In-Person Poster presentation / poster accept

Causal Confusion and Reward Misidentification in Preference-Based Reward Learning

Jeremy Tien · Zhiyang He · Zackory Erickson · Anca Dragan · Daniel Brown

MH1-2-3-4 #97

Keywords: [ Social Aspects of Machine Learning ] [ robustness ] [ reward learning ] [ preference-based learning ]


Abstract:

Learning policies via preference-based reward learning is an increasingly popular method for customizing agent behavior, but it has been shown anecdotally to be prone to spurious correlations and reward hacking. While much prior work focuses on causal confusion in reinforcement learning and behavioral cloning, we present a systematic study of causal confusion and reward misidentification when learning from preferences. In particular, we perform a series of sensitivity and ablation analyses on several benchmark domains where rewards learned from preferences achieve minimal test error yet fail to generalize to out-of-distribution states, resulting in poor policy performance when optimized. We find that the presence of non-causal distractor features, noise in the stated preferences, and partial state observability can all exacerbate reward misidentification. We also identify a set of methods for interpreting misidentified learned rewards. In general, we observe that optimizing a misidentified reward drives the policy off the reward model's training distribution, resulting in high predicted (learned) reward but low true reward. These findings illuminate the susceptibility of preference learning to reward misidentification and causal confusion: failing to account for even one of these factors can result in unexpected, undesirable behavior.
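
The abstract does not spell out the learning objective, but preference-based reward learning in this line of work typically fits a reward network with a Bradley-Terry model over trajectory returns. The sketch below is a minimal, illustrative implementation under that assumption; the names (RewardNet, preference_loss, obs_dim) and the synthetic data are hypothetical and not taken from the authors' code.

```python
# Minimal sketch of Bradley-Terry preference-based reward learning (assumed
# standard setup, not the authors' implementation).
import torch
import torch.nn as nn

class RewardNet(nn.Module):
    """Small MLP mapping a state feature vector to a scalar per-step reward."""
    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        # states: (T, obs_dim) -> per-step rewards of shape (T,)
        return self.net(states).squeeze(-1)

def preference_loss(reward_net: RewardNet,
                    traj_a: torch.Tensor,
                    traj_b: torch.Tensor,
                    pref: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry cross-entropy: P(a preferred over b) is the softmax of the
    predicted returns (sums of per-step rewards) of the two trajectories."""
    returns = torch.stack([reward_net(traj_a).sum(), reward_net(traj_b).sum()])
    # pref is 0 if trajectory a is preferred, 1 if trajectory b is preferred.
    return nn.functional.cross_entropy(returns.unsqueeze(0), pref.unsqueeze(0))

if __name__ == "__main__":
    torch.manual_seed(0)
    # Illustrative feature split: some causal features plus distractor features
    # that a misidentified reward might latch onto.
    obs_dim = 8
    net = RewardNet(obs_dim)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    traj_a, traj_b = torch.randn(50, obs_dim), torch.randn(50, obs_dim)
    label = torch.tensor(0)  # synthetic preference: trajectory a preferred
    loss = preference_loss(net, traj_a, traj_b, label)
    loss.backward()
    opt.step()
    print(f"preference loss: {loss.item():.3f}")
```

Under this kind of objective, a low preference-classification loss on held-out pairs does not guarantee that the learned reward weights the causal features rather than distractors or noise, which is the failure mode the paper analyzes when the learned reward is later optimized by a policy.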
