

Poster
in
Affinity Workshop: Tiny Papers Poster Session 6

Toward Computationally Efficient Inverse Reinforcement Learning via Reward Shaping

Lauren H. Cooke · Harvey Klyne · David Bell · Cassidy Laidlaw · Milind Tambe · Finale Doshi-Velez

Halle B #302
Thu 9 May 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Inverse reinforcement learning (IRL) is computationally challenging, with common approaches requiring the solution of multiple reinforcement learning (RL) sub-problems. This work motivates the use of potential-based reward shaping to reduce the computational burden of each RL sub-problem. It serves as a proof of concept that we hope will inspire future developments toward computationally efficient IRL.
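The shaping technique named in the abstract can be sketched in a few lines. This is a generic illustration of potential-based reward shaping (Ng et al., 1999), not the paper's specific construction: the potential function `phi`, the line-world states, and the goal location are all illustrative assumptions.

```python
# Potential-based reward shaping: the shaped reward
#   r'(s, a, s') = r(s, a, s') + gamma * phi(s') - phi(s)
# preserves optimal policies while potentially speeding up the RL
# sub-problems solved inside an IRL loop. Everything below is an
# illustrative toy, not the paper's actual setup.

def shaped_reward(r, s, s_next, phi, gamma=0.99):
    """Add the potential-based shaping term to a base reward."""
    return r + gamma * phi(s_next) - phi(s)

# Toy example: integer states on a line, hypothetical goal at state 10.
phi = lambda s: -abs(10 - s)  # potential grows as we approach the goal

# With gamma = 1, shaping bonuses telescope along any trajectory,
# so the total bonus depends only on the endpoints:
traj = [0, 1, 3, 7, 10]
bonus = sum(shaped_reward(0.0, s, s2, phi, gamma=1.0)
            for s, s2 in zip(traj, traj[1:]))
# bonus equals phi(traj[-1]) - phi(traj[0])
```

The telescoping property is what guarantees that shaping changes the reward signal without changing which policies are optimal.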
