Virtual presentation / poster accept

Risk-Aware Reinforcement Learning with Coherent Risk Measures and Non-linear Function Approximation

Thanh Steve Lam · Arun Verma · Bryan Kian Hsiang Low · Patrick Jaillet

Keywords: [ Theory ] [ Coherent Risk Measures ] [ Non-linear Function Approximation ] [ Risk-Aware Reinforcement Learning ]


Abstract:

We study the risk-aware reinforcement learning (RL) problem in episodic finite-horizon Markov decision processes (MDPs) with unknown transition and reward functions. In contrast to the risk-neutral RL problem, we consider minimizing the risk of receiving low rewards, which arises from the intrinsic randomness of the MDP and imperfect knowledge of the model. Our work provides a unified framework for analyzing the regret of risk-aware RL policies under coherent risk measures in conjunction with non-linear function approximation, yielding the first sub-linear regret bounds in this setting. Finally, we validate our theoretical results via experiments on synthetic and real-world data.
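For intuition, one widely used coherent risk measure is Conditional Value-at-Risk (CVaR), which replaces the expectation of returns with the mean of the worst α-fraction of outcomes. The sketch below is purely illustrative and is not taken from the paper; the function name and the empirical-quantile estimator are assumptions for exposition.

```python
import numpy as np

def cvar(returns, alpha=0.1):
    """Empirical Conditional Value-at-Risk at level alpha.

    Returns the mean of the worst alpha-fraction of sampled returns,
    i.e. the lower tail, matching the goal of penalizing low rewards.
    (Illustrative estimator only; not the paper's algorithm.)
    """
    returns = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil(alpha * len(returns))))  # size of the worst tail
    return returns[:k].mean()
```

A risk-aware agent would optimize `cvar` of its episodic returns rather than their mean, trading some average performance for protection against rare low-reward episodes.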