

In-Person Poster presentation / poster accept

Safe Reinforcement Learning From Pixels Using a Stochastic Latent Representation

Yannick Hogewind · Thiago D. Simão · Tal Kachman · Nils Jansen

MH1-2-3-4 #119

Keywords: [ reinforcement learning ] [ safe reinforcement learning ] [ POMDP ] [ MDP ] [ partially observable Markov decision process ] [ constrained Markov decision process ] [ safety ]


Abstract:

We address the problem of safe reinforcement learning from pixel observations. Inherent challenges in such settings are (1) a trade-off between reward optimization and adhering to safety constraints, (2) partial observability, and (3) high-dimensional observations. We formalize the problem in a constrained, partially observable Markov decision process framework, where an agent obtains distinct reward and safety signals. To address the curse of dimensionality, we employ a novel safety critic using the stochastic latent actor-critic (SLAC) approach. The latent variable model predicts rewards and safety violations, and we use the safety critic to train safe policies. Using well-known benchmark environments, we demonstrate competitive performance over existing approaches in terms of computational requirements, final reward return, and satisfaction of the safety constraints.
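To illustrate the kind of training signal the abstract describes, the following is a minimal sketch of a safety critic defined over a latent state, combined with a Lagrangian-style trade-off between reward and constraint cost. It is not the authors' implementation: the module names, dimensions, and the Lagrangian update are illustrative assumptions, and the latent vectors here are random placeholders standing in for samples from a SLAC-style latent variable model.

# Illustrative sketch (not the paper's code): reward and safety critics over a
# stochastic latent state, traded off via a learned Lagrange multiplier.
import torch
import torch.nn as nn

LATENT_DIM, ACTION_DIM = 32, 4   # hypothetical sizes

class Critic(nn.Module):
    """Q-network over (latent state, action); used for both reward and cost."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + ACTION_DIM, 256), nn.ReLU(),
            nn.Linear(256, 1))

    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=-1)).squeeze(-1)

reward_critic, safety_critic = Critic(), Critic()
log_lagrange = torch.zeros(1, requires_grad=True)  # dual variable, log-space
cost_budget = 25.0                                  # assumed per-episode cost limit

def policy_loss(z, a_sampled, log_prob, alpha=0.1):
    """Actor objective: maximize reward Q, penalize entropy term and the
    Lagrangian-weighted safety Q (higher safety Q = more expected cost)."""
    lam = log_lagrange.exp().detach()
    q_reward = reward_critic(z, a_sampled)
    q_cost = safety_critic(z, a_sampled)
    return (alpha * log_prob - q_reward + lam * q_cost).mean()

def lagrange_loss(episode_cost):
    """Dual step: gradient descent on this loss raises the multiplier
    whenever the observed episode cost exceeds the budget."""
    return -log_lagrange.exp() * (episode_cost - cost_budget)

# Toy usage with random tensors standing in for latent samples and actions.
z = torch.randn(8, LATENT_DIM)
a = torch.randn(8, ACTION_DIM)
log_prob = torch.randn(8)
policy_loss(z, a, log_prob).backward()

The sketch only shows the constrained-objective bookkeeping; in a full agent the critics would be trained from the latent model's predicted rewards and safety violations, and the actor and multiplier would be updated with separate optimizers.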
