

Poster

Exploiting Causal Graph Priors with Posterior Sampling for Reinforcement Learning

Mirco Mutti · Riccardo De Santi · Marcello Restelli · Alexander Marx · Giorgia Ramponi

Halle B #153
Thu 9 May 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Posterior sampling allows the exploitation of prior knowledge about the environment's transition dynamics to improve the sample efficiency of reinforcement learning. The prior is typically specified as a class of parametric distributions, whose design can be cumbersome in practice, often resulting in the choice of uninformative priors. In this work, we propose a novel posterior sampling approach in which the prior is given as a (partial) causal graph over the environment's variables. Such a graph is often more natural to design, e.g., by listing known causal dependencies between biometric features in a medical treatment study. Specifically, we propose a hierarchical Bayesian procedure, called C-PSRL, which simultaneously learns the full causal graph at the higher level and the parameters of the resulting factored dynamics at the lower level. We provide an analysis of the Bayesian regret of C-PSRL that explicitly connects the regret rate with the degree of prior knowledge. Our numerical evaluation, conducted in illustrative domains, confirms that C-PSRL strongly improves the efficiency of posterior sampling with an uninformative prior while performing close to posterior sampling with the full causal graph.
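To make the two-level structure concrete, the following is a minimal, hypothetical sketch of hierarchical posterior sampling in the spirit of the abstract, not the authors' implementation. It assumes a toy factored MDP with binary state variables, Beta-Bernoulli posteriors over each variable's transition probabilities, a uniform prior over candidate parent sets (the partial causal graph), and greedy one-step planning as a stand-in for a real planner; all names and dynamics are illustrative.

```python
from math import lgamma
import numpy as np

rng = np.random.default_rng(0)

n_vars, n_actions, horizon, n_episodes = 2, 2, 10, 50

# Partial causal-graph prior (assumption): each variable has at most one parent.
candidate_parents = [[()] + [(j,) for j in range(n_vars)] for _ in range(n_vars)]

# Beta(1,1) pseudo-counts per variable, per candidate parent set,
# indexed by (parent configuration, action, next value).
counts = [{ps: np.ones((2 ** len(ps), n_actions, 2)) for ps in candidate_parents[i]}
          for i in range(n_vars)]

def cfg_index(state, parents):
    """Encode the parents' binary values as an integer configuration index."""
    return sum(int(state[p]) << k for k, p in enumerate(parents))

def log_evidence(c):
    """Beta(1,1)-Bernoulli log marginal likelihood of the transitions in counts c."""
    le = 0.0
    for a0, a1 in c.reshape(-1, 2):
        le += lgamma(a1) + lgamma(a0) - lgamma(a0 + a1) + lgamma(2.0)
    return le

def true_step(state, action):
    """Hypothetical ground-truth dynamics: X0 flips more often under action 1,
    X1 tends to copy X0. Reward is the number of variables equal to 1."""
    nxt = state.copy()
    if rng.random() < (0.8 if action == 1 else 0.2):
        nxt[0] = 1 - state[0]
    nxt[1] = state[0] if rng.random() < 0.9 else 1 - state[0]
    return nxt, float(nxt.sum())

for ep in range(n_episodes):
    # Higher level: sample one parent set per variable from the graph posterior
    # (uniform prior times Beta-Bernoulli marginal likelihood).
    graph = []
    for i in range(n_vars):
        scores = np.array([log_evidence(counts[i][ps]) for ps in candidate_parents[i]])
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        graph.append(candidate_parents[i][rng.choice(len(probs), p=probs)])

    # Lower level: sample factored transition parameters given the sampled graph.
    theta = [rng.beta(counts[i][graph[i]][..., 1], counts[i][graph[i]][..., 0])
             for i in range(n_vars)]

    state = np.zeros(n_vars, dtype=int)
    for t in range(horizon):
        # Greedy one-step planning under the sampled model (placeholder planner).
        def exp_reward(a):
            return sum(theta[i][cfg_index(state, graph[i]), a] for i in range(n_vars))
        action = int(np.argmax([exp_reward(a) for a in range(n_actions)]))

        nxt, _ = true_step(state, action)
        # Posterior update: refresh the counts of every candidate parent set.
        for i in range(n_vars):
            for ps in candidate_parents[i]:
                counts[i][ps][cfg_index(state, ps), action, int(nxt[i])] += 1
        state = nxt
```

In this sketch the graph-level posterior is driven entirely by the marginal likelihood of each candidate parent set, while the parameter-level posterior is the usual conjugate update per factor; both simplifications are assumptions of the example rather than details taken from the paper.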
