

Poster (Contributed) in Workshop: AI for Agent-Based Modelling (AI4ABM)

Interpretable Reinforcement Learning via Neural Additive Models for Inventory Management

Julien Siems


Abstract:

The COVID-19 pandemic has highlighted the importance of supply chains and the role of digital management in reacting to dynamic changes in the environment. In this work, we focus on developing \emph{dynamic} ordering policies for multi-echelon inventory optimization. Traditional inventory optimization methods aim to determine a \emph{static} reordering policy; as a result, these policies cannot adjust to dynamic changes such as those observed during the COVID-19 crisis. On the other hand, conventional strategies offer the advantage of being interpretable, which is a crucial feature for supply chain managers who must communicate decisions to their stakeholders. To address this limitation, we propose an interpretable reinforcement learning approach that aims to be as interpretable as the traditional static policies while being as flexible and environment-agnostic as other deep learning-based reinforcement learning solutions. We propose to use Neural Additive Models as an interpretable dynamic policy of a reinforcement learning agent, showing that this approach is competitive with a standard fully connected policy. Finally, we use the interpretability property to gain insights into a complex ordering strategy for a simple, linear three-echelon inventory supply chain.
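
To make the idea concrete, the sketch below shows how a Neural Additive Model can serve as a policy network: each scalar state feature (e.g. an inventory level or backlog at one echelon) is passed through its own small subnetwork, and the per-feature contributions are summed to produce the order quantities. This is only an illustrative PyTorch sketch under assumed names (`state_dim`, `n_echelons`, `hidden`), not the authors' implementation or hyperparameters.

```python
import torch
import torch.nn as nn


class NAMPolicy(nn.Module):
    """Additive policy: one small MLP per state feature; their outputs sum
    to the action for each echelon. Plotting a subnetwork's output against
    its input feature shows that feature's isolated effect on the ordering
    decision, which is where the interpretability comes from."""

    def __init__(self, state_dim: int, n_echelons: int, hidden: int = 32):
        super().__init__()
        # One subnetwork per scalar state feature (hypothetical features:
        # inventory level, backlog, in-transit stock at each echelon).
        self.feature_nets = nn.ModuleList(
            nn.Sequential(
                nn.Linear(1, hidden),
                nn.ReLU(),
                nn.Linear(hidden, n_echelons),
            )
            for _ in range(state_dim)
        )
        self.bias = nn.Parameter(torch.zeros(n_echelons))

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # state: (batch, state_dim) -> contributions: (batch, state_dim, n_echelons)
        contributions = torch.stack(
            [net(state[:, i : i + 1]) for i, net in enumerate(self.feature_nets)],
            dim=1,
        )
        # Order quantities are the sum of per-feature contributions plus a bias.
        return contributions.sum(dim=1) + self.bias


if __name__ == "__main__":
    policy = NAMPolicy(state_dim=6, n_echelons=3)
    dummy_state = torch.randn(4, 6)
    print(policy(dummy_state).shape)  # torch.Size([4, 3])
```

In practice, such a module would replace the fully connected policy network inside a standard policy-gradient or actor-critic training loop; the additive structure is what allows the learned ordering behaviour to be inspected feature by feature.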
