

Poster in Workshop: Setting up ML Evaluation Standards to Accelerate Progress

A Survey On Uncertainty Toolkits For Deep Learning

Maximilian Pintz · Joachim Sicking · Maximilian Poretschkin · Maram Akila


Abstract:

The success of deep learning (DL) fostered the creation of unifying frameworks such as TensorFlow or PyTorch as much as it was driven by their creation in return. Having common building blocks facilitates the exchange of, e.g., models or concepts and makes developments more easily replicable. Nonetheless, robust and reliable evaluation and assessment of DL models has often proven challenging. This is at odds with their increasing safety relevance, which recently culminated in the field of “trustworthy ML”. We believe that, among other measures, further unification of evaluation and safeguarding methodologies in terms of toolkits, i.e., small and specialized framework derivatives, might positively impact problems of trustworthiness as well as reproducibility. To this end, we present the first survey on toolkits for uncertainty estimation (UE) in DL, as UE forms a cornerstone in assessing model reliability. We investigate 11 toolkits with respect to modeling and evaluation capabilities, providing an in-depth comparison for the 3 most promising ones, namely Pyro, Tensorflow Probability, and Uncertainty Quantification 360. While the first two provide a large degree of flexibility and seamless integration into their respective framework, the last one has the broader methodological scope.
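To make the notion of an uncertainty estimation toolkit concrete, the sketch below shows one pattern supported by TensorFlow Probability: a Keras regression model whose final DistributionLambda layer outputs a Normal distribution, so each prediction carries its own standard deviation. The toy data, architecture, and hyperparameters are illustrative assumptions, not taken from the survey, and the snippet assumes compatible TensorFlow and TensorFlow Probability versions.

```python
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Toy 1-D regression data with input-dependent noise (purely illustrative).
x = np.linspace(-1.0, 1.0, 200).astype(np.float32)[:, None]
y = np.sin(3 * x) + 0.1 * (1 + x) * np.random.randn(200, 1).astype(np.float32)

# The last layer parameterizes a Normal distribution, so the network predicts
# a mean and a standard deviation (aleatoric uncertainty) for every input.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2),  # outputs: loc and pre-softplus scale
    tfp.layers.DistributionLambda(
        lambda t: tfd.Normal(loc=t[..., :1],
                             scale=1e-3 + tf.math.softplus(t[..., 1:]))),
])

# Maximum-likelihood training: minimize the negative log-likelihood of the
# targets under the predicted distribution.
model.compile(optimizer="adam", loss=lambda y_true, dist: -dist.log_prob(y_true))
model.fit(x, y, epochs=50, verbose=0)

# The model output is a distribution object; mean and stddev give the
# point prediction and its estimated uncertainty.
pred = model(x)
print(pred.mean().numpy()[:3], pred.stddev().numpy()[:3])
```

Pyro offers analogous probabilistic building blocks and variational inference utilities on top of PyTorch, while Uncertainty Quantification 360 gathers a wider range of estimators and evaluation metrics behind a common interface, which reflects the flexibility-versus-scope trade-off noted in the abstract.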
