

Poster in Workshop: Neurosymbolic Generative Models (NeSy-GeMs)

A-NeSI: A Scalable Approximate Method for Probabilistic Neurosymbolic Inference

Emile van Krieken · Thiviyan Thanapalasingam · Jakub Tomczak · Frank van Harmelen · Annette ten Teije


Abstract:

We study the problem of combining neural networks with symbolic reasoning. Frameworks for Probabilistic Neurosymbolic Learning (PNL), such as DeepProbLog, perform exact inference in exponential time, which limits their scalability. We introduce Approximate Neurosymbolic Inference (A-NeSI): a new framework for PNL that uses deep generative modelling for scalable approximate inference. A-NeSI 1) performs approximate inference in polynomial time; 2) is trained on data generated by the background knowledge; 3) can generate symbolic explanations of predictions; and 4) can guarantee the satisfaction of logical constraints at test time. Our experiments show that A-NeSI is the first end-to-end method to scale the Multi-digit MNISTAdd benchmark to sums of 15 MNIST digits.
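To make the core idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of training an approximate inference model for a MNISTAdd-style task: rather than enumerating all digit assignments exactly, a neural network is trained on samples labelled by the symbolic background knowledge (here, integer addition), so that test-time inference is a single polynomial-time forward pass. All names, sizes, and architecture choices are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_DIGITS = 2                       # digits per operand (illustrative)
N_CLASSES = 10                     # digit classes 0..9
N_SUMS = 2 * (10 ** N_DIGITS) - 1  # possible sums 0..198

def symbolic_sum(digits: torch.Tensor) -> torch.Tensor:
    """Background knowledge: interpret two rows of digits as numbers and add them."""
    weights = 10 ** torch.arange(N_DIGITS - 1, -1, -1)
    a = (digits[:, 0] * weights).sum(-1)
    b = (digits[:, 1] * weights).sum(-1)
    return a + b

# Inference model q(y | P): maps perception-style digit probabilities to a
# distribution over sums in one forward pass (assumed architecture).
inference_model = nn.Sequential(
    nn.Linear(2 * N_DIGITS * N_CLASSES, 256),
    nn.ReLU(),
    nn.Linear(256, N_SUMS),
)
opt = torch.optim.Adam(inference_model.parameters(), lr=1e-3)

for step in range(1000):
    # 1) Sample digit probabilities (stand-in for a perception network's outputs).
    probs = torch.softmax(torch.randn(64, 2, N_DIGITS, N_CLASSES), dim=-1)
    # 2) Sample concrete digit assignments ("worlds") from those probabilities.
    worlds = torch.distributions.Categorical(probs=probs).sample()
    # 3) Label each world with the symbolic background knowledge.
    targets = symbolic_sum(worlds)
    # 4) Train the inference model to predict the symbolic output from the probabilities.
    logits = inference_model(probs.flatten(1))
    loss = F.cross_entropy(logits, targets)
    opt.zero_grad(); loss.backward(); opt.step()
```

The point of the sketch is the data-generation loop: training data for the inference model comes for free from the symbolic program, so no enumeration over the exponentially many digit assignments is ever required.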
