

Poster

LogicMP: A Neuro-symbolic Approach for Encoding First-order Logic Constraints

Weidi Xu · Jingwei Wang · Lele Xie · Jianshan He · Hongting Zhou · Taifeng Wang · Xiaopei Wan · Jingdong Chen · Chao Qu · Wei Chu

Halle B #216
Wed 8 May 1:45 a.m. PDT — 3:45 a.m. PDT

Abstract:

Integrating first-order logic constraints (FOLCs) with neural networks is a crucial but challenging problem, since it requires modeling the intricate correlations needed to satisfy the constraints. This paper proposes a novel neural layer, LogicMP, which performs mean-field variational inference over a Markov Logic Network (MLN). It can be plugged into any off-the-shelf neural network to encode FOLCs while retaining modularity and efficiency. By exploiting the structure and symmetries in MLNs, we theoretically demonstrate that our well-designed, efficient mean-field iterations greatly mitigate the difficulty of MLN inference, reducing inference from sequential calculation to a series of parallel tensor operations. Empirical results on three kinds of tasks over images, graphs, and text show that LogicMP outperforms advanced competitors in both performance and efficiency.
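For intuition, here is a minimal, hypothetical sketch of the idea the abstract describes: one mean-field update for a single FOLC, computed for all ground atoms at once as a tensor contraction instead of a sequential loop over ground clauses. The example rule (forall x, y: Smokes(x) AND Friends(x, y) implies Smokes(y)), the weight w, and all variable names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only -- not the LogicMP code. One mean-field step for
# the MLN rule: forall x, y: Smokes(x) & Friends(x, y) -> Smokes(y), weight w.
import numpy as np

n = 5                                    # number of entities (assumed)
rng = np.random.default_rng(0)
w = 1.5                                  # MLN rule weight (assumed)

# Current mean-field marginals q(atom = True); in LogicMP's setting these
# would come from / feed back into a neural network.
Q_smokes = rng.uniform(size=n)           # q(Smokes(x) = 1), shape (n,)
Q_friends = rng.uniform(size=(n, n))     # q(Friends(x, y) = 1), shape (n, n)
prior_logits = rng.normal(size=n)        # stand-in for the network's logits

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Expected clause "firing strength" toward every head atom Smokes(y):
# sum_x q(Smokes(x)) * q(Friends(x, y)), done for all y in one matrix
# product -- the parallel tensor operation the abstract refers to,
# replacing a loop over all n^2 ground clauses.
msg_to_head = w * (Q_smokes @ Q_friends)           # shape (n,)

# Mean-field update: combine the network's evidence with the logic message.
Q_smokes_new = sigmoid(prior_logits + msg_to_head)
print(Q_smokes_new)
```

In practice such updates would be iterated a few times and batched over many rules; the point of the sketch is only that each iteration reduces to dense tensor algebra, which is what makes the layer efficient enough to plug into standard networks.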
