

Virtual presentation / poster accept

Fundamental Limits in Formal Verification of Message-Passing Neural Networks

Marco Sälzer · Martin Lange

Keywords: [ Social Aspects of Machine Learning ] [ formal verification ] [ graph neural networks ]


Abstract:

Output reachability and adversarial robustness are among the most relevant safety properties of neural networks. We show that in the context of Message Passing Neural Networks (MPNN), a common Graph Neural Network (GNN) model, formal verification is impossible. In particular, we show that output reachability of graph-classifier MPNN, working over graphs of unbounded size, non-trivial degree and sufficiently expressive node labels, cannot be verified formally: there is no algorithm that answers correctly (with yes or no), given an MPNN, whether there exists some valid input to the MPNN such that the corresponding output satisfies a given specification. However, we also show that output reachability and adversarial robustness of node-classifier MPNN can be verified formally when a limit on the degree of input graphs is given a priori. We discuss the implications of these results for obtaining a complete picture of the possibility, in principle, of formally verifying GNN, depending on the expressiveness of the involved GNN models and input-output specifications.
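To make the undecidable question precise, the following is a minimal formalization sketch of the output reachability decision problem described in the abstract; the notation (N for the MPNN, \varphi for the output specification, \mathcal{G} for the class of admissible input graphs) is ours and not taken verbatim from the paper.

% Output reachability as a decision problem (sketch, notation assumed):
% given a graph-classifier MPNN N and an output specification \varphi,
% decide whether some admissible input graph yields an output satisfying \varphi.
\[
  \textsc{OutputReach}(N, \varphi)\colon\quad
  \text{does there exist } G \in \mathcal{G} \text{ such that } N(G) \models \varphi\,?
\]
% The paper's negative result: when \mathcal{G} contains graphs of unbounded size,
% non-trivial degree and sufficiently expressive node labels, no algorithm decides
% OutputReach for all pairs (N, \varphi). Bounding the degree of input graphs a
% priori restores formal verifiability for node-classifier MPNN.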
