

Poster in Workshop: VerifAI: AI Verification in the Wild

Neural Abstract Interpretation

Shaurya Gomber · Gagandeep Singh


Abstract: Abstract interpretation is a widely used method for the formal analysis and verification of programs and neural networks. However, designing efficient abstract transformers for widely-used expressive domains such as Octagon and Polyhedra is challenging, as one needs to carefully balance the fundamental trade-off between the cost, soundness, and precision of the transformer for downstream tasks. Further, scalable implementations involve intricate performance optimizations like Octagon and Polyhedra decomposition. Given the inherent complexity of abstract transformers and the proven capability of neural networks to effectively approximate complex functions, we envision and propose the concept of $\textit{Neural Abstract Transformers}$: neural networks that serve as abstract transformers. The proposed Neural Abstract Interpretation ($\texttt{NAI}$) framework provides supervised and unsupervised methods to learn efficient neural transformers $\textit{automatically}$, which reduces development costs. We instantiate the $\texttt{NAI}$ framework for two widely used numerical domains: Interval and Octagon. Evaluations on these domains demonstrate the effectiveness of the $\texttt{NAI}$ framework in learning sound and precise neural transformers. An added advantage of our technique is that neural transformers are differentiable, unlike hand-crafted alternatives. As an example, we showcase how this differentiability allows framing invariant generation as a learning problem, enabling neural transformers to generate valid octagonal invariants for a program.
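
To make the idea concrete, below is a minimal sketch (not the authors' implementation) of a neural abstract transformer for the Interval domain, written in PyTorch. The operation chosen (interval multiplication), the network `NeuralMulTransformer`, the sampling routine, and the loss weights are all illustrative assumptions rather than the actual $\texttt{NAI}$ API; it only shows how a neural transformer could be trained against a hand-crafted exact transformer with soundness and precision terms in the loss.

```python
# Illustrative sketch only: a neural Interval-domain transformer for x * y,
# trained to approximate the exact hand-crafted transformer. Names, loss
# weights, and the training loop are assumptions, not the NAI framework's API.
import torch
import torch.nn as nn

def exact_mul_transformer(a, b):
    """Hand-crafted (sound, most precise) interval transformer for x * y.
    a, b: tensors of shape (batch, 2) holding [lower, upper] bounds."""
    prods = torch.stack([a[:, 0] * b[:, 0], a[:, 0] * b[:, 1],
                         a[:, 1] * b[:, 0], a[:, 1] * b[:, 1]], dim=1)
    return torch.stack([prods.min(dim=1).values, prods.max(dim=1).values], dim=1)

class NeuralMulTransformer(nn.Module):
    """Small MLP mapping the bounds of two input intervals to output bounds."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(4, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2))

    def forward(self, a, b):
        out = self.net(torch.cat([a, b], dim=1))
        lo, width = out[:, 0], nn.functional.softplus(out[:, 1])
        return torch.stack([lo, lo + width], dim=1)  # enforce lower <= upper

def sample_intervals(batch):
    """Sample random valid intervals [lo, hi] with lo <= hi."""
    lo = torch.empty(batch, 1).uniform_(-2, 2)
    hi = lo + torch.empty(batch, 1).uniform_(0, 2)
    return torch.cat([lo, hi], dim=1)

model = NeuralMulTransformer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    a, b = sample_intervals(256), sample_intervals(256)
    pred = model(a, b)
    exact = exact_mul_transformer(a, b)
    # Supervised term: regress onto the exact transformer's output.
    sup_loss = nn.functional.mse_loss(pred, exact)
    # Soundness penalty: predicted bounds should contain the exact bounds.
    sound_loss = (torch.relu(exact[:, 0] - pred[:, 0]) +
                  torch.relu(pred[:, 0] - exact[:, 0]) * 0  # keep lower below exact lower
                  ).mean() if False else (
                  torch.relu(pred[:, 0] - exact[:, 0]) +
                  torch.relu(exact[:, 1] - pred[:, 1])).mean()
    # Precision penalty: discourage overly wide output intervals.
    prec_loss = (pred[:, 1] - pred[:, 0]).mean()
    loss = sup_loss + 10.0 * sound_loss + 0.01 * prec_loss
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the learned transformer is an ordinary differentiable network, gradients can also be propagated through it, which is the property the abstract points to when framing invariant generation as a learning problem.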
