Poster in Workshop: VerifAI: AI Verification in the Wild
Verifying Omega-regular Properties of Neural Network-Controlled Systems via Proof Certificates
Peixin Wang · Jianhao Bai · Dapeng Zhi · Min Zhang · Luke Ong
Abstract:
With recent advances in deep reinforcement learning, neural networks are now widely adopted as control policies in control systems. However, when such neural network-controlled systems (NNCSs) are deployed in the real world, new concerns arise about whether their temporal behaviors are consistent with user-defined specifications. In this work, we consider formal verification of temporal properties, including linear temporal logic (LTL) and, more generally, $\omega$-regular properties, in NNCSs, leveraging proof certificates whose existence certifies that an NNCS satisfies the property. Concretely, given an NNCS and an $\omega$-regular property, (i) we formally verify the satisfaction of the property; and (ii) when the NNCS operates in a noisy environment and becomes stochastic, we verify whether it satisfies the property almost surely (i.e., with probability $1$). Both are achieved via closure certificates, which are real-valued functions over pairs of NNCS states. We implement our approaches in a prototype and showcase their efficacy on several popular benchmarks. We also outline further results on quantitative verification of stochastic NNCSs.
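To illustrate the idea of a closure certificate as a real-valued function over pairs of states, here is a minimal, hypothetical sketch in Python. It brute-force checks closure-certificate-style conditions on a finite abstraction of a system: the function `B` is positive on one-step transitions, positivity is transitive (so `B` is positive on all reachable pairs), and `B(q, q) <= 0` for every accepting state `q`, which rules out any lasso through accepting states. The function name, data layout, and exact side conditions are illustrative assumptions, not the authors' implementation.

```python
def is_closure_certificate(states, transitions, accepting, B, eps=1e-9):
    """Check closure-certificate conditions on a finite transition system.

    states      : iterable of hashable states
    transitions : set of (s, t) one-step transition pairs
    accepting   : set of accepting states (e.g., of a complement automaton)
    B           : dict mapping every state pair (s, t) to a real value
    """
    # (1) One-step condition: B is strictly positive on every transition.
    for (s, t) in transitions:
        if not B[(s, t)] > eps:
            return False
    # (2) Transitivity: positivity of B propagates along paths, so B is
    #     positive on every pair (s, u) where u is reachable from s.
    for s in states:
        for t in states:
            for u in states:
                if B[(s, t)] > eps and B[(t, u)] > eps and not B[(s, u)] > eps:
                    return False
    # (3) No accepting lasso: B(q, q) <= 0 for accepting q means q cannot
    #     reach itself, so accepting states are visited only finitely often.
    for q in accepting:
        if B[(q, q)] > eps:
            return False
    return True
```

For instance, on a three-state chain `0 -> 1 -> 2` with a self-loop on `2`, a `B` that is positive exactly on reachable pairs is a valid certificate when state `1` is accepting (it lies on no cycle), but fails when the looping state `2` is accepting.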