Implementations of artificial neural networks (ANNs) can lead to failures that are hard to predict at design time, since ANNs are highly parallel and their parameters are difficult to interpret. Here, we develop and evaluate a novel symbolic verification framework using incremental bounded model checking (BMC), satisfiability modulo theories (SMT), and invariant inference to obtain adversarial cases and validate coverage methods in a multi-layer perceptron (MLP). We exploit incremental BMC based on interval analysis to compute bounds on each neuron's input; these bounds are then propagated to derive each neuron's output, which in turn becomes the input of the next layer. This paper describes the first bit-precise symbolic verification framework, based on invariant inference, that reasons over actual implementations of ANNs in CUDA, thereby providing guarantees about finite-precision arithmetic and its rounding errors, which are routinely ignored in the existing literature. We have implemented the proposed approach on top of the Efficient SMT-based Bounded Model Checker (ESBMC); our experimental results show that it can successfully verify safety properties in actual implementations of ANNs and generate real adversarial cases in MLPs. Our approach was able to verify and produce adversarial examples for 85.8% of 21 test cases considering different input images, and for 100% of the properties related to coverage methods. Although our verification time is higher than that of existing approaches, our methodology accounts for fixed-point implementation aspects that are disregarded by state-of-the-art verification methodologies.
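To make the bound-propagation step concrete, here is a minimal floating-point sketch of interval propagation through a ReLU MLP. The paper's framework instead reasons bit-precisely about fixed-point CUDA implementations inside an SMT solver, so the function names and NumPy arithmetic below are illustrative assumptions only:

```python
import numpy as np

def relu_interval(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def affine_interval(W, b, lo, hi):
    """Propagate [lo, hi] through y = W x + b by interval arithmetic:
    positive weights take the same endpoint, negative weights the opposite one."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    return out_lo, out_hi

def mlp_bounds(layers, lo, hi):
    """Layer-by-layer bound propagation: each layer's output interval
    becomes the next layer's input interval."""
    for W, b in layers:
        lo, hi = affine_interval(W, b, lo, hi)
        lo, hi = relu_interval(lo, hi)
    return lo, hi
```

Such per-neuron bounds can then be asserted as invariants, shrinking the search space the bounded model checker must explore when hunting for adversarial cases.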
The analysis of parametrised systems is a growing field in verification, but the analysis of parametrised probabilistic systems is still in its infancy. This is partly because it is much harder: while there are beautiful cut-off results for non-stochastic systems that allow one to focus only on small instances, there is little hope that such approaches extend to the quantitative analysis of probabilistic systems, as the probabilities depend on the size of a system. The unicorn would be an automatic transformation of a parametrised system into a formula that allows one to plot, say, the likelihood of reaching a goal, or the expected cost of doing so, against the parameters of the system. While such analyses exist for narrow classes of systems, such as waiting queues, we aim both lower (stepwise exploring the parameter space) and higher (considering general systems). The novelty is to heavily exploit the similarity between instances of parametrised systems: when the parameter grows, the system for the smaller parameter is, broadly speaking, present in the larger system. We use this observation to guide the elegant state-elimination method for parametric Markov chains in such a way that the model transformations start with those parts of the system that are stable under increasing the parameter. We argue that this can lead to a very cheap iterative way to analyse parametric systems, show how this approach extends to reconfigurable systems, and demonstrate on two benchmarks that this approach scales.
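As a concrete illustration of the state-elimination method on a parametric chain, here is a small sympy sketch; the function name and the dictionary encoding of transitions are our own illustrative choices, not the paper's implementation:

```python
import sympy as sp

def eliminate_state(P, s):
    """One step of state elimination for a parametric Markov chain.

    P maps (u, v) to a symbolic transition probability. Removing state s
    redirects every path u -> s -> v into a direct edge, rescaled by the
    self-loop at s:  P'(u, v) = P(u, v) + P(u, s) * P(s, v) / (1 - P(s, s)).
    """
    loop = P.get((s, s), 0)
    states = {u for (u, v) in P} | {v for (u, v) in P}
    for u in states - {s}:
        for v in states - {s}:
            p_us, p_sv = P.get((u, s), 0), P.get((s, v), 0)
            if p_us != 0 and p_sv != 0:
                P[(u, v)] = sp.simplify(P.get((u, v), 0) + p_us * p_sv / (1 - loop))
    for key in [k for k in P if s in k]:   # drop all edges touching s
        del P[key]
    return P

# toy chain: 0 --p--> 1 --1--> goal; eliminating state 1 yields 0 --p--> goal
p = sp.Symbol('p')
P = {(0, 1): p, (1, 'goal'): sp.Integer(1)}
print(eliminate_state(P, 1))               # {(0, 'goal'): p}
```

The paper's insight is about the *order* of such eliminations: eliminating first the states that remain unchanged as the parameter grows lets the symbolic work for one instance be reused for the next.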
This paper presents two formal models of the Data Encryption Standard (DES), the first written in the international standard LOTOS and the second in the more recent process calculus LNT. Both models encode the DES in the style of asynchronous circuits, i.e., the data-flow blocks of the DES algorithm are represented by processes communicating via rendezvous. To ensure the correctness of the models, several techniques have been applied, including model checking, equivalence checking, and comparing the results produced by a prototype automatically generated from the formal model with those of existing implementations of the DES. The complete code of the models is provided as appendices and is also available on the website of the CADP verification toolbox.
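As a toy analogue of this modelling style, the sketch below implements a synchronous rendezvous channel and a single XOR data-flow block as a communicating process, in Python rather than LOTOS/LNT; the channel class and the block are our own illustrative constructions:

```python
import threading, queue

class Rendezvous:
    """Zero-buffer channel: send() blocks until a matching recv(),
    mimicking the synchronous rendezvous of LOTOS/LNT gates."""
    def __init__(self):
        self._slot, self._ack = queue.Queue(1), queue.Queue(1)

    def send(self, value):
        self._slot.put(value)
        self._ack.get()              # wait until the receiver took the value

    def recv(self):
        value = self._slot.get()
        self._ack.put(None)          # release the blocked sender
        return value

def xor_block(left, right, out):
    """A data-flow block as a process: synchronize on both input gates,
    compute, and offer the result on the output gate."""
    while True:
        out.send(left.recv() ^ right.recv())

a, b, c = Rendezvous(), Rendezvous(), Rendezvous()
threading.Thread(target=xor_block, args=(a, b, c), daemon=True).start()
threading.Thread(target=a.send, args=(0b1010,), daemon=True).start()
threading.Thread(target=b.send, args=(0b0110,), daemon=True).start()
print(bin(c.recv()))                 # 0b1100
```

In the actual models, each DES building block (expansion, S-boxes, permutations, key schedule) is such a process, and correctness is checked against reference implementations rather than by eye.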
Fairness is crucial for neural networks used in applications with important societal implications. Recently, there have been multiple attempts to improve the fairness of neural networks, with a focus on fairness testing (e.g., generating individual discriminatory instances) and fairness training (e.g., enhancing fairness through augmented training). In this work, we propose an approach to formally verify neural networks against fairness properties, focusing on independence-based fairness such as group fairness. Our method is built upon an approach for learning Markov chains from a user-provided neural network (i.e., a feed-forward or recurrent neural network) that is guaranteed to facilitate sound analysis. The learned Markov chain not only allows us to verify (with a Probably Approximately Correct guarantee) whether the neural network is fair, but also facilitates a sensitivity analysis that helps explain why fairness is violated. We demonstrate that, with our analysis results, the neural weights can be optimized to improve fairness. Our approach has been evaluated on multiple models trained on benchmark datasets, and the experimental results show that it is effective and efficient.
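As a back-of-the-envelope illustration of such a PAC-style guarantee, the sketch below estimates an independence-based fairness gap by sampling and reports how many samples a Hoeffding bound requires for a given accuracy. The function names and sampling interface are assumptions of ours; the paper works with a learned Markov chain rather than direct sampling:

```python
import numpy as np

def fairness_gap(model, sample_inputs, sensitive_idx, n):
    """Estimate the group-fairness gap |P(Y=1 | A=0) - P(Y=1 | A=1)|
    from n sampled inputs; `model` returns binary predictions."""
    xs = sample_inputs(n)                      # n inputs from the data distribution
    ys = model(xs)                             # predictions in {0, 1}
    a = xs[:, sensitive_idx]                   # binary sensitive attribute
    return abs(ys[a == 0].mean() - ys[a == 1].mean())

def samples_needed(epsilon, delta):
    """Hoeffding bound: with this many samples per group, each estimated
    rate is within epsilon of the truth with probability >= 1 - delta."""
    return int(np.ceil(np.log(4 / delta) / (2 * epsilon ** 2)))

print(samples_needed(0.01, 0.05))              # 21911 samples per group
```

Working on a learned Markov chain instead of raw samples is what enables the paper's sensitivity analysis: one can ask which transition probabilities, and hence which weights, contribute most to the fairness gap.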
This report summarizes the second International Verification of Neural Networks Competition (VNN-COMP 2021), held as part of the 4th Workshop on Formal Methods for ML-Enabled Autonomous Systems, which was collocated with the 33rd International Conference on Computer-Aided Verification (CAV). Twelve teams participated in the competition, whose goal is to provide an objective comparison of state-of-the-art methods in neural network verification in terms of scalability and speed. To this end, we used standard formats (ONNX for neural networks and VNNLIB for specifications), standard hardware (all tools were run by the organizers on AWS), and tool parameters provided by the tool authors. This report summarizes the rules, benchmarks, participating tools, results, and lessons learned from this competition.
This paper is concerned with the utilization of deterministically modeled chemical reaction networks for the implementation of (feed-forward) neural networks. We develop a general mathematical framework and prove that the ordinary differential equations (ODEs) associated with certain reaction network implementations of neural networks have desirable properties, including (i) existence of unique positive fixed points that are smooth in the parameters of the model (necessary for gradient descent), and (ii) fast convergence to the fixed point regardless of initial condition (necessary for efficient implementation). We do so by first making a connection between neural networks and fixed points for systems of ODEs, and then by constructing reaction networks with the correct associated set of ODEs. We demonstrate the theory by constructing a reaction network that implements a neural network with a smoothed ReLU activation function, though we also demonstrate how to generalize the construction to allow for other activation functions (each with the desirable properties listed previously). As there are multiple types of networks utilized in this paper, we also give a careful introduction to both reaction networks and neural networks, in order to disambiguate the overlapping vocabulary in the two settings and to clearly highlight the role of each network's properties.
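To see the fixed-point idea in miniature, the toy sketch below integrates dy/dt = sigma(z) - y, whose unique fixed point is the smoothed-ReLU value sigma(z) and which converges exponentially fast from any initial condition. This is only an illustrative stand-in for the mass-action reaction-network ODEs the paper actually constructs:

```python
import numpy as np

def smoothed_relu(z):
    """Softplus, one common smoothing of ReLU (an illustrative choice)."""
    return np.log1p(np.exp(z))

def neuron_fixed_point(z, y0, dt=0.01, steps=2000):
    """Forward-Euler integration of dy/dt = smoothed_relu(z) - y. The unique
    positive fixed point y* = smoothed_relu(z) is approached exponentially
    fast regardless of y0, mirroring properties (i) and (ii) above."""
    y = y0
    for _ in range(steps):
        y += dt * (smoothed_relu(z) - y)
    return y

print(neuron_fixed_point(z=1.0, y0=5.0))       # ~1.3133 = softplus(1.0)
```

Reading a neuron's output off as the steady state of a chemical system is what makes the construction robust: the computation "settles" rather than having to be clocked.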