Advances in verification of ReLU neural networks
Ansgar Rössig¹ · Milena Petkovic²

Received: 1 August 2019 / Accepted: 5 September 2020
© The Author(s) 2020
Abstract

We consider the problem of verifying linear properties of neural networks. Despite their success in many classification and prediction tasks, neural networks may return unexpected results for certain inputs. This is highly problematic with respect to the application of neural networks for safety-critical tasks, e.g. in autonomous driving. We provide an overview of algorithmic approaches that aim to provide formal guarantees on the behaviour of neural networks. Moreover, we present new theoretical results with respect to the approximation of ReLU neural networks. We also implement a solver for the verification of ReLU neural networks which combines mixed integer programming with specialized branching and approximation techniques. To evaluate its performance, we conduct an extensive computational study using test instances based on the ACAS Xu system and the MNIST handwritten digit data set. The results indicate that our approach is very competitive: in particular, it outperforms the solvers of Bunel et al. (in: Bengio, Wallach, Larochelle, Grauman, Cesa-Bianchi, Garnett (eds) Advances in neural information processing systems (NIPS 2018), 2018) and Reluplex (Katz et al. in: Computer aided verification, 29th international conference, CAV 2017, Heidelberg, Germany, July 24–28, 2017, Proceedings, 2017), while in comparison to the solvers ReluVal (Wang et al. in: 27th USENIX security symposium (USENIX Security 18), USENIX Association, Baltimore, 2018a) and Neurify (Wang et al. in: 32nd Conference on neural information processing systems (NIPS), Montreal, 2018b) it requires far fewer branchings. Our solver is publicly available and able to solve the verification problem for instances which do not have independent bounds for each input neuron.

Keywords: Neural networks verification · ReLU · MIP
Milena Petkovic: [email protected]
Ansgar Rössig: [email protected]

¹ Institute for Mathematics, Software and Algorithms for Discrete Optimization, Technische Universität Berlin, Straße des 17. Juni 136, Berlin, Germany
² Zuse Institute Berlin, Takustr. 7, 14195 Berlin, Germany
1 Introduction

During the last few years, various approaches have been presented that aim to provide formal guarantees on the behaviour of neural networks. The use of such verification methods may be crucial to enable the secure and certified application of neural networks for safety-critical tasks. Moreover, based on the first results of [32], awareness has been raised that neural networks are prone to fail on so-called adversarial examples. These are created by small perturbations of input samples, such that the changes are (almost) imperceptible to humans. However, these perturbations are often sufficient to make a neural network fail on the input sample. The existence of such adversarial examples can be ruled out by means of formal verification.
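To make the connection to mixed integer programming concrete, we recall the standard big-M encoding of a single ReLU neuron $y = \max(0, x)$. The following is a minimal sketch of the textbook building block that MIP-based verifiers rely on, assuming finite bounds $l \le x \le u$ with $l < 0 < u$ are known for the pre-activation value; it is not necessarily the exact formulation used by our solver:
\[
  y \ge x, \qquad y \ge 0, \qquad y \le x - l\,(1-\delta), \qquad y \le u\,\delta, \qquad \delta \in \{0,1\}.
\]
Setting $\delta = 1$ forces $y = x$ (the neuron is active), while $\delta = 0$ forces $y = 0$ (the neuron is inactive); given valid bounds $l$ and $u$, this encoding is exact. In a branch-and-bound scheme, branching on $\delta$ amounts to fixing the phase of the neuron, and the tightness of the bounds $l$ and $u$ directly determines the quality of the linear relaxation.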