Verifying ReLU Neural Networks from a Model Checking Perspective

Wan-Wei Liu¹, Senior Member, CCF, Fu Song²˒³, Senior Member, CCF, Tang-Hao-Ran Zhang¹, and Ji Wang¹˒⁴, Distinguished Member, CCF

¹ College of Computer Science, National University of Defense Technology, Changsha 410073, China
² School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
³ Shanghai Engineering Research Center of Intelligent Vision and Imaging, Shanghai 201210, China
⁴ State Key Laboratory for High Performance Computing, Changsha 410073, China

E-mail: [email protected]; [email protected]; {zhangthr, Jiwang}@nudt.edu.cn

Received April 12, 2020; revised October 6, 2020.

Abstract    Neural networks, as an important computing model, are widely applied in the artificial intelligence (AI) domain. From the perspective of computer science, such a computing model requires a formal description of its behaviors, particularly the relation between input and output. In addition, such specifications ought to be verified automatically. ReLU (rectified linear unit) neural networks are intensively used in practice. In this paper, we present ReLU Temporal Logic (ReTL), whose semantics is defined with respect to ReLU neural networks and which can specify value-related properties of a network. We show that the model checking problem for the Σ₂ ∪ Π₂ fragment of ReTL, which can express properties such as output reachability, is decidable in EXPSPACE. We have also implemented our algorithm in a prototype tool, and experimental results demonstrate the feasibility of the presented model checking approach.

Keywords    model checking, rectified linear unit (ReLU) neural network, temporal logic

1 Introduction

The neural network (NN, for short) [1] is considered an extremely important computing model in the artificial intelligence domain, and it has attracted increasing attention in recent years. It acts as a classifier: it takes an input from the feature space and tells the class to which the input belongs. Although this kind of computing model has been developed for decades, and numerous variants have been presented (e.g., recurrent neural networks [2] and convolutional neural networks [3]), in most cases neural networks work as black boxes. This makes the behavior of an NN extremely difficult to predict, and therefore one can barely give a rigorous description of such a network.

A neural network mimics how the human brain works, and its structure is designed in a bionic way: it consists of a series of perceptrons (or neurons), and each perceptron has several inputs (dendrites) and one output (axon). The approach to modeling a perceptron is to consider it as a function from the inputs to the output, composed of a linear part and a nonlinear part. The linear part is a weighted summation of the inputs, whereas the nonlinear part is in general an activation function (as sketched in the code below). Then, the perceptrons form a network via interconnection. In general, to simpl
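To make the perceptron model just described concrete, here is a minimal sketch in Python (the function names, weights, and layer shape are illustrative assumptions, not taken from the paper): a single ReLU perceptron computes relu(w·x + b), and a layer interconnects several such perceptrons.

```python
import numpy as np

def relu(x):
    # ReLU activation: max(0, x), applied elementwise.
    return np.maximum(0.0, x)

def perceptron(x, w, b):
    # Linear part: a weighted summation of the inputs plus a bias,
    # followed by the nonlinear activation function.
    return relu(np.dot(w, x) + b)

def layer(x, W, b):
    # A layer interconnects several perceptrons: each row of W holds
    # the weights of one perceptron, so the whole layer computes relu(Wx + b).
    return relu(W @ x + b)

# Illustrative example: two inputs feeding three perceptrons in one layer.
x = np.array([1.0, -2.0])
W = np.array([[0.5, -1.0],
              [1.0,  0.0],
              [-0.5, 0.5]])
b = np.array([0.0, -0.5, 1.0])
print(layer(x, W, b))  # [2.5, 0.5, 0.0]; the third value is cut off at 0 by ReLU
```

Stacking such layers by feeding each layer's output into the next yields a fully connected ReLU network, the class of models this paper's verification approach targets.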