Attacks on state-of-the-art face recognition using attentional adversarial attack generative network
Lu Yang¹ · Qing Song¹ · Yingqi Wu¹

Received: 10 February 2020 / Revised: 9 August 2020 / Accepted: 12 August 2020
© The Author(s) 2020

Qing Song: [email protected]
Lu Yang: [email protected]
Yingqi Wu: [email protected]

¹ Pattern Recognition and Intelligence Vision Lab, Beijing University of Posts and Telecommunications, Beijing, China
Abstract
With the broad use of face recognition, its vulnerability to attack has gradually emerged, so it is important to study how face recognition networks can be attacked. Generating adversarial examples is an effective attack method: the examples mislead the face recognition system through an obfuscation attack (rejecting a genuine subject) or an impersonation attack (matching to an impostor). In this paper, we introduce a novel GAN, the Attentional Adversarial Attack Generative Network (A³GN), which generates adversarial examples that inconspicuously mislead the network into identifying someone as a chosen target person, rather than merely causing misclassification. To capture the geometric and context information of the target person, this work adds a conditional variational autoencoder and attention modules that learn instance-level correspondences between faces. Unlike a traditional two-player GAN, this work introduces a face recognition network as a third player in the competition between generator and discriminator, which allows the attacker to impersonate the target person more convincingly. The generated faces, which are unlikely to attract the notice of onlookers, evade recognition by state-of-the-art networks, and most of them are recognized as the target person.

Keywords Face recognition · Generative adversarial networks · Adversarial attack
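The three-player game described in the abstract can be made concrete with a short sketch. The PyTorch fragment below is a minimal illustration under stated assumptions, not the paper's actual A³GN implementation: the generator `G`, discriminator `D`, frozen recognizer `face_net`, the cosine-similarity impersonation loss, and the weight `lam` are all simplifying placeholders, and the real model additionally conditions the generator with a variational autoencoder and attention modules.

```python
# Minimal sketch of a three-player adversarial training step (PyTorch).
# G, D, face_net, and the losses are illustrative assumptions, not the
# paper's exact A3GN formulation.
import torch
import torch.nn.functional as F

def train_step(G, D, face_net, src, tgt, opt_G, opt_D, lam=10.0):
    """One step: G turns `src` into a face that D deems real and that the
    frozen recognizer `face_net` embeds close to the target `tgt`."""
    # --- player 2: discriminator learns real vs. generated faces ---
    fake = G(src, tgt).detach()
    real_logits, fake_logits = D(tgt), D(fake)
    d_loss = (F.binary_cross_entropy_with_logits(
                  real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(
                  fake_logits, torch.zeros_like(fake_logits)))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # --- player 1: generator tries to fool D and impersonate the target ---
    fake = G(src, tgt)
    fake_logits = D(fake)
    gan_loss = F.binary_cross_entropy_with_logits(
        fake_logits, torch.ones_like(fake_logits))

    # --- player 3: a pretrained, frozen recognition network scores identity;
    # gradients flow through it to G, but its own weights are not updated ---
    with torch.no_grad():
        tgt_emb = face_net(tgt)  # embedding of the target person
    impersonation_loss = 1.0 - F.cosine_similarity(face_net(fake), tgt_emb).mean()

    g_loss = gan_loss + lam * impersonation_loss
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()
```

Making the recognizer a third player, rather than a post-hoc evaluator, is what lets the generator trade off visual realism (the GAN loss) against identity transfer (the impersonation loss) during training.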
1 Introduction

Neural networks are widely used for many tasks across society and are profoundly changing our lives [15, 20, 61]. A good algorithm, adequate training data, and sufficient computing power
let neural networks surpass humans in many tasks, especially face recognition [9, 21, 61]. Face recognition can determine whose identity a face image belongs to, or whether two face images belong to the same person. Applications based on this technology are gradually being adopted for important tasks such as identity authentication at railway stations and for payment. Unfortunately, it has been shown that face recognition networks can be deceived inconspicuously by mild, malicious changes to their inputs. Such changed inputs are called adversarial examples, and they implement adversarial attacks on networks [50, 51]. Szegedy et al. [57] were the first to show that an adversarial attack can be mounted by applying a perturbation that is imperceptible to the human eye. Following Szegedy's work, many studies have focused on how to craft adversarial examples that attack neural networks [12, 29, 38, 53], and neural networks have gradually come under suspicion. Work on adversarial attacks can, in turn, promote the development of more robust networks.
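To make the idea of an imperceptible perturbation concrete, the sketch below implements the classic one-step fast gradient sign method (FGSM) of Goodfellow et al., a simpler successor to Szegedy et al.'s original box-constrained L-BFGS attack; the `model`, label `y`, and budget `eps` are illustrative assumptions rather than anything specified in this paper.

```python
# One-step FGSM perturbation (illustrative; not this paper's method).
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """Nudge each pixel of `x` by +/-eps along the sign of the loss
    gradient; the change is bounded in L-infinity norm, so it stays
    visually imperceptible while raising the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + eps * x_adv.grad.sign()  # step that maximizes the loss
    return x_adv.clamp(0.0, 1.0).detach()    # keep a valid pixel range
```

A small `eps` (e.g., 0.03 on images scaled to [0, 1]) is typically enough to flip a classifier's decision while the perturbed image looks unchanged to a human observer.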