Unraveling robustness of deep face anti-spoofing models against pixel attacks


Naima Bousnina1 · Khalid Minaoui1 · Lilei Zheng2 · Mounia Mikram3 · Sanaa Ghouzali4

Received: 17 February 2020 / Revised: 20 August 2020 / Accepted: 7 October 2020 / © Springer Science+Business Media, LLC, part of Springer Nature 2020

Abstract

In the last few decades, deep-learning-based face verification and recognition systems have had enormous success in solving complex security problems. However, it has been recently shown that such efficient frameworks are vulnerable to face-spoofing attacks, which has led researchers to build proficient anti-facial-spoofing (or liveness detection) models as an additional security layer. In response, increasingly challenging and tricky attacks have been launched to fool these anti-spoofing mechanisms. In this context, this paper presents the results of an analytical study on transfer-learning-based convolutional neural networks (CNNs) for face liveness detection and differential evolution-based adversarial attacks to evaluate the efficiency of face anti-spoofing classifiers against adversarial attacks. Specifically, experiments were conducted under different use-case scenarios on four face anti-spoofing databases to highlight practical criteria that can be used in the development of countermeasures to address face-spoofing issues.

Keywords Face liveness detection · Spoofing attacks · Convolutional neural networks · Differential evolution · Deep learning

Corresponding author: Naima Bousnina ([email protected])

Extended author information available on the last page of the article.

Multimedia Tools and Applications

1 Introduction

Facial biometrics consistently outperform other biometric modalities in a wide range of daily applications in terms of their reasonable recognition cost, convenience, and high levels of performance. As examples of the applicability of the approach, Lenovo, Asus, and Toshiba laptops now come with built-in face authentication webcams [38], and the Unique Identification Authority of India (UIDAI) facial recognition system is used to identify Indian residents [56]. As the general public becomes increasingly acquainted with facial authentication systems, their loopholes are being explored. The human face can be easily acquired and duplicated by attackers, who can obtain facial images or videos from social networks and use them to generate artificial models, which can then be used to deceive face authentication systems in an attack mode referred to as face spoofing. This presents a challenge to authentication mechanisms, which, in addition to delivering high recognition performance, must be able to differentiate between live and fake users. Broadly speaking, spoofing attacks involve a series of manipulative actions with the goal of gaining illegitimate access to biometric authentication systems by presenting an artificial, rigged version of original biometric data to a system sensor. Spoofing attacks are also known as presentation attacks and are defined in the first part of the ISO/IEC 30107 standard.
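To make the differential-evolution-based pixel attack named in the abstract concrete, the following is a minimal, self-contained sketch of the idea: evolve a population of (row, column, value) pixel edits and keep mutations that lower a classifier's "live" score. The classifier here is a hypothetical toy scorer standing in for a real anti-spoofing CNN, and the DE loop is a simplified DE/rand/1 variant without crossover; it is illustrative only, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def live_probability(image):
    # Hypothetical stand-in for an anti-spoofing CNN: maps the mean
    # intensity of the image through a sigmoid to a "live" score.
    return 1.0 / (1.0 + np.exp(-(image.mean() * 10.0 - 4.0)))

def one_pixel_attack(image, pop_size=20, generations=40):
    """Differential evolution over (row, col, value) triples: search for
    a single-pixel edit that minimises the classifier's live score."""
    h, w = image.shape

    def perturb(cand):
        out = image.copy()
        out[int(cand[0]), int(cand[1])] = cand[2]  # apply one pixel edit
        return out

    def score(cand):
        return live_probability(perturb(cand))

    lo = np.array([0.0, 0.0, 0.0])
    hi = np.array([h - 1.0, w - 1.0, 1.0])
    pop = lo + rng.random((pop_size, 3)) * (hi - lo)   # random initial edits
    fitness = np.array([score(c) for c in pop])

    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            trial = np.clip(a + 0.5 * (b - c), lo, hi)  # DE/rand/1 mutation
            f = score(trial)
            if f < fitness[i]:                          # greedy selection
                pop[i], fitness[i] = trial, f

    best = pop[fitness.argmin()]
    return perturb(best), fitness.min()

clean = np.full((8, 8), 0.5)            # uniformly grey "live" toy image
adv, adv_score = one_pixel_attack(clean)
```

On this toy scorer, the evolved single-pixel edit drives the live score below that of the clean image, mirroring how DE-based pixel attacks probe real anti-spoofing classifiers without needing gradient access.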