Security and Privacy Issues in Deep Learning: A Brief Review



ORIGINAL RESEARCH

Trung Ha1,3 · Tran Khanh Dang2,3 · Hieu Le2,3 · Tuan Anh Truong2,3

Received: 30 April 2020 / Accepted: 15 July 2020
© Springer Nature Singapore Pte Ltd 2020

Abstract

Nowadays, deep learning is becoming increasingly important in our daily life. It appears in many applications involving prediction and classification, such as self-driving cars, product recommendation, advertising, and healthcare. A deep learning model that produces false predictions or misclassifications can therefore do great harm, and this is a crucial issue for such models. In addition, deep learning models use large amounts of data in the training/learning phase, and this data may contain sensitive information. When deep learning models are used in real-world applications, the privacy of the information used by the model must therefore be protected. In this article, we present a brief review of the threats to, and defense methods for, the security of deep learning models and the privacy of the data used in such models, while maintaining their performance and accuracy. Finally, we discuss current challenges and future developments.

Keywords Security in deep learning · Privacy in deep learning · Differential privacy · Gradient descent · Threat · Defense

This article is part of the topical collection "Software Technology and Its Enabling Computing Platforms" guest edited by Lam-Son Lê and Michel Toulouse.

* Corresponding author: Tran Khanh Dang, [email protected]

1 University of Information Technology, Linh Trung Ward, Thu Duc District, Ho Chi Minh City, Vietnam
2 Ho Chi Minh City University of Technology (HCMUT), 268 Ly Thuong Kiet Street, District 10, Ho Chi Minh City, Vietnam
3 Vietnam National University Ho Chi Minh City (VNU-HCM), Linh Trung Ward, Thu Duc District, Ho Chi Minh City, Vietnam

Introduction

Deep learning has many applications in daily life, such as speech processing, biometric security, self-driving cars, health prediction, financial technology, and retail [1]. Each application has its own requirements, depending on the nature of its data and the user's intent. Researchers have proposed many models, such as LeNet, VGG, GoogLeNet, Inception, and ResNet, to meet the requirements of users and the characteristics of each type of application. However, major security-related weaknesses of deep learning systems have recently been discovered, and a number of studies have been published on this issue.
Although many studies relevant to both attacking and protecting users' privacy and security have been published, they remain fragmented. Before proposing the R-FGSM algorithm, Tramèr reviewed several attack methods based on FGSM and GAN in [2]. In addition, security issues in deep learning models are presented by Xiaoyong Yuan in [3]. These studies have focused only on the security of the deep learning model and do not give an overview of protecting privacy in deep learning models [4,
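For orientation, FGSM perturbs an input by a single step of size ε in the direction of the sign of the loss gradient, while R-FGSM prepends a small random step before spending the remaining perturbation budget on the gradient step. The sketch below is a minimal illustration in PyTorch, not the implementation from [2]; the function names and the values of `epsilon` and `alpha` are our own illustrative assumptions.

```python
import torch

def fgsm(model, loss_fn, x, y, epsilon=0.03):
    """Minimal FGSM sketch: one gradient-sign step of size epsilon.

    `model`, `loss_fn`, and `epsilon` are illustrative placeholders,
    not values prescribed by the papers under review.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Move each input coordinate in the direction that increases the loss,
    # bounding the perturbation in the L-infinity norm by epsilon.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep inputs in a valid pixel range

def r_fgsm(model, loss_fn, x, y, epsilon=0.03, alpha=0.015):
    """R-FGSM sketch: a random sign step of size alpha, then FGSM
    with the remaining budget (epsilon - alpha)."""
    x_rand = (x + alpha * torch.randn_like(x).sign()).clamp(0.0, 1.0)
    return fgsm(model, loss_fn, x_rand, y, epsilon - alpha)
```

The random initial step in `r_fgsm` is what distinguishes it from plain FGSM: it moves the input off the exact data point before taking the gradient step, which weakens defenses that merely mask gradients at training samples.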
