On defending against label flipping attacks on malware detection systems


ORIGINAL ARTICLE

Rahim Taheri¹ · Reza Javidan¹ · Mohammad Shojafar² · Zahra Pooranian² · Ali Miri³ · Mauro Conti²

Received: 23 July 2019 / Accepted: 4 March 2020 / © The Author(s) 2020

Abstract

Label manipulation attacks are a subclass of data poisoning attacks in adversarial machine learning used against different applications, such as malware detection. These types of attacks represent a serious threat to detection systems in environments having a high noise rate or uncertainty, such as complex networks and the Internet of Things (IoT). Recent work in the literature has suggested using the K-nearest neighbors algorithm to defend against such attacks. However, such an approach can suffer from low accuracy and a high misclassification rate. In this paper, we design an architecture to tackle the Android malware detection problem in IoT systems. We develop an attack mechanism based on the silhouette clustering method, modified for mobile Android platforms. We propose two convolutional neural network-type deep learning algorithms against this Silhouette Clustering-based Label Flipping Attack. We show the effectiveness of these two defense algorithms, label-based semi-supervised defense and clustering-based semi-supervised defense, in correcting labels being attacked. We evaluate the performance of the proposed algorithms by varying the machine learning parameters on three Android datasets (Drebin, Contagio, and Genome) and three types of features (API, intent, and permission). Our evaluation shows that using random forest feature selection and varying ratios of features can result in an improvement of up to 19% in accuracy when compared with the state-of-the-art method in the literature.

Keywords Adversarial machine learning (AML) · Semi-supervised defense (SSD) · Malware detection · Adversarial example · Label flipping attacks · Deep learning

1 Introduction

Machine learning (ML) algorithms have the ability to accurately predict patterns in data. However, some of the data can come from uncertain and untrustworthy sources. Attackers can exploit this vulnerability as part of what is known as adversarial machine learning (AML) attacks. Poisoning attacks, or data poisoning attacks, are a subclass

of AML attacks, in which attackers inject malicious data into the training set in order to compromise the learning process and affect the algorithm's performance in a targeted manner. Label flipping attacks are a special type of data poisoning, in which the attacker can control the labels assigned to a fraction of the training points. Label flipping attacks can significantly diminish the performance of the system, even if the attacker's capabilities are otherwise limited. Recent
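To make the threat model concrete, the following is a minimal sketch of a random label flipping attack on a binary (benign vs. malware) training set. The paper's actual attack selects victims via silhouette clustering rather than at random; the function name and flip ratio here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def flip_labels(y, flip_ratio=0.1, seed=None):
    """Flip a fraction of binary labels (0 = benign, 1 = malware).

    Models an attacker who controls the labels of `flip_ratio` of the
    training points. A real silhouette-based attack would choose the
    victim indices by cluster quality instead of uniformly at random.
    """
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    n_flip = int(len(y) * flip_ratio)
    victims = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[victims] = 1 - y_poisoned[victims]  # flip 0 <-> 1
    return y_poisoned

# Toy training labels: 50 malware, 50 benign samples.
y = np.zeros(100, dtype=int)
y[:50] = 1
y_poisoned = flip_labels(y, flip_ratio=0.2, seed=0)
print((y != y_poisoned).sum())  # 20 labels flipped
```

Because the victim indices are drawn without replacement and each flip inverts a binary label, exactly `n_flip` training points end up mislabeled, which is the quantity the defenses later try to detect and correct.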

Corresponding author: Mohammad Shojafar
[email protected]; [email protected]

Rahim Taheri
[email protected]

Mauro Conti
[email protected]

¹ Department of Computer Engineering and Information Technology, Shiraz University of Technology, Shiraz, Iran

Reza Javid