Scalable attack on graph data by injecting vicious nodes
Jihong Wang¹ · Minnan Luo¹ · Fnu Suya² · Jundong Li³ · Zijiang Yang⁴ · Qinghua Zheng¹

Received: 12 November 2019 / Accepted: 3 June 2020
© The Author(s), under exclusive licence to Springer Science+Business Media LLC, part of Springer Nature 2020

Responsible editor: Ira Assent, Carlotta Domeniconi, Aristides Gionis, Eyke Hüllermeier.

Corresponding author: Minnan Luo ([email protected])
Extended author information is available on the last page of the article.
Abstract

Recent studies have shown that graph convolution networks (GCNs) are vulnerable to carefully designed attacks that aim to cause misclassification of a specific node on the graph through unnoticeable perturbations. However, the vast majority of existing works cannot handle large-scale graphs because of their high time complexity. Additionally, existing works mainly focus on manipulating existing nodes on the graph, whereas in practice attackers usually do not have the privilege to modify the information of existing nodes. In this paper, we develop a more scalable framework named Approximate Fast Gradient Sign Method (AFGSM), which considers a more practical attack scenario where adversaries can only inject new vicious nodes into the graph while having no control over the original graph. Methodologically, we provide an approximation strategy to linearize the model we attack and then derive an approximate closed-form solution with a lower time cost. To enable a fair comparison with existing attack methods that manipulate the original graph, we adapt them to the new attack scenario by injecting vicious nodes. Empirical results show that our proposed attack method can significantly reduce the classification accuracy of GCNs and is much faster than existing methods without jeopardizing attack performance. We have open-sourced the code of our method at https://github.com/wangjhgithub/AFGSM.

Keywords Graph Convolution Networks · Vicious Nodes · Scalable Attack
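To make the attack surface described in the abstract concrete, the sketch below shows what "injecting vicious nodes" means structurally: the adjacency matrix and feature matrix are enlarged while the original entries stay fixed, and the attacker only gets to fill in the new rows. This is a minimal illustration of the setting, not the authors' AFGSM algorithm; all names are hypothetical.

```python
import numpy as np

def inject_vicious_nodes(A, X, n_inject):
    """Enlarge graph (A, X) with n_inject vicious nodes.

    A: (n, n) adjacency matrix, X: (n, d) node feature matrix.
    The new rows/columns start empty; an attack method (e.g., a
    gradient-based rule under a perturbation budget) then chooses
    the edges and features of the injected nodes.
    """
    n, d = X.shape
    A_big = np.zeros((n + n_inject, n + n_inject), dtype=A.dtype)
    A_big[:n, :n] = A                  # original graph is left untouched
    X_big = np.zeros((n + n_inject, d), dtype=X.dtype)
    X_big[:n] = X
    return A_big, X_big
```

AFGSM's contribution lies in how these new rows are filled efficiently; the sketch only fixes the threat model in which the attacker cannot touch the original `n × n` block.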
1 Introduction

Graphs are widely used to model various types of real-world data, and many canonical learning tasks such as classification, clustering, and anomaly detection have been widely investigated for graph-structured data (Bhagat et al. 2011; Tian et al. 2014;
Perozzi et al. 2014a; Tang et al. 2016). In this paper, we focus on the task of node classification. Recently, graph convolution networks (GCNs) have gained a surge of research interest in the data mining and machine learning communities for solving the node classification problem, owing to their superior prediction performance (Pham et al. 2017; Cai et al. 2018; Monti et al. 2017). However, recent research efforts have shown that various graph mining algorithms (e.g., GCNs) are vulnerable to carefully crafted adversarial examples, which are "unnoticeable" to humans but can cause the learning models to misclassify target nodes (Zügner et al. 2018; Dai et al. 2018; Bojchevski and Günnemann 2018). The vulnerabilities of these learning algorithms can lead to severe consequences in security-critical applications.
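For readers unfamiliar with the model under attack, the following is a minimal sketch of the standard two-layer GCN for node classification in the style of Kipf and Welling, which is the kind of model such attacks target. It is illustrative only, with hypothetical names (A, X, W1, W2), and is not the specific architecture or implementation used in this paper.

```python
import numpy as np

def normalize_adj(A):
    """Symmetric normalization D̃^{-1/2}(A + I)D̃^{-1/2} used by standard GCNs."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_forward(A, X, W1, W2):
    """Two-layer GCN: softmax(Â · ReLU(Â X W1) · W2).

    A: (n, n) adjacency, X: (n, d) features,
    W1: (d, h) and W2: (h, c) weight matrices.
    Returns (n, c) class probabilities per node.
    """
    A_norm = normalize_adj(A)
    H = np.maximum(A_norm @ X @ W1, 0.0)      # ReLU hidden layer
    logits = A_norm @ H @ W2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)   # row-wise softmax
```

Because each node's prediction aggregates its neighbors' features through A_norm, an attacker who wires a few injected nodes to a target can shift the target's logits without ever modifying the target's own row, which is the intuition behind the injection attacks studied here.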