Adversarial network embedding using structural similarity

Zihan ZHOU, Yu GU, Ge YU

School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China

© Higher Education Press 2020

Abstract Network embedding, which aims to embed a given network into a low-dimensional vector space, has proved effective in various network analysis and mining tasks such as node classification, link prediction and network visualization. Emerging network embedding methods have shifted their emphasis to utilizing mature deep learning models. Neural-network based network embedding has become a mainstream solution because of its high efficiency and its capability of preserving the nonlinear characteristics of the network. In this paper, we propose Adversarial Network Embedding using Structural Similarity (ANESS), a novel, versatile, low-complexity GAN-based network embedding model which utilizes the inherent vertex-to-vertex structural similarity of the network. ANESS learns robust and effective vertex embeddings via an adversarial training procedure. Specifically, our method exploits the strength of generative adversarial networks in generating high-quality samples and utilizes the structural similarity identity of vertexes to learn the latent representations of a network. Meanwhile, ANESS can dynamically update its strategy for generating samples during each training iteration. Extensive experiments have been conducted on several benchmark network datasets, and the empirical results demonstrate that ANESS significantly outperforms other state-of-the-art network embedding methods.

Keywords network embedding, structural similarity, generative adversarial network

1 Introduction

The network is a traditional data structure that can organize data entities with complex relationships. Specifically, the vertexes and edges in a network can naturally encode abundant information. Networks are ubiquitous in the real world, in forms such as academic citation networks, airline networks, protein-protein interaction networks and various social networks. Due to their inherently powerful expressiveness and intricate structure, network-based data mining and analysis tasks are very important but challenging in real-life applications. As a fundamental tool for analyzing networks, network embedding [1, 2] has recently attracted considerable research attention; it aims to embed networks into a low-dimensional space while preserving the original structural features and other information.

Received May 22, 2019; accepted January 8, 2020
E-mail: [email protected]

The learned low-dimensional representation vectors can serve as input to accelerate downstream network analysis tasks [3–7] and reduce storage overhead. A wide variety of network embedding methods have been proposed in recent years, such as DeepWalk [2], LINE [8], Node2Vec [9] and SDNE [10]. Essentially, network embedding is expected to place originally similar vertexes close together while dispersing dissimilar vertexes in the low-dimensional space. Mathematically, netw
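To make this intuition concrete, the following is a minimal sketch of a structure-preserving embedding — not the ANESS method or any of the cited algorithms, but a simple spectral factorization of the adjacency matrix. The toy graph, the embedding dimension, and the helper function are illustrative assumptions; self-loops are added so that each vertex counts as its own neighbor, a common normalization.

```python
import numpy as np

# Toy graph: two triangles (0-1-2 and 3-4-5) joined by the edge 2-3.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0   # undirected, unweighted
A += np.eye(n)                 # add self-loops (each vertex is its own neighbor)

# Truncated SVD gives a simple spectral embedding: each vertex becomes a
# low-dimensional vector whose inner products approximate the adjacency
# structure, so structurally similar vertexes land close together.
dim = 2
U, s, _ = np.linalg.svd(A)             # singular values come back descending
emb = U[:, :dim] * np.sqrt(s[:dim])    # one row per vertex

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb[0], emb[1]))  # same triangle: high similarity
print(cosine(emb[0], emb[5]))  # different triangle: lower similarity
```

Vertexes 0 and 1 play identical structural roles in the toy graph, so their embeddings coincide, while vertexes in different triangles are pushed apart — the clustering behavior the paragraph above describes, achieved here by factorization rather than by neural networks or adversarial training.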