Visualizing and understanding graph convolutional network



Xiaoying Shi · Fanshun Lv · Dewen Seng · Jiaming Zhang · Jing Chen · Baixi Xing

Received: 30 August 2019 / Revised: 11 September 2020 / Accepted: 16 September 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020

Abstract

The graph convolutional network (GCN), which can handle graph-structured data, has attracted great interest in recent years. However, while GCN has achieved remarkable results on many kinds of tasks, the source of its performance and its underlying decision process remain poorly understood. In this paper, we propose a visual analytics system that supports progressive analysis of the GCN execution process and the effect of the graph convolution operation. Multiple coordinated views are designed to show the influence of hidden layer parameters, the changes in loss/accuracy and activation distributions, and the diffusion process of correctly predicted nodes. In particular, since the traditional t-SNE and force-directed layout methods cannot present graph-structured data well, we propose to utilize 'graphTSNE', a novel visualization technique for graph-structured data, to present the node layout more clearly. A real-world graph dataset is used to demonstrate the usability and effectiveness of our system through case studies. The results show that our system can provide sufficient guidance for understanding the working principles of graph convolutional networks.

Keywords: Graph convolutional network · Visual analytics · Graph visualization · Interactive exploration
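For readers unfamiliar with the idea behind graph-aware layouts, a node placement that respects both node features and graph connectivity can be approximated by blending a feature-distance matrix with a graph-distance matrix and embedding the result with off-the-shelf t-SNE. The sketch below is only a rough illustration of that general idea, not the graphTSNE formulation used in this paper; the helper name `blended_layout`, the blending weight `alpha`, the toy karate-club graph, and the random features are assumptions made for the example.

```python
# Simplified illustration (not the paper's graphTSNE): mix node-feature
# distances with graph shortest-path distances, then embed with t-SNE.
import networkx as nx
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import TSNE

def blended_layout(G, features, alpha=0.5, random_state=0):
    """2-D layout for graph G with node feature matrix `features` (n x d).

    alpha weights graph distance vs. feature distance; both are assumptions
    made for this sketch, not parameters taken from the paper.
    """
    nodes = list(G.nodes())
    n = len(nodes)

    # Graph distance: shortest-path hop counts, normalized to [0, 1].
    sp = dict(nx.all_pairs_shortest_path_length(G))
    D_graph = np.zeros((n, n))
    for i, u in enumerate(nodes):
        for j, v in enumerate(nodes):
            D_graph[i, j] = sp[u].get(v, n)   # disconnected pairs get a large distance
    D_graph /= D_graph.max()

    # Feature distance: Euclidean, also normalized.
    D_feat = squareform(pdist(features, metric="euclidean"))
    D_feat /= D_feat.max()

    # Blend the two distance matrices and run t-SNE on the result.
    D = alpha * D_graph + (1.0 - alpha) * D_feat
    tsne = TSNE(metric="precomputed", init="random", random_state=random_state)
    return tsne.fit_transform(D)

# Example usage on a toy graph with random features.
G = nx.karate_club_graph()
X = np.random.RandomState(0).rand(G.number_of_nodes(), 16)
coords = blended_layout(G, X)   # (34, 2) array of node positions
```

With `alpha` close to 1 the layout follows graph topology (similar to a force-directed view), while `alpha` close to 0 behaves like plain feature-space t-SNE; the point of a graph-aware layout is to balance the two.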

Electronic supplementary material: The online version of this article (https://doi.org/10.1007/s11042-020-09885-4) contains supplementary material, which is available to authorized users.

1 Introduction

The graph convolutional network (GCN) [22], which can handle arbitrary graph-structured data, has received widespread attention in recent years. GCN has been successfully applied in many domains, such as making recommendations in recommender systems [17, 24], forecasting traffic conditions [2, 13, 25], and categorizing papers in citation networks [11, 20].
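For orientation, the GCN of Kipf and Welling [22] propagates node information with the layer-wise rule H^(l+1) = sigma(D^(-1/2) A_hat D^(-1/2) H^(l) W^(l)), where A_hat = A + I adds self-loops and D is the corresponding degree matrix. The NumPy sketch below illustrates a single such layer; the toy adjacency matrix, feature dimensions, ReLU activation, and random weights are assumptions chosen for the example, not details from this paper.

```python
# Minimal NumPy sketch of one graph convolution layer in the style of
# Kipf & Welling's GCN: H_next = ReLU(D^-1/2 (A + I) D^-1/2 @ H @ W).
import numpy as np

def gcn_layer(A, H, W):
    """A: (n, n) adjacency matrix, H: (n, d_in) node features, W: (d_in, d_out)."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    deg = A_hat.sum(axis=1)                         # degrees of A_hat
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))        # D^-1/2
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt        # symmetric normalization
    return np.maximum(0.0, A_norm @ H @ W)          # linear transform + ReLU

# Toy example: 4 nodes on a path graph, 3 input features, 2 hidden units.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.RandomState(0).rand(4, 3)
W = np.random.RandomState(1).rand(3, 2)
H_next = gcn_layer(A, H, W)   # (4, 2) hidden representation
```

Each layer mixes every node's features with those of its neighbors, which is why stacking layers spreads information along graph edges; this is also what makes the learned representations hard to inspect directly.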



However, since GCN models graphs by learning a dense, black-box hidden representation of their inputs, the underlying decision-making process and the deeper reasons for its good performance remain poorly understood. Data visualization and visual analytics [7] are particularly useful tools for exploring and interpreting deep neural networks. However, to date, few visual analytics works have been reported for GCN models, and attempts to disclose their internal operation and model behavior are promising. Due to the inherent complexity of graph-based datasets, visualizing the decision-making process of GCN is a great challenge. Firstly, graph nodes usually contain high-dimensional features, and the relations among nodes are complex. Two typic