t-PINE: tensor-based predictable and interpretable node embeddings
ORIGINAL ARTICLE
Saba Al‑Sayouri · Ekta Gujral · Danai Koutra · Evangelos E. Papalexakis · Sarah S. Lam

Received: 20 December 2018 / Revised: 8 May 2020 / Accepted: 8 May 2020

© Springer-Verlag GmbH Austria, part of Springer Nature 2020
Abstract

Graph representations have grown increasingly popular in recent years. Existing representation learning approaches explicitly encode network structure. Despite their good performance in downstream tasks (e.g., node classification, link prediction), there is still room for improvement in aspects such as efficacy, visualization, and interpretability. In this paper, we propose t-PINE, a method that addresses these limitations. Contrary to baseline methods, which generally learn explicit graph representations from the adjacency matrix alone, t-PINE exploits a multi-view information graph: the adjacency matrix serves as the first view, and a nearest-neighbor adjacency, computed over the node features, as the second. From these views it learns explicit and implicit node representations using the Canonical Polyadic (a.k.a. CP) decomposition. We argue that this implicit and explicit mapping from a higher-dimensional to a lower-dimensional vector space is the key to learning more useful, highly predictable, and gracefully interpretable representations. Interpretable representations reveal how each view contributes to the representation learning process and help exclude unrelated dimensions. Extensive experiments show that t-PINE drastically outperforms baseline methods, by up to 351.5% with respect to Micro-F1, on several multi-label classification problems, while offering high visualization and interpretability utility.

Keywords Information networks · Representation learning · Graph embeddings · Tensor decomposition
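The two-view construction described above can be sketched concretely: stack the adjacency matrix and a feature-based nearest-neighbor adjacency as the two frontal slices of a third-order tensor, then factorize it with CP. The sketch below is illustrative only, not the authors' implementation; the cosine-similarity k-NN view, the plain ALS solver, and the names `knn_graph` and `cp_als` are assumptions introduced here.

```python
import numpy as np

def knn_graph(X, k):
    """Symmetrized k-nearest-neighbor adjacency over node features
    (cosine similarity; an assumed choice of second view)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T
    np.fill_diagonal(S, -np.inf)          # no self-neighbors
    A = np.zeros_like(S)
    nn = np.argsort(-S, axis=1)[:, :k]    # indices of k most similar nodes
    A[np.repeat(np.arange(len(X)), k), nn.ravel()] = 1.0
    return np.maximum(A, A.T)             # symmetrize

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product; row index runs over (u, v) pairs."""
    r = U.shape[1]
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, r)

def unfold(T, mode):
    """Mode-n matricization (C-order flattening of the remaining axes)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def cp_als(T, rank, n_iter=100, seed=0):
    """Rank-`rank` CP decomposition via plain alternating least squares.
    Returns factor matrices (A, B, C); A holds the node embeddings and
    C (shape: views x rank) shows how each view loads on each component."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((d, rank)) for d in T.shape)
    for _ in range(n_iter):
        A = unfold(T, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(T, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(T, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

Usage: with adjacency `Adj` (n x n) and node features `X` (n x d), form `T = np.stack([Adj, knn_graph(X, k)], axis=2)` and call `cp_als(T, rank)`; the first factor gives one embedding vector per node, and inspecting the small view-mode factor is what makes the per-view contribution interpretable.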
* Saba Al‑Sayouri, [email protected]
Ekta Gujral, [email protected]
Danai Koutra, [email protected]
Evangelos E. Papalexakis, [email protected]
Sarah S. Lam, [email protected]

1 Systems Science and Industrial Engineering Department, Binghamton University, 4400 Vestal Pkwy E, Binghamton, NY 13902, USA
2 Computer Science and Engineering Department, University of California Riverside, 446 N Campus Dr, Riverside, CA 92507, USA
3 Computer Science and Engineering Department, University of Michigan, 2260 Hayward St, Ann Arbor, MI 48109, USA

1 Introduction
Graphs are widely used to encode relationships in real-world networks, such as social networks, co-authorship networks, and biological networks. Among the various approaches that have been proposed for network analysis, representation learning has gained significant popularity recently. Representation learning techniques (Perozzi et al. 2014; Grover and Leskovec 2016; Tang et al. 2015) primarily aim to explicitly learn a unified set of representations in a completely unsupervised or semi-supervised manner, which ultimately can generalize across