Knowledge Graph Completion by Embedding with Bi-directional Projections
Abstract. Knowledge graph (KG) completion aims at predicting the unknown links between entities and relations. In this paper, we address this task by embedding a KG into a latent space. Existing embedding-based approaches such as TransH usually perform the same operation on the head and tail entities of a triple, which ignores the different roles the two entities play in a relation. To resolve this problem, this paper proposes a novel method for KG embedding that performs bi-directional projections on the head and tail entities. In this way, the different information of an entity can be captured precisely when it plays different roles in a relation. Experimental results on multiple benchmark datasets demonstrate that our method significantly outperforms state-of-the-art methods.

Keywords: Knowledge graph completion · Knowledge reasoning · Knowledge graph embedding
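To make the abstract's idea concrete, the following is a hedged sketch of bi-directional projection, not the paper's actual model or notation: the head and tail entities are projected with different relation-specific matrices (here called M_head and M_tail, both assumed names) before applying the usual translation, so each entity gets a role-dependent representation.

```python
# Sketch of the bi-directional projection idea (toy random parameters,
# assumed names M_head / M_tail -- not the paper's learned model):
# head and tail are projected with *different* relation-specific
# matrices before the translation-based score is computed.
import numpy as np

rng = np.random.default_rng(0)
dim = 4

h = rng.normal(size=dim)              # head entity embedding
t = rng.normal(size=dim)              # tail entity embedding
r = rng.normal(size=dim)              # relation translation vector
M_head = rng.normal(size=(dim, dim))  # projection for the head role
M_tail = rng.normal(size=(dim, dim))  # projection for the tail role

def score(h, r, t):
    """||M_head @ h + r - M_tail @ t||: head and tail are treated
    by distinct operations, unlike TransH's shared projection."""
    return np.linalg.norm(M_head @ h + r - M_tail @ t)
```

A lower score would mark a triple as more plausible; the two matrices let the same entity carry different information depending on whether it appears as head or tail.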
1 Introduction

Knowledge graphs (KGs) describe knowledge about entities and their relations with inter-linked fact triples: a triple (head entity, relation, tail entity), denoted (h, r, t), indicates that the two entities hold the relation. The rich structured information of KGs has become a useful resource for many intelligent applications such as question answering [1]. However, low coverage is an urgent issue that hampers the wide utilization of KGs; e.g., even Freebase, the largest KG, is still far from complete [2]. Knowledge graph completion (KGC) aims at predicting the missing links between entities and relations with the supervision of the existing KGs.

Traditional approaches to KGC usually employ logic inference rules for knowledge reasoning [3], but they lack the ability to support numerical computation in continuous spaces and cannot be effectively extended to large-scale KGs. To address this problem, a new approach based on representation learning was recently proposed, which embeds a KG into a low-dimensional continuous space. In this way, the original logic inference can be carried out through numerical computation [4]. The embedding-based approaches are therefore more extensible and better suited to large-scale KGs. The most promising methods usually represent entities as point vectors in a low-dimensional space and represent relations as operations between two points in the

© Springer International Publishing Switzerland 2016 D.-S. Huang et al. (Eds.): ICIC 2016, Part III, LNAI 9773, pp. 767–779, 2016. DOI: 10.1007/978-3-319-42297-8_71
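The triple view of a KG described above can be sketched in a few lines (the entities and facts below are illustrative examples, not from the paper's datasets): a KG is a set of (h, r, t) facts, and completion asks how plausible an absent triple is.

```python
# Illustrative sketch: a KG as a set of (head, relation, tail) fact
# triples; KGC scores candidate triples that are absent from the graph.
kg = {
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
}

def is_known(h, r, t):
    """Return True if the triple is already a fact in the KG."""
    return (h, r, t) in kg

print(is_known("Paris", "capital_of", "France"))   # True
# Completion asks: is the unseen triple below plausible?
print(is_known("Paris", "located_in", "Europe"))   # False
```

Embedding methods answer the "is it plausible?" question with a learned score rather than a set lookup, which is what makes them scale to incomplete graphs.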
W. Luo et al.
entity vector space. Among these methods, TransE [5] and its variants are simple and effective. TransE represents a relation as a vector r indicating the semantic translation from the head entity h to the tail entity t, aiming to satisfy the equation h + r ≈ t when the triple (h, r, t) holds. TransE effectively handles one-to-one relations but has difficulty with one-to-many, many-to-one and many-to-many relations. To address this problem, TransH and TransR were proposed to enable an entity to have distinct representations when involved in different relations.
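The TransE scoring rule just described can be sketched directly (toy hand-picked vectors, not learned embeddings): a triple (h, r, t) is plausible when h + r is close to t, i.e. when the distance ||h + r − t|| is small.

```python
# Minimal TransE-style scoring sketch (toy embeddings, not learned):
# a triple (h, r, t) is plausible when h + r is close to t.
import numpy as np

def transe_score(h, r, t):
    """L2 distance ||h + r - t||; lower means more plausible."""
    return np.linalg.norm(h + r - t)

h = np.array([0.1, 0.2])
r = np.array([0.3, 0.1])        # translation vector for the relation
t_good = np.array([0.4, 0.3])   # approximately satisfies h + r = t
t_bad = np.array([1.0, -1.0])   # a random unrelated entity
```

Under this score the correct tail wins: transe_score(h, r, t_good) is near zero while transe_score(h, r, t_bad) is large. The weakness the text mentions is visible here too: for a one-to-many relation, all valid tails would be pushed toward the single point h + r, which is what TransH's and TransR's relation-specific projections relax.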