An Overview of Utilizing Knowledge Bases in Neural Networks for Question Answering
Sabin Kafle · Nisansa de Silva · Dejing Dou
Department of Computer and Information Science, University of Oregon, Eugene, Oregon, USA
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract
Question Answering (QA) requires understanding of queries expressed in natural language and identification of relevant information content to provide an answer. For closed-world QA, information access is obtained by means of context texts, a Knowledge Base (KB), or both. KBs are human-generated schematic representations of world knowledge. The representational ability of neural networks to generalize world information makes them an important component of current QA research. In this paper, we study neural networks and QA systems in the context of KBs. Specifically, we focus on surveying methods for KB embedding, how such embeddings are integrated into neural networks, and the role such embeddings play in improving performance across different question-answering problems. Our study of multiple question-answering methods finds that neural networks are able to produce state-of-the-art results in different question-answering domains, and that the inclusion of additional information via KB embeddings further improves the performance of such approaches. Further progress in QA can be made by incorporating more powerful representations of KBs.
Keywords Knowledge base · Question answering · Neural networks
1 Introduction
Neural Question Answering (NQA) has led to significant interest in question answering, especially due to its ability to incorporate multimodal information sources. To serve as a question-answering system, a typical neural network is capable of leveraging: text information via word or character embeddings (Mikolov et al. 2013); image information via pretrained representations (Wu et al. 2017); textual information via unsupervised large-scale language models (Devlin et al. 2019; Radford et al. 2019; Howard and Ruder 2018); and/or KBs via embedding methods similar to word embeddings (Bordes
et al. 2013). NQA systems largely follow a three-stage process comprising (a) information retrieval based on question understanding; (b) answer extraction to generate an answer; and, optionally, (c) a ranking module to rank the answers (Kratzwald et al. 2019). Knowledge Graphs (KGs) are a simpler representational form of Knowledge Bases (KBs), expressed as ⟨entity, relation, entity⟩ triples. Unlike KBs, which represent a richer hierarchy and structure symbolic of the real-world model, KGs are much less constrained. The simpler representation of KGs has given rise to methods for representation learning of the entities and relations present in a KG, as sketched below. This is in line with advances in embedding methods for multimodal data representation. Most KBs are written in for
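As a concrete illustration of how the entities and relations in such triples can be embedded, the following is a minimal sketch of a translation-based (TransE-style) scoring function in the spirit of Bordes et al. (2013). The entities, relations, and dimensionality below are hypothetical placeholders rather than drawn from any specific KB, and the embeddings are untrained.

# Minimal sketch of a TransE-style knowledge-graph embedding score.
# Entity/relation names and the dimensionality are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
dim = 50  # embedding dimensionality (hypothetical choice)

# Toy vocabulary drawn from <entity, relation, entity> triples.
entities = ["Paris", "France", "Berlin", "Germany"]
relations = ["capital_of"]

# Randomly initialised embeddings; in practice these are trained so that
# head + relation is close to tail for true triples (h, r, t).
E = {e: rng.normal(size=dim) for e in entities}
R = {r: rng.normal(size=dim) for r in relations}

def score(head: str, relation: str, tail: str) -> float:
    """Lower is better: distance between (head + relation) and tail."""
    return float(np.linalg.norm(E[head] + R[relation] - E[tail]))

# A training objective would push scores of observed triples below those
# of corrupted ones, e.g. ("Paris", "capital_of", "France") versus
# ("Paris", "capital_of", "Germany").
print(score("Paris", "capital_of", "France"))
print(score("Paris", "capital_of", "Germany"))

Such entity and relation vectors can then be consumed by a neural QA model alongside word embeddings, which is the integration pattern surveyed in the remainder of the paper.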