Using context information to enhance simple question answering
Lin Li1 · Mengjing Zhang1 · Zhaohui Chao1 · Jianwen Xiang1
Received: 19 October 2019 / Revised: 1 June 2020 / Accepted: 10 September 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract With the rapid development of knowledge bases (KBs), question answering (QA) over KBs has become a hot research topic. KB-QA techniques follow two main routes: (1) symbol-based representations, such as traditional semantic parsing, and (2) distribution-based embeddings. The emergence of deep learning has greatly advanced NLP, and combining deep learning with either of the two routes can improve the effectiveness of KB-QA. This paper mainly discusses the second route (i.e., distribution-based embedding) combined with deep learning; this route can be further divided into pipeline frameworks and end-to-end frameworks. For a comprehensive analysis, two frameworks (a pipeline framework and an end-to-end framework) are proposed, focusing on answering single-relation factoid questions. In both frameworks, the effect of context information on the quality of QA is studied, such as an entity's notable type and out-degree. The pipeline framework consists of two cascaded steps: entity detection and relation detection. It requires building multiple modules and constructing a corresponding training data set for each, so the entire process is complex, costly, and suffers from error propagation. The end-to-end framework merges the two subtasks of entity detection and relation detection into a single framework. Questions, entities, and relations are mapped into the same semantic space through recurrent neural network encoding; the question-entity similarity and the question-relation similarity are then computed, so that candidate answers can be ranked and selected. In addition, character-level (char-level) encoding and self-attention mechanisms are combined using weight sharing and multi-task strategies to enhance the accuracy of QA.
Experimental results show that context information improves the quality of simple QA in both the pipeline framework and the end-to-end framework. In addition, the end-to-end framework achieves accuracy competitive with state-of-the-art approaches.

Keywords Question answering · Knowledge base · Context information · Self-attention mechanisms

Lin Li
[email protected]
Extended author information available on the last page of the article.
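As a toy illustration of the candidate-ranking step described in the abstract, the sketch below maps a question and candidate (entity, relation) pairs into one vector space and ranks candidates by the sum of question-entity and question-relation similarities. This is a minimal sketch under loud assumptions: random character-level vectors stand in for the paper's trained RNN encoder, and the question and KB candidates are invented for illustration.

```python
import numpy as np

DIM = 64  # embedding dimension (arbitrary for this sketch)

def char_embedding(ch: str) -> np.ndarray:
    # Deterministic pseudo-random vector per character; a stand-in
    # for learned char-level embeddings.
    return np.random.default_rng(ord(ch)).standard_normal(DIM)

def encode(text: str) -> np.ndarray:
    # Stand-in for the RNN encoder: mean-pool char-level embeddings,
    # mapping questions, entities and relations into one shared space.
    v = np.stack([char_embedding(c) for c in text.lower()]).mean(axis=0)
    return v / np.linalg.norm(v)

def score(question: str, entity: str, relation: str) -> float:
    # Candidate score = question-entity similarity + question-relation similarity.
    q = encode(question)
    return float(q @ encode(entity) + q @ encode(relation))

question = "who wrote the song yesterday"
# Hypothetical candidate (entity, relation) pairs retrieved from the KB.
candidates = [
    ("Yesterday (song)", "music.composition.composer"),
    ("Yesterday (film)", "film.film.directed_by"),
]
best_entity, best_relation = max(candidates, key=lambda c: score(question, *c))
```

In the actual frameworks the encoder is trained so that a question lies close to its gold entity and relation; the toy embeddings here only demonstrate the scoring and ranking mechanics.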
World Wide Web
1 Introduction

QA is a classic natural language processing (NLP) task that aims at building systems which automatically answer questions formulated in natural language [8], such as community question answering (CQA) services [49] and question answering over knowledge bases (KB-QA) [59]. KB-QA is defined as the task of retrieving the correct entity or