
Dual Pointer Network for Fast Extraction of Multiple Relations in a Sentence

Added by Seongsik Park
Publication date: 2021
Research language: English





Relation extraction is a type of information extraction task that recognizes semantic relationships between entities in a sentence. Many previous studies have focused on extracting only one semantic relation between two entities in a single sentence. However, multiple entities in a sentence are associated through various relations. To address this issue, we propose a relation extraction model based on a dual pointer network with a multi-head attention mechanism. The proposed model finds n-to-1 subject-object relations using a forward object decoder. Then, it finds 1-to-n subject-object relations using a backward subject decoder. Our experiments confirmed that the proposed model outperformed previous models, with an F1-score of 80.8% for the ACE-2005 corpus and an F1-score of 78.3% for the NYT corpus.
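To make the decoding scheme concrete, below is a minimal PyTorch sketch, assuming a BiLSTM encoder and PyTorch's built-in multi-head attention, of a dual pointer decoder: a forward decoder scores an object position for every token (n-to-1) and a backward decoder scores a subject position (1-to-n). All module names, dimensions, and the toy usage are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class PointerDecoder(nn.Module):
    def __init__(self, hidden, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)

    def forward(self, queries, memory):
        # queries, memory: (batch, seq_len, hidden)
        ctx, _ = self.attn(queries, memory, memory)
        # pointer scores: for each query token, a distribution over memory positions
        scores = torch.einsum("bqh,bkh->bqk", ctx, memory)
        return scores.log_softmax(dim=-1)

class DualPointerNet(nn.Module):
    def __init__(self, vocab_size, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden // 2, batch_first=True, bidirectional=True)
        self.forward_object = PointerDecoder(hidden)    # n-to-1: each token points to its object
        self.backward_subject = PointerDecoder(hidden)  # 1-to-n: each token points to its subject

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))
        return self.forward_object(h, h), self.backward_subject(h, h)

# toy usage on a batch of two 12-token "sentences"
model = DualPointerNet(vocab_size=1000)
obj_ptr, subj_ptr = model(torch.randint(0, 1000, (2, 12)))
print(obj_ptr.shape, subj_ptr.shape)  # torch.Size([2, 12, 12]) each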


Sentence ordering is one of the important tasks in NLP. Previous works mainly focused on improving its performance by using a pair-wise strategy. However, it is nontrivial for pair-wise models to incorporate the contextual sentence information. In addition, error propagation can be introduced by the pipeline strategy used in pair-wise models. In this paper, we propose an end-to-end neural approach to address the sentence ordering problem, which uses the pointer network (Ptr-Net) to alleviate the error propagation problem and utilize the whole contextual information. Experimental results show the effectiveness of the proposed model. The source code and dataset of this paper are available.
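As a rough illustration of the pointer-network idea for ordering, the sketch below (assuming pre-computed sentence embeddings; not the paper's code) greedily selects the next sentence with an attention score over the not-yet-picked sentences and feeds the selection back into a GRU decoder state.

import torch
import torch.nn as nn

class OrderPointer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.cell = nn.GRUCell(dim, dim)
        self.w_q = nn.Linear(dim, dim)
        self.w_k = nn.Linear(dim, dim)

    def forward(self, sents):                          # sents: (n, dim) sentence embeddings
        state = sents.mean(dim=0)                      # decoder state starts from the whole context
        order, mask = [], torch.zeros(sents.size(0), dtype=torch.bool)
        for _ in range(sents.size(0)):
            scores = self.w_k(sents) @ self.w_q(state)        # attention over all sentences
            scores = scores.masked_fill(mask, float("-inf"))  # never pick a sentence twice
            idx = int(scores.argmax())
            order.append(idx)
            mask[idx] = True
            state = self.cell(sents[idx].unsqueeze(0), state.unsqueeze(0)).squeeze(0)
        return order

print(OrderPointer(dim=64)(torch.randn(5, 64)))  # a permutation of 0..4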
Zhiqing Sun, Jian Tang, Pan Du (2019)
Keyphrase extraction from documents is useful to a variety of applications such as information retrieval and document summarization. This paper presents an end-to-end method called DivGraphPointer for extracting a set of diversified keyphrases from a document. DivGraphPointer combines the advantages of traditional graph-based ranking methods and recent neural network-based approaches. Specifically, given a document, a word graph is constructed from the document based on word proximity and is encoded with graph convolutional networks, which effectively capture document-level word salience by modeling long-range dependency between words in the document and aggregating multiple appearances of identical words into one node. Furthermore, we propose a diversified pointer network to generate a set of diverse keyphrases out of the word graph in the decoding process. Experimental results on five benchmark data sets show that our proposed method significantly outperforms the existing state-of-the-art approaches.
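A toy sketch of the word-graph construction step described above, under the assumption that identical words are merged into a single node and that edges between words co-occurring within a small proximity window are weighted by their distance; the function name and weighting are illustrative, not the paper's exact procedure.

from collections import defaultdict

def build_word_graph(tokens, window=3):
    nodes = sorted(set(tokens))                   # identical words collapse into one node
    index = {w: i for i, w in enumerate(nodes)}
    edges = defaultdict(float)
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window, len(tokens))):
            u, v = index[w], index[tokens[j]]
            if u != v:                            # closer co-occurrences get heavier edges
                edges[(min(u, v), max(u, v))] += 1.0 / (j - i)
    return nodes, dict(edges)

nodes, edges = build_word_graph(
    "graph pointer networks extract diverse keyphrases from the word graph".split())
print(nodes)
print(edges)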
Hai Wang, Dian Yu, Kai Sun (2019)
Remarkable success has been achieved in the last few years on some limited machine reading comprehension (MRC) tasks. However, it is still difficult to interpret the predictions of existing MRC models. In this paper, we focus on extracting evidence sentences that can explain or support the answers of multiple-choice MRC tasks, where the majority of answer options cannot be directly extracted from reference documents. Due to the lack of ground truth evidence sentence labels in most cases, we apply distant supervision to generate imperfect labels and then use them to train an evidence sentence extractor. To denoise the noisy labels, we apply a recently proposed deep probabilistic logic learning framework to incorporate both sentence-level and cross-sentence linguistic indicators for indirect supervision. We feed the extracted evidence sentences into existing MRC models and evaluate the end-to-end performance on three challenging multiple-choice MRC datasets: MultiRC, RACE, and DREAM, achieving comparable or better performance than the same models that take as input the full reference document. To the best of our knowledge, this is the first work extracting evidence sentences for multiple-choice MRC.
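The distant-supervision step can be illustrated with a deliberately simple heuristic: treat the sentence with the highest lexical overlap with the question plus the correct option as (noisy) evidence. This is only a sketch of the idea; the paper denoises such labels with a deep probabilistic logic framework rather than using raw overlap.

import re

def _words(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def noisy_evidence_label(sentences, question, correct_option):
    # distant supervision: the sentence with the highest lexical overlap with the
    # question plus the correct option is treated as (imperfect) evidence
    target = _words(question) | _words(correct_option)
    overlaps = [len(target & _words(s)) for s in sentences]
    return max(range(len(sentences)), key=overlaps.__getitem__)

doc = ["The trial started in May.",
       "The verdict was announced in July.",
       "Reporters waited outside."]
print(noisy_evidence_label(doc, "When was the verdict announced?", "In July"))  # -> 1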
Qixuan Sun, Yaqi Yin, Hong Yu (2021)
Emotion-cause pair extraction (ECPE), an emerging task in sentiment analysis, aims at extracting pairs of emotions and their corresponding causes in documents. This is a more challenging problem than emotion cause extraction (ECE), since it does not rely on the emotion signals that have been shown to play an important role in the ECE task. Existing work follows a two-stage pipeline that identifies emotions and causes in the first step and pairs them in the second step. However, error propagation across steps and pair combination without contextual information limit the effectiveness. Therefore, we propose a Dual-Questioning Attention Network to alleviate these limitations. Specifically, we question candidate emotions and causes to the context independently through attention networks for a contextual and semantical answer. We also explore how weighted loss functions can control error propagation between steps. Empirical results show that our method performs better than baselines in terms of multiple evaluation metrics. The source code can be obtained at https://github.com/QixuanSun/DQAN.
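A minimal sketch of the dual-questioning idea, in which a candidate emotion clause and a candidate cause clause each query the document context through independent attention modules and the two answers are combined into a pairing score; the class and module names are assumptions, not the released DQAN code.

import torch
import torch.nn as nn

class DualQuestioning(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.ask_emotion = nn.MultiheadAttention(dim, num_heads=2, batch_first=True)
        self.ask_cause = nn.MultiheadAttention(dim, num_heads=2, batch_first=True)
        self.pair_scorer = nn.Linear(2 * dim, 1)

    def forward(self, emotion_clause, cause_clause, context):
        # each candidate clause "questions" the full document context independently
        e_ans, _ = self.ask_emotion(emotion_clause, context, context)
        c_ans, _ = self.ask_cause(cause_clause, context, context)
        # combine both answers into a single emotion-cause pairing score
        return self.pair_scorer(torch.cat([e_ans, c_ans], dim=-1)).squeeze(-1)

m = DualQuestioning(dim=64)
score = m(torch.randn(1, 1, 64), torch.randn(1, 1, 64), torch.randn(1, 10, 64))
print(score.shape)  # (1, 1): one pairing score for the candidate pair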
The development of natural language processing (NLP) in general, and machine reading comprehension in particular, has attracted great attention from the research community. In recent years, a few large datasets for machine reading comprehension tasks in Vietnamese have been released, such as UIT-ViQuAD and UIT-ViNewsQA. However, their answers are not diverse enough to serve this line of research. In this paper, we introduce UIT-ViWikiQA, the first dataset for evaluating sentence extraction-based machine reading comprehension in the Vietnamese language. The UIT-ViWikiQA dataset is converted from the UIT-ViQuAD dataset and comprises 23,074 question-answer pairs based on 5,109 passages from 174 Vietnamese Wikipedia articles. We propose a conversion algorithm to create the dataset for sentence extraction-based machine reading comprehension, together with three types of approaches for sentence extraction-based machine reading comprehension in Vietnamese. Our experiments show that the best model is XLM-R_Large, which achieves an exact match (EM) of 85.97% and an F1-score of 88.77% on our dataset. Besides, we analyze the experimental results in terms of the question types in Vietnamese and the effect of context on the performance of the MRC models, thereby showing the challenges that the proposed UIT-ViWikiQA dataset poses to the language processing community.
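The conversion idea can be sketched as mapping a span answer's character offset to the sentence that contains it; the helper below is a hypothetical illustration with naive punctuation-based sentence splitting, not the published conversion algorithm.

def answer_sentence(passage, answer_start):
    # return the sentence that contains the character offset of the span answer
    start = 0
    for i, ch in enumerate(passage):
        if ch in ".!?":
            if start <= answer_start <= i:
                return passage[start:i + 1].strip()
            start = i + 1
    return passage[start:].strip()

passage = "Hanoi is the capital of Vietnam. It lies on the Red River."
print(answer_sentence(passage, passage.find("Red River")))  # -> "It lies on the Red River."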