We study the problem of Event Causality Identification (ECI) to detect causal relations between event mention pairs in text. Although deep learning models have recently shown state-of-the-art performance for ECI, they are limited to the intra-sentence setting where event mention pairs are presented in the same sentences. This work addresses this issue by developing a novel deep learning model for document-level ECI (DECI) to accept inter-sentence event mention pairs. As such, we propose a graph-based model that constructs interaction graphs to capture relevant connections between important objects for DECI in input documents. Such interaction graphs are then consumed by graph convolutional networks to learn document context-augmented representations for causality prediction between events. Various information sources are introduced to enrich the interaction graphs for DECI, featuring discourse, syntax, and semantic information. Our extensive experiments show that the proposed model achieves state-of-the-art performance on two benchmark datasets.
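As an illustration only (not the authors' released code), the sketch below shows the general shape of such a graph-based DECI pipeline: node representations for words and event mentions are propagated through graph convolution layers over a document-level interaction graph, and a candidate event pair is scored from the resulting event-node vectors. All names here (GCNLayer, DECISketch, hidden_dim, the row-normalized adjacency) are hypothetical choices under standard GCN assumptions; the paper's actual interaction graphs additionally combine discourse, syntactic, and semantic edges.

```python
# Minimal sketch, assuming a PyTorch setup: GCN layers over a document-level
# interaction graph, followed by a pair classifier for causal vs. non-causal.
import torch
import torch.nn as nn


class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W), A_hat row-normalized."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: (num_nodes, num_nodes) interaction graph; h: (num_nodes, in_dim)
        adj_hat = adj / adj.sum(dim=-1, keepdim=True).clamp(min=1)  # row-normalize
        return torch.relu(self.linear(adj_hat @ h))


class DECISketch(nn.Module):
    """Hypothetical DECI classifier: stack GCN layers over the interaction graph,
    then score a candidate event pair from its two event-node representations."""

    def __init__(self, hidden_dim: int = 256, num_layers: int = 2, num_classes: int = 2):
        super().__init__()
        self.layers = nn.ModuleList(
            [GCNLayer(hidden_dim, hidden_dim) for _ in range(num_layers)]
        )
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, node_reprs, adj, event_i, event_j):
        # node_reprs: contextual embeddings of words/event mentions from an encoder
        h = node_reprs
        for layer in self.layers:
            h = layer(h, adj)
        pair = torch.cat([h[event_i], h[event_j]], dim=-1)
        return self.classifier(pair)  # logits for causal vs. non-causal
```

In this sketch the adjacency matrix stands in for the enriched interaction graph: each information source (discourse links, dependency syntax, semantic similarity) would contribute edges between word and event nodes before graph convolution produces the document context-augmented representations used for prediction.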