Machine reading comprehension (MRC) is one of the most challenging tasks in natural language processing. Recent state-of-the-art results for MRC have been achieved with pre-trained language models such as BERT and its modifications. Despite their high performance, these models still struggle to retrieve correct answers from long, detailed passages. In this work, we introduce a novel scheme for incorporating the discourse structure of a text into a self-attention network, thus enriching the embeddings obtained from the standard BERT encoder with additional linguistic knowledge. We also investigate the influence of different types of linguistic information on the model's ability to answer complex questions that require a deep understanding of the whole text. Experiments on the SQuAD benchmark and on more complex question answering datasets show that this linguistic enhancement significantly boosts the performance of the standard BERT model.
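The abstract does not specify how discourse structure enters the self-attention computation, so the following is only a minimal sketch of one plausible reading: pairwise discourse-relation ids between token positions contribute a learned additive bias to the attention scores computed over BERT outputs. All names here (DiscourseAwareSelfAttention, rel_bias, relation_ids) are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: one self-attention layer over BERT outputs whose
# attention scores are shifted by a learned bias per discourse relation.
# The relation ids (e.g., Elaboration, Contrast, no-relation) are assumed
# to come from an external discourse parser.
import math
import torch
import torch.nn as nn


class DiscourseAwareSelfAttention(nn.Module):
    """Single-head self-attention with an additive discourse-relation bias."""

    def __init__(self, hidden_size: int, num_relations: int):
        super().__init__()
        self.q = nn.Linear(hidden_size, hidden_size)
        self.k = nn.Linear(hidden_size, hidden_size)
        self.v = nn.Linear(hidden_size, hidden_size)
        # One learned scalar bias per discourse-relation type.
        self.rel_bias = nn.Embedding(num_relations, 1)
        self.scale = math.sqrt(hidden_size)

    def forward(self, hidden: torch.Tensor, relations: torch.Tensor) -> torch.Tensor:
        # hidden:    (batch, seq_len, hidden_size) BERT encoder outputs
        # relations: (batch, seq_len, seq_len) integer ids of the discourse
        #            relation holding between each pair of positions
        scores = self.q(hidden) @ self.k(hidden).transpose(-2, -1) / self.scale
        scores = scores + self.rel_bias(relations).squeeze(-1)
        attn = torch.softmax(scores, dim=-1)
        # Residual connection keeps the original BERT embedding and adds
        # the discourse-enriched representation on top of it.
        return hidden + attn @ self.v(hidden)


# Usage (shapes only; bert_outputs and relation_ids are assumed inputs):
# layer = DiscourseAwareSelfAttention(hidden_size=768, num_relations=4)
# enriched = layer(bert_outputs, relation_ids)
```

Under these assumptions, the residual design means the layer can fall back to the plain BERT representation when the discourse bias is uninformative, which is one common way such "enrichment" layers are made safe to stack on a pre-trained encoder.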