Machine reading comprehension is a challenging task, especially for querying documents with deep and interconnected contexts. Transformer-based methods have shown strong performance on this task; however, most of them still treat documents as a flat sequence of tokens. This work proposes a new Transformer-based method that reads a document as tree slices. It contains two modules for identifying the most relevant text passages and the best answer span, respectively, which are not only jointly trained but also jointly consulted at inference time. Our evaluation results show that our proposed method outperforms several competitive baseline approaches on two datasets from varied domains.
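The joint-consultation idea sketched in the abstract can be illustrated as follows. This is a hypothetical, minimal sketch, not the paper's actual implementation: it assumes the passage-relevance module yields one score per passage (tree slice), the span-extraction module yields scores for candidate spans within each passage, and inference combines the two by simple addition. The function name, inputs, and scoring scheme are all illustrative assumptions.

```python
# Hypothetical sketch of jointly consulting a passage-relevance module and
# a span-extraction module at inference time. Each candidate span is scored
# by the sum of its passage's relevance score and its own span score; the
# additive combination is an assumption for illustration only.

def joint_inference(passages, passage_scores, span_scores):
    """Pick the best answer span across passages.

    passages      : list of token lists, one per tree slice / passage
    passage_scores: list of floats, relevance score of each passage
    span_scores   : list of dicts mapping (start, end) -> float span score
    """
    best = None  # (combined_score, passage_idx, start, end)
    for p_idx, (p_score, spans) in enumerate(zip(passage_scores, span_scores)):
        for (start, end), s_score in spans.items():
            combined = p_score + s_score  # consult both modules jointly
            if best is None or combined > best[0]:
                best = (combined, p_idx, start, end)
    _, p_idx, start, end = best
    return passages[p_idx][start:end + 1]


# Toy usage: the second passage is more relevant, so its spans win even
# when a span in the first passage has a higher raw span score.
passages = [["the", "cat", "sat"], ["dogs", "bark", "loudly"]]
passage_scores = [0.2, 0.9]
span_scores = [{(0, 1): 0.5}, {(0, 0): 0.3, (1, 2): 0.4}]
answer = joint_inference(passages, passage_scores, span_scores)
```

Here the combined scores are 0.7 for the first passage's span versus 1.2 and 1.3 for the second passage's spans, so joint inference selects a span from the more relevant passage.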