Understanding when a text snippet does not provide the sought-after information is an essential part of natural language understanding. Recent work (SQuAD 2.0; Rajpurkar et al., 2018) has attempted to make some progress in this direction by enriching the SQuAD dataset for the Extractive QA task with unanswerable questions. However, as we show, the performance of a top system trained on SQuAD 2.0 drops considerably in out-of-domain scenarios, limiting its use in practical situations. In order to study this, we build an out-of-domain corpus, focusing on simple event-based questions, and distinguish between two types of IDK questions: competitive questions, where the context includes an entity of the same type as the expected answer, and simpler, non-competitive questions, where there is no entity of the same type in the context. We find that SQuAD 2.0-based models fail even in the case of the simpler questions. We then analyze the similarities and differences between the IDK phenomenon in Extractive QA and the Recognizing Textual Entailment task (RTE; Dagan et al., 2013) and investigate the extent to which the latter can be used to improve performance.