Question answering (QA) models for reading comprehension have been shown to exploit unintended dataset biases such as question--context lexical overlap. This hinders QA models from generalizing to under-represented samples such as questions with low lexical overlap. Question generation (QG), a method for augmenting QA datasets, can be a solution to such performance degradation if QG can properly debias QA datasets. However, we discover that recent neural QG models are biased towards generating questions with high lexical overlap, which can amplify the dataset bias. Moreover, our analysis reveals that data augmentation with these QG models frequently impairs performance on questions with low lexical overlap, while improving it on questions with high lexical overlap. To address this problem, we use a synonym replacement-based approach to augment questions with low lexical overlap. We demonstrate that the proposed data augmentation approach is simple yet effective in mitigating the degradation problem with only 70k synthetic examples.
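The two core ideas in the abstract can be sketched in a few lines: measuring question--context lexical overlap, and lowering it by replacing question tokens that also appear in the context with synonyms that do not. The sketch below is a minimal illustration, not the paper's actual implementation: the overlap metric (word-level Jaccard over the question) and the tiny hand-written synonym table are assumptions for demonstration; in practice a thesaurus such as WordNet would typically supply the synonyms.

```python
def lexical_overlap(question: str, context: str) -> float:
    """Fraction of question word types that also appear in the context.

    A simplified stand-in for the paper's lexical-overlap measure;
    real implementations would tokenize and filter stopwords.
    """
    q_tokens = set(question.lower().split())
    c_tokens = set(context.lower().split())
    return len(q_tokens & c_tokens) / len(q_tokens) if q_tokens else 0.0


# Toy synonym table (hypothetical); a real system would query a thesaurus.
SYNONYMS = {
    "large": ["big", "huge"],
    "slept": ["dozed", "napped"],
}


def reduce_overlap(question: str, context: str, synonyms: dict) -> str:
    """Rewrite the question by swapping context-shared words for synonyms
    that are absent from the context, lowering lexical overlap."""
    ctx_tokens = set(context.lower().split())
    out = []
    for tok in question.split():
        low = tok.lower()
        if low in ctx_tokens and low in synonyms:
            # Prefer a synonym that does not occur in the context.
            candidates = [s for s in synonyms[low] if s not in ctx_tokens]
            out.append(candidates[0] if candidates else tok)
        else:
            out.append(tok)
    return " ".join(out)


context = "the large dog slept on the mat"
question = "where did the large dog sleep"
augmented = reduce_overlap(question, context, SYNONYMS)
# The augmented question has strictly lower overlap with the context.
```

Augmented questions produced this way keep the original answer span valid (only surface forms change) while shifting the training distribution towards the under-represented low-overlap regime described above.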