Multiple-choice questions (MCQs) are widely used for knowledge assessment in educational institutions, during job interviews, and in entertainment quizzes and games. Although research on the automatic or semi-automatic generation of multiple-choice test items has been conducted since the beginning of this millennium, most approaches focus on generating questions from a single sentence. In this research, a state-of-the-art method of creating questions based on multiple sentences is introduced. It is inspired by the semantic similarity matching used in the translation memory component of translation management systems. The performance of two deep learning algorithms, doc2vec and SBERT, is compared on the paragraph similarity task. The experiments are performed on an ad-hoc corpus from the EU domain. For automatic evaluation, a smaller corpus of manually selected matching paragraphs has been compiled. The results demonstrate the good performance of sentence embeddings for the given task.
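To make the paragraph similarity comparison concrete, the sketch below scores paragraph pairs with both SBERT and doc2vec, in the spirit of a translation-memory fuzzy match. It is a minimal illustration, not the paper's actual setup: the "all-MiniLM-L6-v2" SBERT model, the toy paragraphs, and the doc2vec hyperparameters are all assumptions for demonstration; the paper's models and EU-domain corpus differ.

```python
# Minimal sketch of SBERT vs. doc2vec paragraph similarity scoring.
# All model choices and data here are illustrative assumptions,
# not the configuration used in the paper.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sentence_transformers import SentenceTransformer, util

paragraphs = [
    "The European Union sets common rules for the single market.",
    "EU member states follow shared regulations for trade in goods.",
    "Multiple-choice questions are widely used in knowledge assessment.",
]

# --- SBERT: encode paragraphs and compute pairwise cosine similarity ---
sbert = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice
embeddings = sbert.encode(paragraphs, convert_to_tensor=True)
sbert_scores = util.cos_sim(embeddings, embeddings)
print("SBERT similarity (para 0 vs 1):", float(sbert_scores[0][1]))

# --- doc2vec: train a small model, then infer and compare vectors ---
tagged = [TaggedDocument(words=p.lower().split(), tags=[str(i)])
          for i, p in enumerate(paragraphs)]
d2v = Doc2Vec(tagged, vector_size=50, min_count=1, epochs=40)
vec0 = d2v.infer_vector(paragraphs[0].lower().split())
print("doc2vec nearest paragraph to para 0:",
      d2v.dv.most_similar([vec0], topn=1))
```

In a retrieval setting like the one the abstract describes, the highest-scoring paragraph for a query would be treated as the best match, analogously to how a translation memory returns its closest fuzzy match.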