
Crowdsourcing Multiple Choice Science Questions

Posted by: Johannes Welbl
Publication date: 2017
Research field: Informatics Engineering
Language: English

We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
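
The pipeline described above is human-in-the-loop: models suggest source documents and answer distractors, and crowd workers write the questions. As a rough illustration of the distractor-suggestion idea only (not the authors' actual model), the sketch below ranks candidate distractors by embedding similarity to the correct answer; the tiny hand-written vectors and the rank_distractors helper are invented for the example.

    import math

    # Toy word vectors; a real system would use embeddings trained on a
    # large domain-specific corpus (an assumption for this sketch).
    VECTORS = {
        "mitochondria": [0.9, 0.1, 0.3],
        "chloroplast":  [0.8, 0.2, 0.4],
        "ribosome":     [0.7, 0.3, 0.2],
        "volcano":      [0.1, 0.9, 0.8],
    }

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    def rank_distractors(answer, candidates):
        """Rank candidates by similarity to the correct answer, so that
        plausible-but-wrong options are suggested first."""
        av = VECTORS[answer]
        return sorted(candidates, key=lambda c: cosine(av, VECTORS[c]), reverse=True)

    print(rank_distractors("mitochondria", ["volcano", "chloroplast", "ribosome"]))
    # -> ['chloroplast', 'ribosome', 'volcano']

Distractors that are too similar to the answer risk being arguably correct, so in practice a worker would still vet the top-ranked suggestions, which is exactly the division of labor the paper proposes.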




Read also

137 - Siyu Ren, Kenny Q. Zhu 2020
In this paper, we propose a novel configurable framework to automatically generate distractive choices for open-domain cloze-style multiple-choice questions, which incorporates a general-purpose knowledge base to effectively create a small distractor candidate set, and a feature-rich learning-to-rank model to select distractors that are both plausible and reliable. Experimental results on datasets across four domains show that our framework yields distractors that are more plausible and reliable than previous methods. This dataset can also be used as a benchmark for distractor generation in the future.
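
As a loose illustration of the two stages this abstract outlines, the sketch below draws candidates from a toy knowledge base (siblings of the answer's category) and scores them with hand-picked feature weights. The KB contents, features, and weights are all illustrative assumptions; the paper's framework uses a general-purpose KB and a trained learning-to-rank model.

    # Toy knowledge base: concept -> category (a stand-in for the
    # general-purpose KB the framework incorporates).
    KB = {
        "oxygen": "element", "hydrogen": "element",
        "carbon": "element", "water": "compound",
    }

    def candidates(answer):
        """Stage 1: siblings of the answer in the KB form a small,
        plausible distractor candidate set."""
        cat = KB[answer]
        return [c for c, k in KB.items() if k == cat and c != answer]

    def score(answer, cand):
        """Stage 2: hand-weighted features; the real system learns the
        weights with a learning-to-rank model."""
        same_initial = 1.0 if cand[0] == answer[0] else 0.0
        length_gap = abs(len(cand) - len(answer)) / 10.0
        return 0.5 * same_initial - 0.3 * length_gap

    def rank(answer):
        return sorted(candidates(answer), key=lambda c: score(answer, c), reverse=True)

    print(rank("oxygen"))  # -> ['carbon', 'hydrogen']
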
Motivated by recent failures of polling to estimate populist party support, we propose and analyse two methods for asking sensitive multiple choice questions where the respondent retains some privacy and therefore might answer more truthfully. The first method consists of asking for the true choice along with a choice picked at random. The other method presents a list of choices and asks whether the preferred one is on the list or not. Different respondents are shown different lists. The methods are easy to explain, which makes it likely that the respondent understands how her privacy is protected and may thus entice her to participate in the survey and answer truthfully. The methods are also easy to implement and scale up.
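
The first method is a randomized-response scheme: each respondent reports two options, the true one plus one drawn uniformly at random, so no single mention betrays the true choice. With k options, option i is expected to be mentioned p_i + 1/k times per respondent, so subtracting 1/k recovers the population share. The simulation below, with made-up true proportions, checks that estimator.

    import random

    OPTIONS = ["A", "B", "C", "D"]
    TRUE_PROPS = {"A": 0.4, "B": 0.3, "C": 0.2, "D": 0.1}  # assumed ground truth

    def respond(true_choice):
        """Report the true choice plus one option picked uniformly at
        random, without revealing which is which."""
        return [true_choice, random.choice(OPTIONS)]

    random.seed(0)
    n, k = 100_000, len(OPTIONS)
    mentions = {o: 0 for o in OPTIONS}
    for _ in range(n):
        true_choice = random.choices(OPTIONS,
                                     weights=[TRUE_PROPS[o] for o in OPTIONS])[0]
        for o in respond(true_choice):
            mentions[o] += 1

    # Unbiased estimate: mentions per respondent minus the 1/k random share.
    for o in OPTIONS:
        print(o, round(mentions[o] / n - 1 / k, 3))  # close to TRUE_PROPS
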
330 - Eric Li, Jingyi Su, Hao Sheng 2020
Multiple-choice questions (MCQs) offer the most promising avenue for skill evaluation in the era of virtual education and job recruiting, where traditional performance-based alternatives such as projects and essays have become less viable, and grading resources are constrained. The automated generation of MCQs would allow assessment creation at scale. Recent advances in natural language processing have given rise to many complex question generation methods. However, the few methods that produce deployable results in specific domains require a large amount of domain-specific training data that can be very costly to acquire. Our work provides an initial foray into MCQ generation under high data-acquisition-cost scenarios by strategically emphasizing paraphrasing of the question context (rather than the task). In addition to maintaining semantic similarity between the question-answer pairs, our pipeline, which we call AGenT Zero, consists of only pre-trained models and requires no fine-tuning, minimizing data acquisition costs for question generation. AGenT Zero successfully outperforms other pre-trained methods in fluency and semantic similarity. Additionally, with some small changes, our assessment pipeline can be generalized to a broader question and answer space, including short-answer or fill-in-the-blank questions.
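
A key step this abstract implies is filtering paraphrase candidates so that semantic similarity to the original context is maintained. The sketch below stands in for that step with a bag-of-words cosine and a hand-written candidate list; the real AGenT Zero pipeline uses pre-trained neural models for both generation and similarity, so everything here is an illustrative assumption.

    from collections import Counter
    import math

    def bow_cosine(a, b):
        """Crude semantic-similarity stand-in: cosine over word counts.
        AGenT Zero would use pre-trained model embeddings instead."""
        ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
        dot = sum(ca[w] * cb[w] for w in ca)
        return dot / (math.sqrt(sum(v * v for v in ca.values())) *
                      math.sqrt(sum(v * v for v in cb.values())))

    context = "plants make food from sunlight by photosynthesis"
    # Hypothetical candidates; a pre-trained paraphraser would generate
    # these in the actual pipeline.
    candidates = [
        "by photosynthesis plants make food from sunlight",
        "plants eat insects at night",
    ]

    kept = [c for c in candidates if bow_cosine(context, c) >= 0.7]
    print(kept)  # only the meaning-preserving paraphrase survives
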
130 - Nurulla Azamov 2015
In this work we describe a simple MATLAB-based language which allows one to create randomized multiple choice questions with minimal effort. This language has been successfully tested at Flinders University by the author in a number of mathematics topics, including Numerical Analysis, Abstract Algebra and Partial Differential Equations. The open source code of Spike is available at: https://github.com/NurullaAzamov/Spike. Enquiries about Spike should be sent to [email protected]
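
The general pattern behind such randomized-question tools is: seed a random generator per student, draw parameters, compute the correct answer from them, derive near-miss distractors, and shuffle. Spike itself is MATLAB-based; the Python sketch below shows the same pattern on an invented arithmetic template.

    import random

    def make_question(seed):
        """Generate one randomized multiple-choice question: random
        parameters, a computed correct answer, near-miss distractors."""
        rng = random.Random(seed)  # a per-student seed gives a unique variant
        a, b = rng.randint(2, 9), rng.randint(2, 9)
        correct = a * b
        distractors = {a * b + a, a * b - b, a + b}  # common slip answers
        options = [correct] + sorted(distractors - {correct})[:3]
        rng.shuffle(options)
        return f"What is {a} * {b}?", options, correct

    q, opts, ans = make_question(seed=42)
    print(q, opts, "answer:", ans)
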
We present a novel approach to answering the Chinese elementary school Social Study multiple choice questions. Although BERT has demonstrated excellent performance on reading comprehension tasks, it has been found to handle certain question types poorly, such as Negation, All-of-the-above, and None-of-the-above. We thus propose a novel framework that cascades BERT with a Pre-Processor and an Answer-Selector module to tackle the above challenges. Experimental results show that the proposed approach effectively improves the performance of BERT, demonstrating the feasibility of supplementing BERT with additional modules.
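
As a rough sketch of what such a Pre-Processor stage might do, the code below routes questions by surface cues (negation words, all/none-of-the-above options) so that a downstream Answer-Selector can treat each type specially; the cue lists and routing labels are illustrative assumptions, not the paper's actual modules.

    import re

    # Surface cues for the question types BERT reportedly struggles with.
    NEGATION = re.compile(r"\b(not|never|except)\b", re.IGNORECASE)

    def classify(question, options):
        """Route a question to a special-case handler before BERT sees it."""
        lowered = [o.strip().lower() for o in options]
        if "all of the above" in lowered:
            return "all-of-the-above"
        if "none of the above" in lowered:
            return "none-of-the-above"
        if NEGATION.search(question):
            return "negation"
        return "plain"

    print(classify("Which of these is NOT a mammal?",
                   ["whale", "shark", "bat", "dog"]))
    # -> 'negation'
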
