
Quantitatively ranking incorrect responses to multiple-choice questions using item response theory

Added by Trevor Smith
Publication date: 2019
Fields: Physics
Language: English





Research-based assessment instruments (RBAIs) are ubiquitous in both physics instruction and physics education research. The vast majority of analyses of student responses to RBAI questions have focused on whether or not a student selects the correct answer, using correctness to measure growth. This approach often undervalues the rich information that may be obtained by examining students' particular choices of incorrect answers. In the present study, we aim to reveal some of this valuable information by quantitatively determining the relative correctness of various incorrect responses. To accomplish this, we propose an assumption that allows us to define relative correctness: students who have a high understanding of Newtonian physics are more likely to answer questions correctly, and also more likely to choose better incorrect responses, than students who have a low understanding. Analyses using item response theory align with this assumption, and Bock's nominal response model allows us to uniquely rank each incorrect response. We present results from over 7,000 students' responses to the Force and Motion Conceptual Evaluation.
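Bock's nominal response model assigns each response category $k$ of an item a probability $P_k(\theta) = \exp(a_k\theta + c_k) / \sum_j \exp(a_j\theta + c_j)$, where $\theta$ is the student's ability; categories whose slope $a_k$ is larger are chosen relatively more often by high-ability students, which is what permits ranking incorrect responses. The sketch below illustrates this; it is not the authors' analysis code, and the parameter values are hypothetical.

```python
import numpy as np

def nominal_response_probs(theta, a, c):
    """Category probabilities under Bock's nominal response model:
    P_k(theta) = exp(a_k*theta + c_k) / sum_j exp(a_j*theta + c_j)."""
    z = np.asarray(a, dtype=float) * theta + np.asarray(c, dtype=float)
    z -= z.max()  # subtract the max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Hypothetical 4-option item: category 0 is the keyed (correct) answer.
a = [1.2, -0.4, 0.1, -0.9]   # slopes: ordering of a_k ranks the responses
c = [0.0, 0.3, 0.2, -0.1]    # intercepts
```

For a high-ability student (large theta) the category with the largest slope dominates, while the relative ordering of the remaining slopes induces a ranking of the incorrect responses from "better" to "worse".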




Read More

Brahim Lamine, 2015
Conceptual tests are widely used by physics instructors to assess students' conceptual understanding and to compare teaching methods. It is common to look at changes in students' answers between a pre-test and a post-test to quantify a transition in students' conceptions. This is often done by looking at the proportion of incorrect answers in the pre-test that change to correct answers in the post-test -- the gain -- and the proportion of correct answers that change to incorrect answers -- the loss. By comparing theoretical predictions to experimental data on the Force Concept Inventory, we show that Item Response Theory (IRT) is able to predict the observed gains and losses fairly well. We then use IRT to quantify changes in students' answers in a test-retest situation where no learning occurs, and show that $i)$ up to 25% of all answers can change due to the non-deterministic nature of students' answers, and that $ii)$ gains and losses can range from 0% to 100%. Still using IRT, we highlight the conditions that a test must satisfy in order to minimize gains and losses when no learning occurs. Finally, we propose recommendations on the interpretation of such pre/post-test progressions with respect to the initial level of the students.
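The test-retest effect described above follows directly from the probabilistic nature of an IRT response: under the standard two-parameter logistic (2PL) model, a student answers correctly with probability $p = 1/(1 + e^{-a(\theta - b)})$ on each independent administration, so the chance that the answer flips between two administrations with no learning is $2p(1-p)$. This is a minimal sketch of that calculation, not the paper's actual model or fitted parameters.

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: probability of a correct answer
    for ability theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def expected_flip(theta, a, b):
    """Expected fraction of answers that change between two independent
    administrations when no learning occurs: 2*p*(1-p)."""
    p = p_correct(theta, a, b)
    return 2.0 * p * (1.0 - p)
```

The flip rate peaks at 0.5 for a single item when $p = 0.5$ (ability equal to item difficulty); averaged over a realistic spread of items and abilities, the overall fraction of changed answers is smaller, consistent with the up-to-25% figure reported above.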
Multiple-choice/multiple-response (MCMR) items (i.e., multiple-choice questions for which there may be more than one correct response) can be a valuable tool for assessment. Like traditional multiple-choice/single-response questions, they are easy to grade; but MCMR items may provide more information about student reasoning by probing multiple facets of reasoning in a single problem context. Because MCMR items are infrequently used, best practices for their implementation are not established. In this paper, we describe the administration of MCMR items on an online, research-based assessment. We discuss possible differences in performance on MCMR items that may result from differences in administration method (in-person vs. online). This work is presented as a potential first step toward establishing best-practices for the administration of MCMR items on online assessments.
Motivated by recent failures of polling to estimate populist party support, we propose and analyse two methods for asking sensitive multiple choice questions where the respondent retains some privacy and therefore might answer more truthfully. The first method consists of asking for the true choice along with a choice picked at random. The other method presents a list of choices and asks whether the preferred one is on the list or not. Different respondents are shown different lists. The methods are easy to explain, which makes it likely that the respondent understands how her privacy is protected and may thus entice her to participate in the survey and answer truthfully. The methods are also easy to implement and scale up.
C. M. Raduta, 2013
Learning physics is a context-dependent process. I consider the broader interdisciplinary problem of where differences in understanding and reasoning arise. I suggest the long-run effects that a multiple-choice-based learning system, as well as a society's cultural habits and rules, might have on students' reasoning structure.
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance, or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice, which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as training data in addition to existing questions, we observe accuracy improvements on real science exams.
