
Consequences the extensive use of multiple-choice questions might have on students' reasoning structure

Posted by: Apolodor Aristotel Raduta
Publication date: 2013
Research field: Physics
Paper language: English
Author: C. M. Raduta





Learning physics is a context-dependent process. I consider the broader interdisciplinary problem of where differences in understanding and reasoning arise, and I suggest what long-run effects a multiple-choice-based learning system, as well as a society's cultural habits and rules, might have on students' reasoning structure.


Read also

Research-based assessment instruments (RBAIs) are ubiquitous throughout both physics instruction and physics education research. The vast majority of analyses involving student responses to RBAI questions have focused on whether or not a student selects correct answers and on using correctness to measure growth. This approach often undervalues the rich information that may be obtained by examining students' particular choices of incorrect answers. In the present study, we aim to reveal some of this valuable information by quantitatively determining the relative correctness of various incorrect responses. To accomplish this, we propose an assumption that allows us to define relative correctness: students who have a high understanding of Newtonian physics are likely to answer more questions correctly and are also more likely to choose better incorrect responses than students who have a low understanding. Analyses using item response theory align with this assumption, and Bock's nominal response model allows us to uniquely rank each incorrect response. We present results from over 7,000 students' responses to the Force and Motion Conceptual Evaluation.
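For readers unfamiliar with the model mentioned above: Bock's nominal response model gives each of the K response options of an item its own slope and intercept parameters and models the probability that a student of ability theta selects option k. A minimal sketch in standard notation (the notation is assumed here, not taken from the abstract):

    P(X = k \mid \theta) = \frac{\exp(a_k \theta + c_k)}{\sum_{j=1}^{K} \exp(a_j \theta + c_j)}

The estimated slopes a_k induce an ordering of the options, which is presumably how the incorrect responses can be ranked by relative correctness; the exact fitting and ranking procedure is described in the paper itself, not in the abstract.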
Physics lab courses are integral parts of an undergraduate physics education, and offer a variety of opportunities for learning. Many of these opportunities center around a common learning goal in introductory physics lab courses: measurement uncertainty. Accordingly, when the stand-alone introductory lab course at the University of Colorado Boulder (CU) was recently transformed, measurement uncertainty was the focus of a learning goal of that transformation. The Physics Measurement Questionnaire (PMQ), a research-based assessment of student understanding around statistical measurement uncertainty, was used to measure the effectiveness of that transformation. Here, we analyze student responses to the PMQ at the beginning and end of the CU course. We also compare such responses from two semesters: one before and one after the transformation. We present evidence that students in both semesters shifted their reasoning in ways aligned with the measurement uncertainty learning goal. Furthermore, we show that more students in the transformed semester shifted in ways aligned with the learning goal, and that those students tended to communicate their reasoning with greater sophistication than students in the original course. These findings provide evidence that even a traditional lab course can support valuable learning, and that transforming such a course to align with well-defined learning goals can result in even more effective learning experiences.
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
Multiple-choice/multiple-response (MCMR) items (i.e., multiple-choice questions for which there may be more than one correct response) can be a valuable tool for assessment. Like traditional multiple-choice/single-response questions, they are easy to grade; but MCMR items may provide more information about student reasoning by probing multiple facets of reasoning in a single problem context. Because MCMR items are infrequently used, best practices for their implementation are not established. In this paper, we describe the administration of MCMR items on an online, research-based assessment. We discuss possible differences in performance on MCMR items that may result from differences in administration method (in-person vs. online). This work is presented as a potential first step toward establishing best-practices for the administration of MCMR items on online assessments.
Motivated by recent failures of polling to estimate populist party support, we propose and analyse two methods for asking sensitive multiple choice questions where the respondent retains some privacy and therefore might answer more truthfully. The first method consists of asking for the true choice along with a choice picked at random. The other method presents a list of choices and asks whether the preferred one is on the list or not. Different respondents are shown different lists. The methods are easy to explain, which makes it likely that the respondent understands how her privacy is protected and may thus entice her to participate in the survey and answer truthfully. The methods are also easy to implement and scale up.
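To make the first method concrete, here is a minimal simulation sketch in Python. All names and the simple moment estimator are assumptions made for illustration, not taken from the paper: if each respondent reports their true choice together with one option drawn uniformly at random, the expected report rate of option i is p_i + 1/K, so subtracting 1/K recovers the support.

    import numpy as np

    rng = np.random.default_rng(0)

    K = 5                                   # number of answer options
    true_support = np.array([0.40, 0.25, 0.15, 0.12, 0.08])
    n = 10_000                              # number of respondents

    # Each respondent reports two choices: the true one and one drawn uniformly at random.
    true_choices = rng.choice(K, size=n, p=true_support)
    random_choices = rng.integers(0, K, size=n)

    # Count how often each option is reported; the order within a pair carries no information.
    counts = np.bincount(true_choices, minlength=K) + np.bincount(random_choices, minlength=K)

    # E[count_i / n] = p_i + 1/K, so a simple moment estimator of the support is:
    estimated_support = counts / n - 1.0 / K
    print(np.round(estimated_support, 3))

This sketch assumes the decoy choice is uniform and independent of the true choice; the paper's own analysis and estimators may differ.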