Multiple-choice/multiple-response (MCMR) items (i.e., multiple-choice questions for which there may be more than one correct response) can be a valuable tool for assessment. Like traditional multiple-choice/single-response questions, they are easy to grade; but MCMR items may provide more information about student reasoning by probing multiple facets of reasoning in a single problem context. Because MCMR items are infrequently used, best practices for their implementation are not established. In this paper, we describe the administration of MCMR items on an online, research-based assessment. We discuss possible differences in performance on MCMR items that may result from differences in administration method (in-person vs. online). This work is presented as a potential first step toward establishing best practices for the administration of MCMR items on online assessments.
Research-based assessment instruments (RBAIs) are ubiquitous throughout both physics instruction and physics education research. The vast majority of analyses involving student responses to RBAI questions have focused on whether or not students select correct answers, using correctness to measure growth. This approach often undervalues the rich information that may be obtained by examining students' particular choices of incorrect answers. In the present study, we aim to reveal some of this valuable information by quantitatively determining the relative correctness of various incorrect responses. To accomplish this, we propose an assumption that allows us to define relative correctness: students who have a high understanding of Newtonian physics are likely to answer more questions correctly, and are also more likely to choose better incorrect responses, than students who have a low understanding. Analyses using item response theory align with this assumption, and Bock's nominal response model allows us to uniquely rank each incorrect response. We present results from over 7,000 students' responses to the Force and Motion Conceptual Evaluation.
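The ranking approach described in this abstract relies on Bock's nominal response model, in which each response category of an item receives its own slope and intercept, and category probabilities follow a softmax in the latent ability. A minimal sketch of the model is below; the item parameters are hypothetical, illustrative values, not parameters fitted to the FMCE data:

```python
import numpy as np

def nominal_response_probs(theta, slopes, intercepts):
    """Bock's nominal response model: probability of each response
    category for a respondent with latent ability `theta`.

    `slopes` and `intercepts` hold one value per response category.
    In the ranking approach described above, the ordering of the
    category slopes orders the responses by relative correctness.
    """
    z = np.asarray(slopes) * theta + np.asarray(intercepts)
    expz = np.exp(z - z.max())  # subtract max for numerical stability
    return expz / expz.sum()

# Hypothetical 4-option item: category 0 plays the role of the correct
# answer (largest slope); the remaining slopes rank the distractors.
slopes = [1.5, 0.5, -0.5, -1.5]
intercepts = [0.0, 0.3, 0.3, 0.0]

probs_low = nominal_response_probs(-2.0, slopes, intercepts)
probs_high = nominal_response_probs(2.0, slopes, intercepts)
```

Under this sketch, a high-ability respondent concentrates probability on the high-slope category, while a low-ability respondent favors the lowest-slope distractor, which is the intuition behind using fitted slopes to rank incorrect responses.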
Learning physics is a context-dependent process. I consider the broader interdisciplinary problem of where differences in understanding and reasoning arise. I suggest the long-run effects that a multiple-choice-based learning system, as well as societal cultural habits and rules, might have on students' reasoning structures.
Item response theory (IRT) models for categorical response data are widely used in the analysis of educational data, computerized adaptive testing, and psychological surveys. However, most IRT models rely on both the assumption that categories are strictly ordered and the assumption that this ordering is known a priori. These assumptions are impractical in many real-world scenarios, such as multiple-choice exams where the levels of incorrectness for the distractor categories are often unknown. While a number of results exist on IRT models for unordered categorical data, they tend to have restrictive modeling assumptions that lead to poor data fitting performance in practice. Furthermore, existing unordered categorical models have parameters that are difficult to interpret. In this work, we propose a novel methodology for unordered categorical IRT that we call SPRITE (short for stochastic polytomous response item model) that: (i) analyzes both ordered and unordered categories, (ii) offers interpretable outputs, and (iii) provides improved data fitting compared to existing models. We compare SPRITE to existing item response models and demonstrate its efficacy on both synthetic and real-world educational datasets.
This experiment was conducted to study the effect of administering γ-aminobutyric acid (GABA) on the physiological performance of broiler chicks. The following treatments were used: T1: control treatment; T2: the birds were administered 0.2 ml of a 0.4% GABA solution daily; T3: the birds were administered 0.2 ml of a 0.5% GABA solution daily; and T4: the birds were administered 0.2 ml of a 0.6% GABA solution daily. The results of the experiment indicated significant differences among treatments in blood lipid measurements. Significant differences among treatments were also found in blood enzyme measurements.
We investigate students' sense of ownership of multiweek final projects in an upper-division optics lab course. Using a multiple case study approach, we describe three student projects in detail. Within-case analyses focused on identifying key issues in each project and constructing chronological descriptions of key events. Cross-case analysis focused on identifying emergent themes with respect to five dimensions of project ownership: student agency, instructor mentorship, peer collaboration, interest and value, and affective responses. Our within- and cross-case analyses yielded three major findings. First, coupling division of labor with collective brainstorming can help balance student agency, instructor mentorship, and peer collaboration. Second, students' interest in the project and perceptions of its value can increase over time; initial student interest in the project topic is not a necessary condition for student ownership of the project. Third, student ownership is characterized by a wide range of emotions that fluctuate as students alternate between extended periods of struggle and moments of success while working on their projects. These findings not only extend the literature on student ownership into a new educational domain---namely, upper-division physics labs---they also have concrete implications for the design of experimental physics projects in courses for which student ownership is a desired learning outcome. We describe the course and projects in sufficient detail that others can adapt our results to their particular contexts.