
Exploring student facility with "goes like" reasoning in introductory physics

Published by: Charlotte Zimmerman
Publication date: 2020
Research field: Physics
Paper language: English





Covariational reasoning -- reasoning about how changes in one quantity relate to changes in another quantity -- has been examined extensively in mathematics education research. Little research has been done, however, on covariational reasoning in introductory physics contexts. We explore one aspect of covariational reasoning: "goes like" reasoning. "Goes like" reasoning refers to ways physicists relate two quantities through a simplified function. For example, physicists often say that "the electric field goes like one over r squared." While this reasoning mode is used regularly by physicists and physics instructors, how students make sense of and use it remains unclear. We present evidence from reasoning inventory items indicating that many students are sense-making with tools from prior math instruction, and that this sense-making could be developed into expert "goes like" thinking through direct instruction. We make recommendations for further work characterizing student sense-making as a foundation for the future development of instruction.
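To make the idiom concrete, the Coulomb-field example quoted in the abstract can be written as a scaling relation (this worked illustration is ours, not the paper's):

```latex
E = \frac{kq}{r^2} \quad\Longrightarrow\quad E \sim \frac{1}{r^2},
```

so "the electric field goes like one over r squared" compresses the full expression into its dependence on the single quantity of interest: doubling $r$ reduces $E$ by a factor of $2^2 = 4$, regardless of the values of $k$ and $q$.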




Read also

Proficiency with calculating, reporting, and understanding measurement uncertainty is a nationally recognized learning outcome for undergraduate physics lab courses. The Physics Measurement Questionnaire (PMQ) is a research-based assessment tool that measures such understanding. The PMQ was designed to characterize student reasoning into point or set paradigms, where the set paradigm is more aligned with expert reasoning. We analyzed over 500 student open-ended responses collected at the beginning and the end of a traditional introductory lab course at the University of Colorado Boulder. We discuss changes in students' understanding over a semester by analyzing pre-post shifts in student responses regarding data collection, data analysis, and data comparison.
In deciding on a student's grade in a class, an instructor generally needs to combine many individual grading judgments into one overall judgment. Two relatively common numerical scales used to specify individual grades are the 4-point scale (where each whole number 0-4 corresponds to a letter grade) and the percent scale (where letter grades A through D are uniformly distributed in the top 40% of the scale). This paper uses grading data from a single series of courses offered over a period of 10 years to show that the grade distributions emerging from these two grade scales differed in many ways from each other. Evidence suggests that the differences are due more to the grade scale than to either the students or the instructors. One major difference is that the fraction of students given grades less than C- was over 5 times larger when instructors used the percent scale. The fact that each instructor who used both grade scales gave more than 4 times as many of these low grades under percent-scale grading suggests that the effect is due to the grade scale rather than the instructor. When the percent scale was first introduced in these courses in 2006, one of the authors of this paper, who is also one of the instructors in this data set, had confidently predicted that any changes in course grading would be negligible. They were not negligible, even for this instructor.
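The two scales described above can be sketched in a few lines. This is an illustrative assumption on our part: the letter-grade cutoffs below (90/80/70/60 for the percent scale) are conventional choices, not cutoffs published in the paper.

```python
# Illustrative sketch of the two grade scales described above.
# Assumption: percent-scale letters A-D uniformly fill the top 40% (60-100),
# in 10-point bands; exact cutoffs are hypothetical, for illustration only.

def percent_to_letter(score: float) -> str:
    """Percent scale: A through D occupy the top 40% of the scale."""
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    if score >= 60:
        return "D"
    return "F"  # everything below 60% fails

def four_point_to_letter(points: float) -> str:
    """4-point scale: each whole number 0-4 corresponds to a letter grade."""
    return {4: "A", 3: "B", 2: "C", 1: "D", 0: "F"}[round(points)]
```

Note the structural difference the paper highlights: on the percent scale, 60% of the numeric range maps to failing grades, while on the 4-point scale only one of the five whole-number values does, which is one plausible mechanism for the larger fraction of low grades observed under percent-scale grading.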
One desired outcome of introductory physics instruction is that students will develop facility with reasoning quantitatively about physical phenomena. Little research has been done regarding how students develop the algebraic concepts and skills involved in reasoning productively about physics quantities, which is different from either understanding of physics concepts or problem-solving abilities. We introduce the Physics Inventory of Quantitative Literacy (PIQL) as a tool for measuring quantitative literacy, a foundation of mathematical reasoning, in the context of introductory physics. We present the development of the PIQL and evidence of its validity for use in calculus-based introductory physics courses. Unlike concept inventories, the PIQL is a reasoning inventory, and can be used to assess reasoning over the span of students' instruction in introductory physics. Although mathematical reasoning associated with the PIQL is taught in prior mathematics courses, pre/post test scores reveal that this reasoning isn't readily used by most students in physics, nor does it develop as part of physics instruction--even in courses that use high-quality, research-based curricular materials. As has been the case with many inventories in physics education, we expect use of the PIQL to support the development of instructional strategies and materials--in this case, designed to meet the course objective that all students become quantitatively literate in introductory physics.
The Physics Inventory of Quantitative Literacy (PIQL), a reasoning inventory under development, aims to assess students' physics quantitative literacy at the introductory level. The PIQL's design presents the challenge of isolating types of mathematical reasoning that are independent of each other in physics questions. In its current form, the PIQL spans three principal reasoning subdomains previously identified in mathematics and physics education research: ratios and proportions, covariation, and signed (negative) quantities. An important psychometric objective is to test the orthogonality of these three reasoning subdomains. We present results from exploratory factor analysis, confirmatory factor analysis, and module analysis that inform interpretations of the underlying structure of the PIQL from a student viewpoint, emphasizing ways in which these results agree and disagree with expert categorization. In addition to informing the development of existing and new PIQL assessment items, these results also provide exciting insights into students' quantitative reasoning at the introductory level.
Computational Thinking (CT) is still a relatively new term in the lexicon of learning objectives and science standards. There is not yet widespread agreement on the precise definition or implementation of CT, and efforts to assess CT are still maturing, even as more states adopt K-12 computer science standards. In this article we try to summarize what CT means for a typical introductory (i.e., high school or early college) physics class. This includes a discussion of the ways that instructors may already be incorporating elements of CT in their classes without knowing it. Our intention in writing this article is to provide a helpful, concise, and readable introduction to this topic for physics instructors. We also put forward some ideas for what the future of CT in introductory physics may look like.
