
Test Impact and Test Design: Insights from the Syrian National Baccalaureate Examination of English


Publication date: 2018
Fields: Education
Language: Arabic
Created by: Mai Mohamad





Testing in the Syrian educational system has grown over the past six years, with the average number of tests that schools and colleges set every year increasing threefold. This test inflation has paved the way for the birth of a ‘testocracy’ that brings new challenges for stakeholders and test developers. Of all the tests that Syrian students take, the National Baccalaureate Examination (henceforth NBE) is the most critical. The present research sheds light on one part of this test, namely the NBE of English. Within the broad lines of language testing, we aimed to investigate whether certain facets of test impact can be predicted through close examination of the test template in isolation from other factors in the teaching/learning environment.



Related research

Participation in inter-laboratory comparison programs is an important means of laboratory quality control and of assessing laboratory performance, and these programs can be used by customers or regulatory bodies to select qualified laboratories. This research describes how to use inter-laboratory comparison tests and how to statistically analyse their results. It includes a practical study assessing laboratory performance in Syrian textile firms, in which samples were distributed simultaneously to the participating laboratories for testing. After collecting the test results, the researcher used statistical methods to identify weak points in laboratory performance, provided the laboratories with feedback and technical advice to help them define measurement problems and evaluate test methods and instrumentation, and offered suggestions and recommendations for overcoming these weaknesses.
This research also aims to show the importance of ensuring the competence of all who operate specific equipment, perform tests and/or calibrations, evaluate results, and sign test reports and calibration certificates.
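The abstract does not name the specific statistics applied; a widely used scoring method in inter-laboratory comparisons (for example in proficiency testing per ISO 13528) is the z-score, sketched below in Python. The laboratory names, tensile-strength values, and acceptance thresholds are illustrative assumptions, not data from the study.

```python
import numpy as np

def z_scores(results, assigned_value=None, sigma=None):
    """Score each laboratory's result against an assigned value.

    Robust defaults: the median as the assigned value and the scaled
    MAD as sigma. Conventionally, |z| <= 2 is satisfactory,
    2 < |z| < 3 questionable, and |z| >= 3 unsatisfactory.
    """
    results = np.asarray(results, dtype=float)
    if assigned_value is None:
        assigned_value = np.median(results)
    if sigma is None:
        sigma = 1.4826 * np.median(np.abs(results - assigned_value))
    return (results - assigned_value) / sigma

# Hypothetical tensile-strength results (in newtons) from five labs.
labs = {"Lab A": 412.0, "Lab B": 408.5, "Lab C": 430.2,
        "Lab D": 410.1, "Lab E": 409.4}
for (name, value), z in zip(labs.items(), z_scores(list(labs.values()))):
    verdict = ("satisfactory" if abs(z) <= 2
               else "questionable" if abs(z) < 3 else "unsatisfactory")
    print(f"{name}: result = {value}, z = {z:+.2f} ({verdict})")
```

A laboratory flagged as questionable or unsatisfactory would then receive the kind of feedback and technical advice the study describes.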
The research aims to estimate the effect of sample size on the power of the statistical test (t) for one sample, two related samples, and two independent samples, and on the power of the one-way analysis of variance test (F) for comparing means. The descriptive method was used with samples of different sizes (300 items), generated using the program PASS 14 so that the data satisfied the assumptions required by the (t) and (F) tests: random sampling, an appropriate level of measurement, normal distribution, and homogeneity of variance.
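To make the sample size/power relationship concrete, the sketch below computes power for a one-sample t-test and a one-way ANOVA at increasing sample sizes using the statsmodels power calculators; the effect sizes (Cohen's d and f), alpha level, and number of groups are assumptions for illustration, not values from the study.

```python
# Power of a one-sample t-test and a one-way ANOVA as the sample grows.
# Effect sizes, alpha, and group count are illustrative assumptions.
from statsmodels.stats.power import TTestPower, FTestAnovaPower

t_power = TTestPower()
f_power = FTestAnovaPower()

for n in (10, 30, 50, 100, 300):
    p_t = t_power.solve_power(effect_size=0.5, nobs=n, alpha=0.05)
    # For the ANOVA calculator, nobs is the total sample across groups.
    p_f = f_power.solve_power(effect_size=0.25, nobs=3 * n,
                              alpha=0.05, k_groups=3)
    print(f"n = {n:>3}: t-test power = {p_t:.3f}, ANOVA power = {p_f:.3f}")
```

As such curves generally show, power rises steeply with sample size before saturating near 1, which is why fixing the effect size and alpha in advance matters when choosing a sample.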
Language use differs between domains; even within a domain, language use changes over time. For pre-trained language models like BERT, domain adaptation through continued pre-training has been shown to improve performance on in-domain downstream tasks. In this article, we investigate whether temporal adaptation can bring additional benefits. For this purpose, we introduce a corpus of social media comments sampled over three years. It contains unlabelled data for adaptation and evaluation on an upstream masked language modelling task as well as labelled data for fine-tuning and evaluation on a downstream document classification task. We find that temporality matters for both tasks: temporal adaptation improves upstream task performance, and temporal fine-tuning improves downstream task performance. Time-specific models generally perform better on past than on future test sets, which matches evidence on the bursty usage of topical words. However, adapting BERT to time and domain does not improve performance on the downstream task over only adapting to domain. Token-level analysis shows that temporal adaptation captures event-driven changes in language use in the downstream task, but not those changes that are actually relevant to task performance. Based on our findings, we discuss when temporal adaptation may be more effective.
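The article's corpus, checkpoints, and hyperparameters are not reproduced here; the Python sketch below only illustrates what continued masked-language-model pre-training on one time slice of comments could look like with the Hugging Face transformers and datasets libraries. The file path, output directory, and training settings are placeholders.

```python
# Minimal sketch of continued masked-LM pre-training (domain/temporal
# adaptation). Paths, checkpoint, and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# One text file of in-domain comments per time slice, e.g. 2019.
dataset = load_dataset("text", data_files={"train": "comments_2019.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

# The collator applies dynamic 15% token masking, as in BERT pre-training.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-adapted-2019",
                           num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=dataset,
    data_collator=collator)
trainer.train()
model.save_pretrained("bert-adapted-2019")  # then fine-tune downstream
```

Repeating this per time slice yields the time-specific models the article evaluates against past and future test sets.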
This research aims to present the importance of using statistical methods when establishing a quality management system in the laboratory according to the requirements of the international standard ISO 17025:2005. In addition, the research describes how statistical analysis of test results works and includes a practical study evaluating the technical competence of the laboratory using the most common statistical methods (hypothesis testing). Studying the results in this scientific way enables researchers to identify weaknesses in laboratory performance and thus provides the laboratory with feedback and technical advice that help to determine measurement problems and to check the trueness of test results. Finally, the research offers recommendations and proposals, such as the necessity of applying practical methods for monitoring the performance of tests, making sure they meet quality requirements in terms of trueness and precision, and working to remove the causes that affect the quality of performance during all phases of testing. If applied, these proposals would support the laboratory in obtaining certification in accordance with the international standard ISO 17025:2005.
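As a hedged illustration of the kind of hypothesis test the abstract mentions, the sketch below checks trueness by comparing replicate measurements of a reference material against its certified value with a one-sample t-test from SciPy; the measurements, reference value, and significance level are assumed for demonstration.

```python
# Illustrative trueness check: one-sample t-test of replicate
# measurements against a certified reference value (data assumed).
from scipy import stats

reference_value = 50.0  # certified value of the reference material
measurements = [49.8, 50.3, 49.6, 50.1, 49.7, 49.9, 50.0, 49.5]

t_stat, p_value = stats.ttest_1samp(measurements, popmean=reference_value)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Mean differs significantly from the reference: possible bias.")
else:
    print("No significant bias detected at the 5% level.")
```

A significant result would point to a systematic measurement problem of exactly the sort the feedback and technical advice in the study are meant to address.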
