Preregistration refers to the practice of specifying what you are going to do, and what you expect to find in your study, before carrying out the study. This practice is increasingly common in medicine and psychology, but is rarely discussed in NLP. This paper discusses preregistration in more detail, explores how NLP researchers could preregister their work, and presents several preregistration questions for different kinds of studies. Finally, we argue in favour of registered reports, which could provide firmer grounds for slow science in NLP research. The goal of this paper is to elicit a discussion in the NLP community, which we hope to synthesise into a general NLP preregistration form in future research.