Modern Natural Language Processing (NLP) makes intensive use of deep learning methods because of the accuracy they offer for a variety of applications. Due to the significant environmental impact of deep learning, cost-benefit analysis including carbon footprint as well as accuracy measures has been suggested to better document the use of NLP methods for research or deployment. In this paper, we review the tools that are available to measure energy use and CO2 emissions of NLP methods. We describe the scope of the measures provided and compare the use of six tools (carbon tracker, experiment impact tracker, green algorithms, ML CO2 impact, energy usage and cumulator) on named entity recognition experiments performed on different computational set-ups (local server vs. computing facility). Based on these findings, we propose actionable recommendations to accurately measure the environmental impact of NLP experiments.
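At their core, the tools compared above all estimate emissions from the same basic quantities: runtime, hardware power draw, data-centre overhead (PUE), and the carbon intensity of the local electricity grid. The following is a minimal sketch of that calculation; the function name and all numeric defaults (a PUE of 1.58 and a grid intensity of 475 gCO2e/kWh, both commonly cited global averages) are illustrative assumptions, not values taken from any of the six tools.

```python
# Sketch of the core estimate behind carbon-footprint tools:
#   energy (kWh) = runtime * power draw * PUE
#   CO2e (g)     = energy * grid carbon intensity
# All defaults below are illustrative assumptions, not measurements.

def co2e_grams(runtime_h: float, power_draw_w: float,
               pue: float = 1.58,
               carbon_intensity_g_per_kwh: float = 475.0) -> float:
    """Estimate grams of CO2-equivalent for a compute job."""
    energy_kwh = runtime_h * (power_draw_w / 1000.0) * pue
    return energy_kwh * carbon_intensity_g_per_kwh

# Example: a 10-hour run on a GPU drawing 250 W.
print(round(co2e_grams(10, 250), 1))
```

In practice the tools differ in how they obtain these inputs: some read real-time power draw from hardware counters (RAPL, nvidia-smi) while others rely on the hardware's rated thermal design power, which is one reason their estimates diverge on the same experiment.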