The availability of language representations learned by large pretrained neural network models (such as BERT and ELECTRA) has led to improvements in many downstream Natural Language Processing tasks in recent years. Pretrained models usually differ in their pretraining objectives, architectures, and the datasets they are trained on, all of which can affect downstream performance. In this contribution, we fine-tuned German BERT and German ELECTRA models to identify toxic (subtask 1), engaging (subtask 2), and fact-claiming comments (subtask 3) in the Facebook data provided by the GermEval 2021 competition. We created ensembles of these models and investigated whether and how classification performance depends on the number of ensemble members and their composition. On out-of-sample data, our best ensemble achieved a macro-F1 score of 0.73 (across all subtasks), and F1 scores of 0.72, 0.70, and 0.76 for subtasks 1, 2, and 3, respectively.
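The sketch below illustrates the general ensembling idea described above: combining the predictions of several fine-tuned German BERT and German ELECTRA classifiers for one binary subtask by averaging their class probabilities (soft voting). The checkpoint names, the two-label setup, and the soft-voting rule are illustrative assumptions, not the exact configuration used in the submitted runs.

```python
# Minimal sketch of a soft-voting ensemble over fine-tuned German transformer
# classifiers. Checkpoints and aggregation rule are assumptions for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical ensemble members (in practice, fine-tuned task-specific checkpoints).
MEMBER_CHECKPOINTS = [
    "deepset/gbert-base",     # assumed German BERT member
    "deepset/gelectra-base",  # assumed German ELECTRA member
]

def ensemble_predict(texts, checkpoints=MEMBER_CHECKPOINTS):
    """Return binary labels by averaging softmax probabilities over members."""
    summed_probs = None
    for name in checkpoints:
        tokenizer = AutoTokenizer.from_pretrained(name)
        model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
        model.eval()
        enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
        with torch.no_grad():
            probs = torch.softmax(model(**enc).logits, dim=-1)
        summed_probs = probs if summed_probs is None else summed_probs + probs
    # Soft voting: average member probabilities, then take the argmax class.
    return (summed_probs / len(checkpoints)).argmax(dim=-1)

print(ensemble_predict(["Das ist ein Beispielkommentar."]))
```

Soft voting is only one possible aggregation rule; majority voting over hard labels would be an equally simple alternative, and the choice of rule and number of members is exactly what an ensemble-composition study of this kind would vary.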