This paper describes our approach (ur-iw-hnt) for the Shared Task of GermEval2021 to identify toxic, engaging, and fact-claiming comments. We submitted three runs using an ensembling strategy by majority (hard) voting with multiple different BERT models of three different types: German-based, Twitter-based, and multilingual models. All ensemble models outperform single models, while BERTweet is the winner of all individual models in every subtask. Twitter-based models perform better than GermanBERT models, and multilingual models perform worse but by a small margin.
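The hard-voting ensembling described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the model names and prediction lists are hypothetical placeholders, and each model is assumed to emit one binary label per comment.

```python
from collections import Counter

def majority_vote(predictions):
    """Hard-voting ensemble: each model casts one label per comment;
    the most frequent label wins (ties go to the first-seen label)."""
    ensembled = []
    for votes in zip(*predictions):  # one tuple of labels per comment
        label, _count = Counter(votes).most_common(1)[0]
        ensembled.append(label)
    return ensembled

# Hypothetical example: three models voting on four comments (1 = toxic)
preds_german_bert = [1, 0, 1, 0]
preds_bertweet    = [1, 1, 1, 0]
preds_xlm_r       = [0, 1, 1, 0]

print(majority_vote([preds_german_bert, preds_bertweet, preds_xlm_r]))
# → [1, 1, 1, 0]
```

With an odd number of models and binary labels, hard voting never ties, which is one practical reason to ensemble three models per subtask.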