In this work, we present our approaches to the toxic comment classification task (subtask 1) of the GermEval 2021 Shared Task. For this binary task, we propose three models: a German BERT transformer model; a multilayer perceptron that was first trained in parallel on the textual input and on 14 additional linguistic features, whose branches were then concatenated in an additional layer; and a multilayer perceptron with both feature types as a joint input. We enhanced our pre-trained transformer model by re-training it on over 1 million tweets and fine-tuned it on two additional German datasets from similar tasks. The embeddings of the final fine-tuned German BERT were taken as the textual input features for our neural networks. Our best models on the validation data were both neural networks; however, our enhanced German BERT achieved the higher prediction on the test data, with an F1-score of 0.5895.
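The two-branch ("parallel") perceptron described above can be sketched as follows. This is a minimal NumPy illustration with hypothetical layer widths, not the paper's actual configuration: the 768-dimensional text embedding (a common BERT size) and the hidden sizes are assumptions; only the 14 linguistic features and the concatenation step come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def two_branch_mlp(text_emb, ling_feats, params):
    """Forward pass: each input type passes through its own dense layer,
    the hidden states are concatenated, and a final layer produces a
    binary toxic/non-toxic probability."""
    h_text = relu(text_emb @ params["W_text"] + params["b_text"])
    h_ling = relu(ling_feats @ params["W_ling"] + params["b_ling"])
    h = np.concatenate([h_text, h_ling], axis=-1)   # fusion layer
    logit = h @ params["W_out"] + params["b_out"]
    return 1.0 / (1.0 + np.exp(-logit))             # sigmoid

# Hypothetical dimensions: 768-d BERT embedding, 14 linguistic features,
# hidden widths 32 and 8 (illustrative only).
params = {
    "W_text": rng.normal(size=(768, 32)) * 0.01, "b_text": np.zeros(32),
    "W_ling": rng.normal(size=(14, 8)) * 0.01,   "b_ling": np.zeros(8),
    "W_out":  rng.normal(size=(40, 1)) * 0.01,   "b_out":  np.zeros(1),
}

text_emb = rng.normal(size=(768,))
ling_feats = rng.normal(size=(14,))
prob = two_branch_mlp(text_emb, ling_feats, params)
```

The design point is late fusion: each modality gets its own learned representation before the concatenation, in contrast to the third proposed model, which feeds both feature types into a single shared input layer.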