With the ever-increasing availability of digital information, toxic content is also on the rise. Therefore, detecting this type of language is of paramount importance. We tackle this problem using a combination of a state-of-the-art pre-trained language model (CharacterBERT) and a traditional bag-of-words technique. Since the content is full of toxic words that are not written according to their dictionary spelling, attention to individual characters is crucial. We therefore use CharacterBERT to extract features based on word characters. It consists of a CharacterCNN module that learns character embeddings from the context; these are then fed into the well-known BERT architecture. The bag-of-words method, in turn, further improves on this by ensuring that frequently used toxic words get labeled accordingly. With a ∼4 percent difference from the first team, our system ranked 36th in the competition. The code is available for further research and reproduction of the results.
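The combination described above can be sketched minimally in Python. The lexicon, function names, and example text below are illustrative assumptions, not the authors' actual implementation or data: a bag-of-words lexicon match supplies character offsets for frequent toxic words, which are unioned with the model's predicted offsets so those words are always labeled.

```python
import re

# Assumed toy lexicon of frequently used toxic words (hypothetical).
TOXIC_LEXICON = {"idiot", "stupid"}

def lexicon_spans(text):
    """Return the set of character offsets covered by lexicon words."""
    spans = set()
    for match in re.finditer(r"\w+", text):
        if match.group().lower() in TOXIC_LEXICON:
            spans.update(range(match.start(), match.end()))
    return spans

def combine(model_offsets, text):
    """Union the model's predicted toxic offsets with lexicon hits,
    so frequent toxic words are labeled even if the model misses them."""
    return sorted(set(model_offsets) | lexicon_spans(text))

# Hypothetical usage: the model predicted no toxic span here.
text = "You are such an idiot, honestly."
print(combine([], text))
```

This mirrors the abstract's design choice: the lexicon acts as a high-precision safety net on top of the character-aware model rather than replacing it.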