Analyzing and deciphering code-mixed data is imperative in academia and industry, in a multilingual country like India, in order to solve problems apropos of Natural Language Processing. This paper proposes a bidirectional long short-term memory (BiLSTM) network with an attention-based approach for solving the hope speech detection problem. Using this approach, an F1-score of 0.73 was achieved on the Malayalam-English dataset, ranking 9th among the 31 teams that participated in the competition.
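The abstract names the architecture but gives no implementation details, so the following is only a minimal sketch of a BiLSTM classifier with additive attention, the general family of model described. The vocabulary size, embedding and hidden dimensions, and the number of output classes are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    """BiLSTM with additive attention for sequence classification.

    A rough sketch of the architecture family named in the abstract;
    all hyperparameters below are assumed for illustration.
    """

    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=64, num_classes=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # Additive attention: score each timestep, then softmax over the sequence.
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):                           # (batch, seq_len)
        embedded = self.embedding(token_ids)                # (batch, seq_len, embed_dim)
        outputs, _ = self.bilstm(embedded)                  # (batch, seq_len, 2*hidden_dim)
        weights = torch.softmax(self.attn(outputs), dim=1)  # (batch, seq_len, 1)
        context = (weights * outputs).sum(dim=1)            # attention-weighted summary
        return self.classifier(context)                     # (batch, num_classes)


# Hypothetical usage on a batch of padded token-id sequences.
model = BiLSTMAttention()
batch = torch.randint(1, 20000, (4, 50))  # 4 sentences, 50 tokens each
logits = model(batch)                     # shape: (4, 3)
```

The attention layer replaces the usual "take the last hidden state" step: each timestep of the BiLSTM output is scored, and the softmax-weighted sum forms the sentence representation passed to the classifier.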