In this paper, we work with a hope speech detection corpus that includes English, Tamil, and Malayalam datasets. We present a two-phase mechanism to detect hope speech. In the first phase, we build a classifier to identify the language of the text. In the second phase, we build a classifier to label the text as hope speech, non-hope speech, or not-lang. Experimental results show that hope speech detection is challenging and that there is scope for improvement.
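The two-phase pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the model choices (TF-IDF features with logistic regression via scikit-learn), the toy training texts, and the `predict` helper are all assumptions made for the example.

```python
# Hedged sketch of a two-phase hope speech pipeline: phase 1 identifies the
# language of the text, phase 2 assigns a hope / non-hope label.
# All data and model choices below are illustrative, not from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Phase 1: language identification (character n-grams are a common choice).
lang_texts = ["hope is a good thing", "all will be well soon",
              "nambikkai irukku", "ellam nallathu nadakkum"]
lang_labels = ["english", "english", "tamil", "tamil"]
lang_clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),
    LogisticRegression(),
)
lang_clf.fit(lang_texts, lang_labels)

# Phase 2: hope speech classification on word-level features.
hope_texts = ["things will get better", "everything is terrible"]
hope_labels = ["hope", "non_hope"]
hope_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
hope_clf.fit(hope_texts, hope_labels)

def predict(text):
    """Run both phases and return (predicted language, hope label)."""
    lang = lang_clf.predict([text])[0]
    label = hope_clf.predict([text])[0]
    return lang, label

lang, label = predict("things will be better soon")
print(lang, label)
```

In a fuller version of this design, phase 1 would route each text to a per-language phase-2 classifier (one each for English, Tamil, and Malayalam), with texts outside the supported languages mapped to the not-lang label.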