
Amrita@LT-EDI-EACL2021: Hope Speech Detection on Multilingual Text


Publication date: 2021 · Language: English





Analyzing and deciphering code-mixed data is imperative in academia and industry in a multilingual country like India in order to solve Natural Language Processing problems. This paper proposes a bidirectional long short-term memory (BiLSTM) network with an attention-based approach to the hope speech detection problem. Using this approach, an F1-score of 0.73 (9th rank) was achieved on the Malayalam-English dataset, among a total of 31 teams that participated in the competition.
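As a rough illustration of this architecture, the following is a minimal sketch of a BiLSTM classifier with additive attention in Keras. The vocabulary size, sequence length, and other hyperparameters are illustrative assumptions, not values reported in the paper.

```python
# Minimal sketch of a BiLSTM-with-attention text classifier in Keras.
# VOCAB_SIZE, MAX_LEN, and EMBED_DIM are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN, EMBED_DIM = 30000, 100, 128

inputs = layers.Input(shape=(MAX_LEN,))
x = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(inputs)
# Return the full hidden-state sequence so attention can weigh each token.
h = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
# Additive attention: score each timestep, softmax, then a weighted sum.
scores = layers.Dense(1, activation="tanh")(h)
weights = layers.Softmax(axis=1)(scores)
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([h, weights])
# Three classes: hope speech / non-hope speech / not-language.
outputs = layers.Dense(3, activation="softmax")(context)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```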



Related research

In this paper we work with a hope speech detection corpus that includes English, Tamil, and Malayalam datasets. We present a two-phase mechanism to detect hope speech. In the first phase, we build a classifier to identify the language of the text. In the second phase, we build a classifier to assign hope-speech, non-hope-speech, or not-language labels. Experimental results show that hope speech detection is challenging and there is scope for improvement.
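As a rough illustration of such a two-phase pipeline (not the authors' exact system), the following sketch first trains a language identifier on character n-grams and then routes each text to a per-language hope-speech classifier. The training arrays and the per_language_data dictionary are hypothetical placeholders.

```python
# Sketch of the two-phase idea: identify the language first, then route
# the text to a per-language hope-speech classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Phase 1: character n-grams are a common, script-robust cue for language ID.
lang_id = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 3)),
    LogisticRegression(max_iter=1000),
)
lang_id.fit(train_texts, train_langs)  # hypothetical training arrays

# Phase 2: one hope / non-hope classifier per language.
hope_clf = {
    lang: make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                        LogisticRegression(max_iter=1000)).fit(X, y)
    for lang, (X, y) in per_language_data.items()  # hypothetical dict
}

def predict(text):
    lang = lang_id.predict([text])[0]
    if lang == "other":
        return "not-language"
    return hope_clf[lang].predict([text])[0]
```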
Language, as a significant part of communication, should be inclusive of equality and diversity. An internet user's language has a huge influence on peer users all over the world. People express their views through language on virtual platforms like Facebook, Twitter, YouTube, etc. People admire the success of others, pray for their well-being, and offer encouragement in their failures. Such inspirational comments are hope speech comments. At the same time, a group of users promotes discrimination based on gender, race, sexual orientation, and disability, and against other minorities. The current paper aims to identify hope speech comments, which are very important for moving on in life. Various machine learning and deep learning based models (such as support vector machines, logistic regression, convolutional neural networks, and recurrent neural networks) are employed to identify hope speech in the given YouTube comments. The YouTube comments are available in English, Tamil, and Malayalam and are part of the task "EACL-2021: Hope Speech Detection for Equality, Diversity and Inclusion".
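A minimal baseline along the lines this abstract names, comparing two of the classical models under a shared TF-IDF representation, might look as follows; train_texts and train_labels are hypothetical arrays and the settings are illustrative, not the authors' configuration.

```python
# Illustrative comparison of classical baselines with 5-fold cross-validation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("linear SVM", LinearSVC())]:
    pipe = make_pipeline(TfidfVectorizer(sublinear_tf=True), clf)
    # Weighted F1 matches the shared task's class-imbalanced evaluation.
    scores = cross_val_score(pipe, train_texts, train_labels,
                             cv=5, scoring="f1_weighted")
    print(f"{name}: weighted F1 = {scores.mean():.3f}")
```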
In this paper, we describe our approach to utilizing pre-trained models for the task of hope speech detection. We participated in Task 2: Hope Speech Detection for Equality, Diversity and Inclusion at LT-EDI-2021 @ EACL 2021. The goal of this task is to predict the presence of hope speech, along with the presence of samples that do not belong to the same language in the dataset. We describe our approach to fine-tuning RoBERTa for hope speech detection in English, and to fine-tuning XLM-RoBERTa for hope speech detection in Tamil and Malayalam, two low-resource Indic languages. We demonstrate the performance of our approach on classifying text into hope-speech, non-hope, and not-language. Our approach ranked 1st in English (F1 = 0.93), 1st in Tamil (F1 = 0.61), and 3rd in Malayalam (F1 = 0.83).
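A minimal fine-tuning sketch with the Hugging Face Transformers library, assuming datasets with "text" and "label" columns; the dataset objects and hyperparameters are illustrative assumptions, not the authors' reported setup.

```python
# Sketch of fine-tuning XLM-RoBERTa for the 3-way classification task.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=3)  # hope / non-hope / not-language

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

# raw_train / raw_dev are hypothetical datasets.Dataset objects with
# "text" and "label" columns.
train_ds = raw_train.map(tokenize, batched=True)
eval_ds = raw_dev.map(tokenize, batched=True)

args = TrainingArguments(output_dir="xlmr-hope",
                         num_train_epochs=3,
                         per_device_train_batch_size=16,
                         learning_rate=2e-5)
Trainer(model=model, args=args,
        train_dataset=train_ds, eval_dataset=eval_ds).train()
```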
This paper mainly introduces the relevant content of the task "Hope Speech Detection for Equality, Diversity, and Inclusion at LT-EDI-2021, EACL 2021". A total of three language datasets were provided, and we chose the English dataset to complete this task. The specific task objective is to classify the given speech into 'Hope speech', 'Not Hope speech', and 'Not in intended language'. In terms of method, we use a fine-tuned ALBERT model and K-fold cross-validation to accomplish this task. In the end, we achieved a good position on the task's rank list, with a final F1 score of 0.93, tying for first place. However, we will continue to try to improve our methods to get better results in future work.
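A sketch of the K-fold evaluation loop described here, with stratified splits; fine_tune_albert is a hypothetical helper standing in for the fine-tuning step, and texts and labels are assumed to be NumPy arrays.

```python
# Stratified K-fold cross-validation around a fine-tuning step.
import numpy as np
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
fold_scores = []
for train_idx, val_idx in skf.split(texts, labels):
    # fine_tune_albert is a hypothetical placeholder for the training step.
    model = fine_tune_albert(texts[train_idx], labels[train_idx])
    preds = model.predict(texts[val_idx])
    fold_scores.append(f1_score(labels[val_idx], preds, average="weighted"))
print(f"mean weighted F1 across folds: {np.mean(fold_scores):.3f}")
```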
This paper aims to describe the approach we used to detect hope speech in the HopeEDI dataset. We experimented with two approaches. In the first approach, we used contextual embeddings to train classifiers using logistic regression, random forest, SVM, and LSTM-based models. The second approach involved a majority-voting ensemble of 11 models obtained by fine-tuning pre-trained transformer models (BERT, ALBERT, RoBERTa, IndicBERT) after adding an output layer. We found that the second approach was superior for English, Tamil, and Malayalam. Our solution got a weighted F1 score of 0.93, 0.75, and 0.49 for English, Malayalam, and Tamil respectively. Our solution ranked 1st in English, 8th in Malayalam, and 11th in Tamil.
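A hard majority vote over an ensemble, as described here, reduces to tallying each model's predicted label per example; models is a hypothetical list of fitted classifiers exposing a predict method.

```python
# Hard majority vote across an ensemble of classifiers.
from collections import Counter

def majority_vote(models, texts):
    # Collect each model's label per text, then take the most common one.
    all_preds = [m.predict(texts) for m in models]
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*all_preds)]

# e.g. labels = majority_vote(models, test_texts)
```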


