In this shared task, this paper proposes a method that combines a BERT-based word vector model with an LSTM prediction model to predict the valence and arousal values of a text. The BERT-based word vectors are 768-dimensional, and each word vector in a sentence is fed sequentially into the LSTM model for prediction. Experimental results show that our proposed method outperforms the Lasso regression baseline.
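The architecture described above (a sequence of 768-dimensional BERT word vectors fed step by step into an LSTM, whose final hidden state is mapped to a valence and an arousal score) can be sketched as follows. This is a minimal NumPy forward-pass illustration, not the authors' implementation; the hidden size of 128 and the random initialization are assumptions, as the paper does not specify them.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMRegressor:
    """Minimal LSTM forward pass: a sequence of 768-dim word vectors
    is consumed token by token; the final hidden state is projected
    to two real values, (valence, arousal).
    Hidden size and initialization are hypothetical choices."""

    def __init__(self, input_dim=768, hidden_dim=128, out_dim=2, seed=0):
        rng = np.random.default_rng(seed)
        # One stacked weight block per gate: input, forget, cell, output.
        self.W = rng.normal(0.0, 0.02, (4 * hidden_dim, input_dim + hidden_dim))
        self.b = np.zeros(4 * hidden_dim)
        # Linear regression head on the last hidden state.
        self.W_out = rng.normal(0.0, 0.02, (out_dim, hidden_dim))
        self.b_out = np.zeros(out_dim)
        self.hidden_dim = hidden_dim

    def forward(self, word_vectors):
        H = self.hidden_dim
        h = np.zeros(H)  # hidden state
        c = np.zeros(H)  # cell state
        for x in word_vectors:  # feed each word vector sequentially
            z = self.W @ np.concatenate([x, h]) + self.b
            i = sigmoid(z[0:H])          # input gate
            f = sigmoid(z[H:2 * H])      # forget gate
            g = np.tanh(z[2 * H:3 * H])  # candidate cell state
            o = sigmoid(z[3 * H:4 * H])  # output gate
            c = f * c + i * g
            h = o * np.tanh(c)
        return self.W_out @ h + self.b_out  # (valence, arousal)
```

In practice the word vectors would come from a pretrained BERT encoder and the weights would be trained with a regression loss (e.g. mean squared error against the gold valence/arousal scores); the sketch only shows the data flow.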