People convey their intention and attitude through the linguistic styles of the text they write. In this study, we investigate lexicon usage across styles through two lenses: human perception and machine word importance, since words differ in the strength of the stylistic cues they provide. To collect labels of human perception, we curate a new dataset, Hummingbird, on top of benchmark style datasets. We have crowd workers highlight the representative words in a text that make them think the text has a given style: politeness, sentiment, offensiveness, or one of five emotion types. We then compare these human word labels with word importance derived from a popular fine-tuned style classifier, BERT. Our results show that BERT often identifies content words irrelevant to the target style as important for style prediction, and that humans do not perceive styles the same way, even though human- and machine-identified words share significant overlap for some styles (e.g., positive sentiment and joy).
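To illustrate how machine word importance from a fine-tuned BERT classifier might be derived and compared against human highlights, here is a minimal sketch. The checkpoint name, the gradient-norm saliency measure, and the Jaccard overlap metric are illustrative assumptions, not necessarily the paper's exact procedure.

```python
# Minimal sketch: compare human-highlighted style words with token importance
# from a fine-tuned BERT style classifier. The checkpoint, the saliency
# measure, and the overlap metric are assumptions made for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "textattack/bert-base-uncased-SST-2"  # stand-in sentiment classifier
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

def machine_important_words(text, top_k=3):
    """Rank tokens by the L2 norm of the gradient of the predicted-class
    logit with respect to the input embeddings (a simple saliency score)."""
    enc = tokenizer(text, return_tensors="pt")
    embeds = model.get_input_embeddings()(enc["input_ids"])
    embeds.retain_grad()
    out = model(inputs_embeds=embeds, attention_mask=enc["attention_mask"])
    out.logits[0, out.logits[0].argmax()].backward()
    saliency = embeds.grad.norm(dim=-1).squeeze(0)  # one score per token
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    scored = [(t, s) for t, s in zip(tokens, saliency.tolist())
              if t not in ("[CLS]", "[SEP]")]       # drop special tokens
    scored.sort(key=lambda x: -x[1])
    return {t for t, _ in scored[:top_k]}           # note: may contain wordpieces

def overlap(human_words, machine_words):
    """Jaccard overlap between human- and machine-highlighted word sets."""
    h, m = {w.lower() for w in human_words}, set(machine_words)
    return len(h & m) / len(h | m) if (h | m) else 0.0

text = "thank you so much for your wonderful help"
human = {"thank", "wonderful"}  # hypothetical crowd-worker highlights
machine = machine_important_words(text)
print("machine-important words:", machine)
print("Jaccard overlap with human highlights:", overlap(human, machine))
```

A gradient-based saliency is only one way to obtain per-word importance; attention weights or perturbation-based attribution would slot into the same comparison pipeline.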