BERTScore, a recently proposed automatic metric for machine translation quality, uses BERT, a large pre-trained language model, to evaluate candidate translations with respect to a gold translation. Taking advantage of BERT's semantic and syntactic abilities, BERTScore seeks to avoid the flaws of earlier approaches like BLEU, instead scoring candidate translations based on their semantic similarity to the gold sentence. However, BERT is not infallible; while its performance on NLP tasks has set a new state of the art in general, studies of specific syntactic and semantic phenomena have shown cases where BERT's performance deviates from that of humans. This naturally raises the questions we address in this paper: what are the strengths and weaknesses of BERTScore? Do they relate to known weaknesses on the part of BERT? We find that while BERTScore can detect when a candidate differs from a reference in important content words, it is less sensitive to smaller errors, especially if the candidate is lexically or stylistically similar to the reference.
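For concreteness, the sketch below shows how a candidate is scored against a reference using the `bert-score` PyPI package (an assumption on our part; the abstract does not name a specific implementation). The two hypothetical candidates mirror the error types discussed above: one changes an important content word, the other makes only a small surface error while staying lexically close to the reference.

```python
# A minimal sketch of scoring candidates with BERTScore, assuming the
# `bert-score` package is installed (pip install bert-score).
# The sentences are illustrative and not from the paper's test data.
from bert_score import score

reference = ["The delegates approved the new trade agreement yesterday."]

# Candidate 1: differs from the reference in an important content word.
cand_content = ["The delegates rejected the new trade agreement yesterday."]

# Candidate 2: lexically close to the reference, with only a small error.
cand_small = ["The delegates approved the new trade agreement yesterdays."]

for cand in (cand_content, cand_small):
    # score() returns precision, recall, and F1 tensors, one entry per pair.
    P, R, F1 = score(cand, reference, lang="en", verbose=False)
    print(f"{cand[0]!r}: F1 = {F1.item():.4f}")
```

If the abstract's finding holds, the content-word change should be penalized noticeably more than the small surface error, even though both candidates differ from the reference by a single token.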