Multi-turn response selection models have recently shown performance comparable to humans on several benchmark datasets. However, in real-world environments, these models often exhibit weaknesses, such as making incorrect predictions that rely heavily on superficial patterns rather than a comprehensive understanding of the context. For example, these models often assign a high score to a wrong response candidate that contains several keywords related to the context but uses an inconsistent tense. In this study, we analyze the weaknesses of open-domain Korean multi-turn response selection models and publish an adversarial dataset to evaluate these weaknesses. We also propose a strategy for building a model that is robust in this adversarial environment.
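To make the failure mode concrete, below is a minimal sketch of a cross-encoder response selection scorer, a common architecture for this task. The encoder choice (klue/bert-base) and the single-logit scoring head are assumptions for illustration, not the paper's exact setup, and the head is untrained here, so scores are only meaningful after fine-tuning on a response selection dataset.

```python
# Sketch of a cross-encoder scorer for multi-turn response selection.
# NOTE: klue/bert-base and the regression head are illustrative choices,
# not the configuration used in the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "klue/bert-base"  # hypothetical Korean encoder

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=1)
model.eval()

def score_response(context_turns, candidate):
    """Score a (multi-turn context, response candidate) pair.

    The context turns are joined with the tokenizer's [SEP] token and
    encoded together with the candidate as a single text pair; a higher
    logit means the model considers the candidate a better next turn.
    """
    context = tokenizer.sep_token.join(context_turns)
    inputs = tokenizer(context, candidate, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return model(**inputs).logits.squeeze().item()

# An adversarial negative shares keywords with the context but switches
# to future tense; a model relying on keyword overlap may still rank it
# above the consistent response.
context = ["어제 영화 봤어?", "응, 정말 재미있었어."]
positive = "무슨 영화였는데?"
adversarial = "내일 영화 볼 거야, 정말 재미있을 거야."  # keyword overlap, wrong tense
print(score_response(context, positive), score_response(context, adversarial))
```

Comparing the two scores after fine-tuning is essentially the evaluation the adversarial dataset enables: a robust model should rank the tense-consistent response above the keyword-matched but tense-inconsistent one.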