Humans use commonsense reasoning (CSR) implicitly to produce natural and coherent responses in conversations. Aiming to close the gap between current response generation (RG) models and human communication abilities, we want to understand why RG models respond as they do by probing RG models' understanding of the commonsense reasoning that elicits proper responses. We formalize the problem by framing commonsense as a latent variable in the RG task and using explanations for responses as a textual form of commonsense. We collect 6k annotated explanations justifying responses from four dialogue datasets, ask humans to verify them, and propose two probing settings to evaluate RG models' CSR capabilities. Probing results show that models fail to capture the logical relations between commonsense explanations and responses, and that fine-tuning on in-domain data and increasing model size do not lead to an understanding of CSR for RG. We hope our study motivates more research in making RG models emulate the human reasoning process in pursuit of smooth human-AI communication.
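One way to make the latent-variable framing concrete (a sketch only; the symbols $c$ for the dialogue context, $z$ for the latent commonsense, and $r$ for the response are illustrative and not necessarily the paper's exact notation):

```latex
% Response generation with commonsense as a latent variable:
% the response distribution marginalizes over latent commonsense z.
P(r \mid c) = \sum_{z} P(r \mid c, z)\, P(z \mid c)
```

Under this view, the annotated explanations act as textual realizations of $z$, so probing amounts to testing whether a model's scoring of $r$ given $(c, z)$ is actually sensitive to the logical relation between the explanation and the response.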