Large pre-trained language models (PLMs) have led to great success on various commonsense question answering (QA) tasks in an end-to-end fashion. However, little attention has been paid to what commonsense knowledge is needed to deeply characterize these QA tasks. In this work, we propose to categorize the semantics needed for these tasks, using SocialIQA as an example. Building on our dataset of labeled social knowledge categories on top of SocialIQA, we further train neural QA models to incorporate these social knowledge categories together with relation information from a knowledge base. Unlike previous work, we observe that our models with semantic categorizations of social knowledge can achieve comparable performance with a relatively simple model and a smaller size than other, more complex approaches.
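To make the modeling idea concrete, the following is a minimal sketch of one way a multiple-choice QA scorer could fuse a social-knowledge-category signal with a PLM encoder. This is not the paper's released implementation: the RoBERTa-base encoder, the four-category inventory, the concatenation fusion, and every name in the snippet are illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of fusing a social-knowledge-category
# signal into a multiple-choice QA scorer. Assumptions: RoBERTa-base encoder,
# a small hypothetical category set, and simple concatenation fusion.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

# Hypothetical category inventory; the paper's actual taxonomy may differ.
CATEGORIES = ["feelings", "attributes", "behaviour", "consequences"]

class CategoryAwareChoiceScorer(nn.Module):
    def __init__(self, encoder_name: str = "roberta-base", cat_dim: int = 64):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Learned embedding per social knowledge category.
        self.cat_embed = nn.Embedding(len(CATEGORIES), cat_dim)
        # Scores each (context + question, answer) pair fused with its category.
        self.scorer = nn.Linear(hidden + cat_dim, 1)

    def forward(self, input_ids, attention_mask, category_ids):
        # Encode [batch * n_choices, seq_len] and pool the [CLS] position.
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]
        # Concatenate the category embedding onto the pooled representation.
        fused = torch.cat([pooled, self.cat_embed(category_ids)], dim=-1)
        return self.scorer(fused).squeeze(-1)  # one logit per choice

# Usage: score three answer choices for one SocialIQA-style question.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
context = "Alex baked a cake for Taylor's birthday."
question = "How would Taylor feel afterwards?"
choices = ["grateful", "angry", "indifferent"]
enc = tokenizer([f"{context} {question}"] * len(choices), choices,
                padding=True, truncation=True, return_tensors="pt")
# Assume an upstream tagger labeled this question with the "feelings" category.
cat_ids = torch.full((len(choices),), CATEGORIES.index("feelings"))
model = CategoryAwareChoiceScorer()
logits = model(enc["input_ids"], enc["attention_mask"], cat_ids)
print(choices[logits.argmax().item()])
```

Because the category signal enters as a small learned embedding rather than extra retrieved text, a design like this keeps the model close to a plain PLM classifier in parameter count, which is consistent with the abstract's claim of comparable performance at a smaller size.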