Recent methods based on pre-trained language models have shown strong supervised performance on commonsense reasoning. However, they rely on expensive data annotation and time-consuming training. We therefore focus on unsupervised commonsense reasoning. We show the effectiveness of using a common framework, Natural Language Inference (NLI), to solve diverse commonsense reasoning tasks. By leveraging transfer learning from large NLI datasets, and injecting crucial knowledge from commonsense sources such as ATOMIC 2020 and ConceptNet, our method achieves state-of-the-art unsupervised performance on two commonsense reasoning tasks: WinoWhy and CommonsenseQA. Further analysis demonstrates the benefits of multiple categories of knowledge, but questions involving quantities and antonyms remain challenging.
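To make the reformulation concrete, here is a minimal sketch (not the authors' actual pipeline) of how a multiple-choice commonsense question can be recast as NLI: each candidate answer becomes a hypothesis, and the question, augmented with an injected commonsense fact (ConceptNet- or ATOMIC-style), becomes the premise. The real method would score entailment with a model trained on large NLI datasets; here a hypothetical word-overlap stub stands in for the scorer, and the hypothesis template is hand-written for this one example, so everything runs self-contained.

```python
import re


def nli_entailment_score(premise: str, hypothesis: str) -> float:
    """Stub entailment scorer: fraction of hypothesis words found in the premise.

    A placeholder for a pretrained NLI model's entailment probability.
    """
    p = set(re.findall(r"[a-z]+", premise.lower()))
    h = set(re.findall(r"[a-z]+", hypothesis.lower()))
    return len(p & h) / max(len(h), 1)


def answer_by_nli(question: str, knowledge: str, choices: list[str], template: str) -> str:
    """Pick the answer whose hypothesis is best entailed by the premise.

    The premise is the question plus an injected commonsense fact; each
    choice is slotted into a hypothesis template and scored.
    """
    premise = f"{knowledge} {question}"
    return max(choices, key=lambda c: nli_entailment_score(premise, template.format(c)))


best = answer_by_nli(
    "Where would you keep a book that you are currently reading?",
    # Injected commonsense fact (illustrative; ConceptNet-style):
    "A bedside table is a good place to keep a book you are reading.",
    ["library", "bedside table", "bookstore"],
    "You would keep it on the {}.",
)
print(best)  # → bedside table
```

The point of the sketch is the framing, not the scorer: because the task is expressed as premise-hypothesis scoring, any off-the-shelf NLI model can be dropped in without task-specific training, which is what enables the unsupervised transfer described above.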