Current commonsense reasoning research focuses on developing models that use commonsense knowledge to answer multiple-choice questions. However, systems designed to answer multiple-choice questions may not be useful in applications that do not provide a small list of candidate answers to choose from. As a step towards making commonsense reasoning research more realistic, we propose to study open-ended commonsense reasoning (OpenCSR) --- the task of answering a commonsense question without any pre-defined choices --- using as a resource only a corpus of commonsense facts written in natural language. OpenCSR is challenging due to a large decision space, and because many questions require implicit multi-hop reasoning. As an approach to OpenCSR, we propose DrFact, an efficient Differentiable model for multi-hop Reasoning over knowledge Facts. To evaluate OpenCSR methods, we adapt several popular commonsense reasoning benchmarks, and collect multiple new answers for each test question via crowd-sourcing. Experiments show that DrFact outperforms strong baseline methods by a large margin.
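The abstract describes multi-hop reasoning over a corpus of natural-language facts in a differentiable way. As a minimal, hypothetical sketch (not the paper's actual DrFact implementation), the core idea can be illustrated with a fact-to-fact transition matrix: probability mass starts on facts retrieved for the question and "hops" to related facts via shared concepts, using ordinary matrix-vector products so the whole process stays differentiable. The toy facts, matrix values, and the `multi_hop` helper below are all illustrative assumptions.

```python
import numpy as np

# Hypothetical toy illustration of differentiable multi-hop reasoning
# over a fact corpus (an assumption-laden sketch, not DrFact itself).
# F[i][j] > 0 means fact j shares a concept with fact i, so probability
# mass can "hop" from fact i to fact j in one step.

facts = [
    "trees produce oxygen",           # 0
    "oxygen is needed for breathing", # 1
    "breathing supplies cells with energy",  # 2
]

# Fact-to-fact transition matrix (row i distributes mass over its links).
F = np.array([
    [0.0, 1.0, 0.0],  # fact 0 links to fact 1 via the concept "oxygen"
    [0.0, 0.0, 1.0],  # fact 1 links to fact 2 via the concept "breathing"
    [0.0, 0.0, 0.0],  # fact 2 has no outgoing links
])

def multi_hop(initial, hops):
    """Propagate a distribution over facts for `hops` reasoning steps."""
    dist = initial
    for _ in range(hops):
        dist = dist @ F  # one differentiable "hop"
    return dist

# All mass starts on fact 0; after two hops it reaches fact 2.
start = np.array([1.0, 0.0, 0.0])
print(multi_hop(start, 2))  # -> [0. 0. 1.]
```

Because every step is a matrix product, gradients can flow through the hops end to end, which is the property that makes this style of retrieval trainable; scoring answer concepts from the final fact distribution would be an additional step not shown here.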