Deep learning (DL) based language models achieve high performance on various benchmarks for Natural Language Inference (NLI). Meanwhile, symbolic approaches to NLI receive less attention. Both approaches (symbolic and DL) have their advantages and weaknesses, yet no existing method combines them in a single system to solve the NLI task. To merge symbolic and deep learning methods, we propose an inference framework called NeuralLog, which utilizes both a monotonicity-based logical inference engine and a neural network language model for phrase alignment. Our framework models the NLI task as a classic search problem and uses the beam search algorithm to search for optimal inference paths. Experiments show that our joint logic and neural inference system improves accuracy on the NLI task and can achieve state-of-the-art accuracy on the SICK and MED datasets.
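The search formulation above can be sketched as a generic beam search over inference paths. This is a minimal illustration, not the paper's implementation: `expand` and `score` are hypothetical placeholders standing in for the monotonicity-based inference engine and the neural alignment scorer, and the beam width and depth limit are arbitrary defaults.

```python
import heapq

def beam_search(premise, hypothesis, expand, score, beam_width=5, max_depth=10):
    """Search for an inference path from premise to hypothesis.

    `expand(sentence)` yields candidate next sentences produced by applying
    an inference rule; `score(sentence, hypothesis)` returns a higher value
    the closer `sentence` is to `hypothesis`. Both are placeholder
    interfaces assumed for this sketch.
    """
    # Each beam entry is (score, path); every path starts at the premise.
    beam = [(score(premise, hypothesis), [premise])]
    for _ in range(max_depth):
        candidates = []
        for _, path in beam:
            if path[-1] == hypothesis:       # goal reached: proof path found
                return path
            for nxt in expand(path[-1]):
                candidates.append((score(nxt, hypothesis), path + [nxt]))
        if not candidates:
            break
        # Prune to the top-`beam_width` scoring partial paths.
        beam = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
    return None  # no inference path found within the depth limit
```

In the paper's setting, reaching the hypothesis via valid monotonicity rewrites would support an entailment label, while search failure suggests neutral or contradiction, decided by further checks.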