We investigate whether a model can learn natural language through interaction with only minimal linguistic input. To address this question, we design and implement an interactive language-learning game that learns compositional logical semantic representations. The game allows us to explore the benefits of logical inference for natural language learning. Evaluation shows that the model accurately narrows down the candidate logical representations for words over the course of the game, suggesting that it successfully learns lexical mappings from scratch.
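The narrowing-down process the abstract describes can be pictured as candidate elimination over a lexicon. The sketch below is not the paper's implementation; the inventory of logical forms, the scene representation, and the helper names are all hypothetical assumptions chosen only to illustrate how interaction feedback prunes a word's candidate meanings.

```python
# Minimal sketch (assumed toy setup, not the paper's system) of candidate
# elimination for learning word-to-logical-form mappings from interaction.

# Hypothetical toy inventory of logical constants a word might denote.
LOGICAL_FORMS = ["red", "blue", "circle", "square"]

def init_lexicon(vocabulary):
    """Every word starts with the full set of logical forms as candidates."""
    return {word: set(LOGICAL_FORMS) for word in vocabulary}

def update(lexicon, utterance, scene, target):
    """Prune candidate meanings inconsistent with one interaction.

    `scene` maps object ids to the set of attributes that hold of them;
    `target` is the object the interaction revealed the utterance to denote.
    """
    for word in utterance:
        # Keep only candidates that actually hold of the target object.
        lexicon[word] &= scene[target]
    return lexicon

# Usage: two interactions narrow the meaning of "red" to a single candidate.
lex = init_lexicon(["red", "thing"])
scene1 = {"a": {"red", "circle"}, "b": {"blue", "square"}}
update(lex, ["red", "thing"], scene1, "a")   # "red" -> {red, circle}
scene2 = {"c": {"red", "square"}, "d": {"blue", "circle"}}
update(lex, ["red", "thing"], scene2, "c")   # "red" -> {red}
print(lex["red"])
```

Each interaction intersects a word's candidate set with the attributes of the denoted object, so the hypothesis space shrinks monotonically over the course of the game.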