We analyze language change over time in a collaborative, goal-oriented instructional task, where utility-maximizing participants form conventions and increase their expertise. Prior work studied such scenarios mostly in the context of reference games, and consistently found that language complexity is reduced along multiple dimensions, such as utterance length, as conventions are formed. In contrast, we find that, given the ability to increase instruction utility, instructors increase language complexity along these previously studied dimensions to better collaborate with increasingly skilled instruction followers.