Coherent discourse is distinguished from a mere collection of utterances by the satisfaction of a diverse set of constraints, for example choice of expression, logical relation between denoted events, and implicit compatibility with world-knowledge. Do neural language models encode such constraints? We design an extendable set of test suites addressing different aspects of discourse and dialogue coherence. Unlike most previous coherence evaluation studies, we address specific linguistic devices beyond sentence order perturbations, which allow for a more fine-grained analysis of what constitutes coherence and what neural models trained on a language modelling objective are capable of encoding. Extending the targeted evaluation paradigm for neural language models (Marvin and Linzen, 2018) to phenomena beyond syntax, we show that this paradigm is equally suited to evaluate linguistic qualities that contribute to the notion of coherence.
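To make the targeted-evaluation setup concrete, below is a minimal sketch of how such a test item could be scored: each item is a minimal pair of a coherent and an incoherent discourse, and the model "passes" if it assigns higher probability to the coherent variant. The model choice (GPT-2 via Hugging Face transformers) and the example pair are illustrative assumptions, not taken from the paper's test suites.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Illustrative model choice; the paper's actual models and test suites may differ.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def log_likelihood(text: str) -> float:
    """Total log-probability the language model assigns to the token sequence."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood per predicted token; undo the averaging.
    return -out.loss.item() * (ids.size(1) - 1)

# A hypothetical minimal pair targeting the logical relation signalled by "therefore".
coherent = "Anna missed the bus. Therefore, she arrived late to the meeting."
incoherent = "Anna missed the bus. Therefore, she arrived early to the meeting."

# The item is passed if the coherent variant receives the higher score.
print(log_likelihood(coherent) > log_likelihood(incoherent))
```

Aggregating pass rates over many such minimal pairs, grouped by the linguistic device they target (connectives, expression choice, world-knowledge compatibility), would yield the kind of fine-grained per-phenomenon accuracy the abstract describes.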