Abstract

Consistency of a model---that is, the invariance of its behavior under meaning-preserving alternations in its input---is a highly desirable property in natural language processing. In this paper we study the question: Are Pretrained Language Models (PLMs) consistent with respect to factual knowledge? To this end, we create ParaRel, a high-quality resource of cloze-style English query paraphrases. It contains a total of 328 paraphrases for 38 relations. Using ParaRel, we show that the consistency of all PLMs we experiment with is poor---though with high variance between relations. Our analysis of the representational spaces of PLMs suggests that they have a poor structure and are currently not suitable for representing knowledge robustly. Finally, we propose a method for improving model consistency and experimentally demonstrate its effectiveness.
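As a concrete illustration (not the authors' code), the kind of consistency probing the abstract describes can be sketched with a masked language model: query it with two paraphrases of the same relational pattern and check whether the top prediction is invariant. The two pattern strings below are hypothetical examples in ParaRel's style, and the model choice (bert-base-cased) is an assumption.

```python
# Minimal sketch of cloze-style consistency probing, assuming the
# HuggingFace transformers fill-mask pipeline and bert-base-cased.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-cased")

# Two hypothetical ParaRel-style paraphrases of a "capital-of" pattern.
patterns = [
    "The capital of France is [MASK].",
    "France's capital city is [MASK].",
]

# Take the top-ranked token for each paraphrase.
predictions = [fill_mask(p)[0]["token_str"].strip() for p in patterns]

# The model is consistent on this fact iff the prediction is invariant
# under the meaning-preserving rewording.
consistent = len(set(predictions)) == 1
print(predictions, "consistent:", consistent)
```

Aggregating such agreement checks over many paraphrase pairs per relation would yield a per-relation consistency score, which is the granularity at which the paper reports its high variance between relations.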