Similarity measures are a vital tool for understanding how language models represent and process language. Standard representational similarity measures such as cosine similarity and Euclidean distance have been successfully used in static word embedding models to understand how words cluster in semantic space. Recently, these measures have been applied to embeddings from contextualized models such as BERT and GPT-2. In this work, we call into question the informativity of such measures for contextualized language models. We find that a small number of rogue dimensions, often just 1-3, dominate these measures. Moreover, we find a striking mismatch between the dimensions that dominate similarity measures and those which are important to the behavior of the model. We show that simple postprocessing techniques such as standardization are able to correct for rogue dimensions and reveal underlying representational quality. We argue that accounting for rogue dimensions is essential for any similarity-based analysis of contextual language models.
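A minimal sketch of the idea described above, not the authors' released code: z-score each embedding dimension over a reference sample before computing cosine similarity, so that a few high-magnitude "rogue" dimensions no longer dominate the measure. All function and variable names here (standardize, dimension_contributions, the random stand-in embeddings) are illustrative assumptions.

import numpy as np

def standardize(embeddings, eps=1e-8):
    """Z-score each dimension using statistics from a reference sample."""
    mu = embeddings.mean(axis=0, keepdims=True)
    sigma = embeddings.std(axis=0, keepdims=True)
    return (embeddings - mu) / (sigma + eps)

def cosine(u, v):
    """Standard cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def dimension_contributions(u, v):
    """Fraction of the (unnormalized) dot product contributed by each dimension."""
    prod = u * v
    return prod / prod.sum()

# Toy example: random vectors plus one artificially inflated "rogue" dimension.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 768))   # stand-in for contextual embeddings
X[:, 0] += 50.0                    # dimension 0 becomes a rogue dimension

print("share of similarity from dim 0 (raw):",
      dimension_contributions(X[0], X[1])[0])
print("cosine before standardization:", cosine(X[0], X[1]))
std_X = standardize(X)
print("cosine after standardization:", cosine(std_X[0], std_X[1]))

In this toy setup nearly all of the raw cosine similarity comes from the single inflated dimension, while after standardization the similarity reflects all dimensions more evenly, mirroring the kind of correction the abstract attributes to simple postprocessing.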