The success of large-scale contextual language models has attracted great interest in probing what is encoded in their representations. In this work, we consider a new question: to what extent are contextual representations of concrete nouns aligned with corresponding visual representations? We design a probing model that evaluates how effective text-only representations are at distinguishing between matching and non-matching visual representations. Our findings show that language representations alone provide a strong signal for retrieving image patches from the correct object categories. Moreover, they are effective at retrieving specific instances of image patches, and textual context plays an important role in this process. Visually grounded language models slightly outperform text-only language models on instance retrieval, but greatly underperform humans. We hope our analyses inspire future research into understanding and improving the visual capabilities of language models.
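As a rough illustration of the probing setup described above, the sketch below shows one plausible realization: a frozen text-only language model supplies contextual noun representations, a small learned projection maps them into a visual feature space, and retrieval is scored by cosine similarity against image-patch features. All names, dimensions, and the linear-probe design are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class LinearProbe(nn.Module):
    """Small projection from text space to visual space (illustrative)."""

    def __init__(self, text_dim: int = 768, visual_dim: int = 512):
        super().__init__()
        # The probe is kept small so that any retrieval ability can be
        # attributed to the frozen language representations themselves.
        self.proj = nn.Linear(text_dim, visual_dim)

    def forward(self, text_feats: torch.Tensor) -> torch.Tensor:
        return nn.functional.normalize(self.proj(text_feats), dim=-1)


def rank_patches(probe: LinearProbe,
                 noun_feat: torch.Tensor,    # (text_dim,) contextual noun embedding
                 patch_feats: torch.Tensor   # (num_patches, visual_dim) patch features
                 ) -> torch.Tensor:
    """Return patch indices sorted from best to worst match by cosine similarity."""
    query = probe(noun_feat.unsqueeze(0))                    # (1, visual_dim)
    patches = nn.functional.normalize(patch_feats, dim=-1)   # (N, visual_dim)
    scores = (query @ patches.T).squeeze(0)                  # cosine similarities
    return scores.argsort(descending=True)


# Hypothetical usage: retrieval is counted as correct when a matching patch
# is ranked above the non-matching distractors.
probe = LinearProbe()
noun_feat = torch.randn(768)          # stand-in for a frozen LM representation
patch_feats = torch.randn(100, 512)   # stand-in for visual patch features
ranking = rank_patches(probe, noun_feat, patch_feats)
print(ranking[:5])
```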