We investigate the representations learned by vision and language models in tasks that require relational reasoning. Focusing on the problem of assessing the relative size of objects in abstract visual contexts, we analyse both one-step and two-step reasoning. For the latter, we construct a new dataset of three-image scenes and define a task that requires reasoning at the level of the individual images and across images in a scene. We probe the learned model representations using diagnostic classifiers. Our experiments show that pretrained multimodal transformer-based architectures can perform higher-level relational reasoning, and are able to learn representations for novel tasks and data that are very different from what was seen in pretraining.
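As a rough illustration of the probing setup described above, the following is a minimal sketch of a diagnostic classifier: a linear probe trained on frozen model embeddings to predict a relational label. The data, array shapes, and variable names are hypothetical stand-ins, not the authors' actual pipeline or dataset.

```python
# Minimal diagnostic-classifier sketch: train a linear probe on frozen
# embeddings and check whether a relational property (e.g. relative size)
# is decodable from them. All data below is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-ins for pooled multimodal transformer features and binary labels
# indicating, e.g., whether the first object is larger than the second.
embeddings = rng.normal(size=(1000, 768))
labels = rng.integers(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.2, random_state=0
)

# A simple linear probe: accuracy well above chance on held-out data
# suggests the relation is linearly decodable from the representations.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print("probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))
```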