Recent work has raised concerns about the inherent limitations of text-only pretraining. In this paper, we first demonstrate that reporting bias, the tendency of people to not state the obvious, is one of the causes of this limitation, and then investigate to what extent multimodal training can mitigate this issue. To accomplish this, we 1) generate the Color Dataset (CoDa), a dataset of human-perceived color distributions for 521 common objects; 2) use CoDa to analyze and compare the color distribution found in text, the distribution captured by language models, and human perception of color; and 3) investigate the performance differences between text-only and multimodal models on CoDa. Our results show that the distribution of colors that a language model recovers correlates more strongly with the inaccurate distribution found in text than with the ground truth, supporting the claim that reporting bias negatively impacts and inherently limits text-only training. We then demonstrate that multimodal models can leverage their visual training to mitigate these effects, providing a promising avenue for future research.
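To make the comparison in step 2 concrete, the sketch below compares a model's color distribution for a single object against a text-derived distribution and a human-annotated one using Spearman rank correlation. All numbers, the object, and the color set are invented for illustration; the paper's actual objects, annotation scheme, and correlation measure may differ.

```python
# Hypothetical example: per-object color distributions P(color | "banana").
# The probability values below are invented, not taken from CoDa.
COLORS = ["red", "green", "yellow", "brown"]

text_dist  = {"red": 0.05, "green": 0.15, "yellow": 0.55, "brown": 0.25}  # text co-occurrence counts
model_dist = {"red": 0.04, "green": 0.16, "yellow": 0.50, "brown": 0.30}  # a language model's predictions
human_dist = {"red": 0.01, "green": 0.24, "yellow": 0.70, "brown": 0.05}  # human annotators

def ranks(values):
    """Rank each value (1 = smallest), averaging any ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg_rank
        i = j + 1
    return r

def spearman(d1, d2):
    """Spearman rank correlation between two distributions over COLORS."""
    x = ranks([d1[c] for c in COLORS])
    y = ranks([d2[c] for c in COLORS])
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(spearman(model_dist, text_dist))   # → 1.0 (model tracks text exactly here)
print(spearman(model_dist, human_dist))  # → 0.8 (weaker match to human perception)
```

With these illustrative numbers, the model's distribution rank-correlates perfectly with the text distribution but only partially with the human one, mirroring the direction of the paper's finding; aggregating such per-object correlations across all 521 objects would yield the dataset-level comparison.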