Abstract Recently, multimodal transformer models have gained popularity because their performance on downstream tasks suggests they learn rich visual-linguistic representations. Focusing on zero-shot image retrieval tasks, we study three important factors that can impact the quality of learned representations: pretraining data, the attention mechanism, and loss functions. By pretraining models on six datasets, we observe that dataset noise and language similarity to our downstream task are important indicators of model performance. Through architectural analysis, we learn that models with a multimodal attention mechanism can outperform deeper models with modality-specific attention mechanisms. Finally, we show that successful contrastive losses used in the self-supervised learning literature do not yield similar performance gains when used in multimodal transformers.
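To make the last point concrete, the contrastive losses referenced here are typically InfoNCE-style objectives over paired image and text embeddings. The following is a minimal sketch of such a loss, not the paper's implementation; the function name, embedding shapes, and temperature value are illustrative assumptions.

```python
# Minimal sketch of an InfoNCE-style image-text contrastive loss
# (as used in the self-supervised learning literature); illustrative only.
import torch
import torch.nn.functional as F

def infonce_image_text_loss(image_emb: torch.Tensor,
                            text_emb: torch.Tensor,
                            temperature: float = 0.07) -> torch.Tensor:
    """Symmetric contrastive loss over a batch of paired embeddings.

    image_emb, text_emb: (batch, dim) tensors; matching rows are positive
    pairs, and all other rows in the batch act as in-batch negatives.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # (batch, batch) similarity matrix scaled by temperature
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)      # image -> text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)  # text -> image direction
    return (loss_i2t + loss_t2i) / 2

# Example usage with random embeddings
if __name__ == "__main__":
    img = torch.randn(8, 256)
    txt = torch.randn(8, 256)
    print(infonce_image_text_loss(img, txt).item())
```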