The Multimodal Transformer has proven to be a competitive model for multimodal tasks involving textual, visual and audio signals. However, as more modalities are involved, its late fusion by concatenation starts to have a negative impact on the model's performance. Besides, interpreting the model's predictions becomes difficult, as one would have to look at the different attention activation matrices. In order to overcome these shortcomings, we propose to perform late fusion by adding a GMU module, which effectively allows the model to weight modalities at instance level, improving its performance while providing a better interpretability mechanism. In the experiments, we compare our proposed model (MulT-GMU) against the original implementation (MulT-Concat) and a SOTA model tested on a movie genre classification dataset. Our approach, MulT-GMU, outperforms both MulT-Concat and the previous SOTA model.
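A minimal numerical sketch of the gating idea described above: instead of concatenating per-modality features, a GMU-style layer projects each modality into a shared space and combines them with per-instance gate weights. The dimensions, the softmax-normalised gate, and all function names here are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gmu_fuse(features, W_h, W_z):
    """Fuse per-modality feature vectors with a GMU-style gate (illustrative).

    features: list of M vectors (one per modality), shapes (d_m,)
    W_h:      list of M projections, W_h[m] has shape (d, d_m)
    W_z:      gate matrix, shape (M, sum(d_m)) -- one gate logit per modality
    Returns the fused vector (d,) and the gate weights (M,).
    """
    # Project each modality into a shared d-dimensional space.
    h = np.stack([np.tanh(W @ x) for W, x in zip(W_h, features)])  # (M, d)
    # One gate logit per modality from the concatenated raw inputs,
    # normalised with a softmax so the weights sum to 1 per instance.
    logits = W_z @ np.concatenate(features)                        # (M,)
    z = np.exp(logits - logits.max())
    z = z / z.sum()
    # Weighted sum replaces plain concatenation as the late-fusion step.
    fused = (z[:, None] * h).sum(axis=0)                           # (d,)
    return fused, z

# Toy instance: text (8-dim), visual (6-dim), audio (4-dim) features.
dims, d = [8, 6, 4], 5
feats = [rng.standard_normal(dm) for dm in dims]
W_h = [rng.standard_normal((d, dm)) * 0.1 for dm in dims]
W_z = rng.standard_normal((len(dims), sum(dims))) * 0.1
fused, gates = gmu_fuse(feats, W_h, W_z)
print(fused.shape, gates)
```

The gate weights `z` are what makes predictions inspectable: for each input instance, they directly state how much each modality contributed, instead of having to read attention activation matrices.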