Abstract

Large-scale pretraining and task-specific fine-tuning is now the standard methodology for many tasks in computer vision and natural language processing. Recently, a multitude of methods have been proposed for pretraining vision and language BERTs to tackle challenges at the intersection of these two key areas of AI. These models can be categorized into either single-stream or dual-stream encoders. We study the differences between these two categories, and show how they can be unified under a single theoretical framework. We then conduct controlled experiments to discern the empirical differences between five vision and language BERTs. Our experiments show that training data and hyperparameters are responsible for most of the differences between the reported results, but they also reveal that the embedding layer plays a crucial role in these massive models.
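The single-stream vs. dual-stream distinction mentioned above can be sketched in a few lines. The toy NumPy code below is an illustration of the two architectural patterns only, not the implementation of any of the models studied in the paper; the function names and the single-layer setup are assumptions made for this sketch.

```python
import numpy as np

def attention(q, k, v):
    # Scaled dot-product attention over row-vector token embeddings.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def single_stream_layer(text, image):
    # Single-stream: concatenate both modalities into one token
    # sequence and let a single self-attention act over all of it.
    x = np.concatenate([text, image], axis=0)
    return attention(x, x, x)

def dual_stream_layer(text, image):
    # Dual-stream: each modality first self-attends within its own
    # stream, then cross-attends to the other stream's states.
    t = attention(text, text, text)
    i = attention(image, image, image)
    t_out = attention(t, i, i)   # text queries attend to image keys/values
    i_out = attention(i, t, t)   # image queries attend to text keys/values
    return t_out, i_out
```

In this sketch the single-stream layer returns one fused sequence covering both modalities, while the dual-stream layer keeps two separate sequences whose lengths match the original text and image inputs; this is the structural difference the paper's unified framework abstracts over.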