Recent vision-language understanding approaches adopt a multi-modal transformer pre-training and fine-tuning paradigm. Prior work learns representations of text tokens and visual features with cross-attention mechanisms and captures their alignment solely through indirect signals. In this work, we propose to strengthen the alignment mechanism by incorporating image scene graph structures as a bridge between the two modalities, and by learning with new contrastive objectives. In a preliminary study on the challenging compositional visual question answering task, we show that the proposed approach achieves improved results, demonstrating its potential to enhance vision-language understanding.
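As a minimal sketch of what a contrastive alignment objective of this kind might look like — the abstract does not specify the exact loss, so the function names here are our own and the symmetric InfoNCE form is an assumption — one can contrast pooled scene-graph (image) embeddings against their paired text embeddings, treating the other pairs in the batch as negatives:

```python
import numpy as np

def log_softmax(x, axis=-1):
    """Numerically stable log-softmax along the given axis."""
    m = x.max(axis=axis, keepdims=True)
    z = x - m
    return z - np.log(np.exp(z).sum(axis=axis, keepdims=True))

def symmetric_info_nce(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over matched (image, text) embedding pairs.

    Row i of `image_emb` (e.g. a pooled scene-graph representation) is
    assumed to be the positive match for row i of `text_emb`; all other
    rows in the batch serve as in-batch negatives. Hypothetical sketch,
    not the paper's exact objective.
    """
    # L2-normalize so the dot products below are cosine similarities.
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (batch, batch) similarity matrix
    idx = np.arange(logits.shape[0])
    # Image-to-text direction: softmax over candidate texts per image.
    loss_i2t = -log_softmax(logits, axis=1)[idx, idx].mean()
    # Text-to-image direction: softmax over candidate images per text.
    loss_t2i = -log_softmax(logits, axis=0)[idx, idx].mean()
    return 0.5 * (loss_i2t + loss_t2i)
```

Minimizing this loss pulls each image embedding toward its paired text embedding and pushes it away from the other captions in the batch, which is the usual way such cross-modal alignment objectives are instantiated.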