Existing visual question answering (VQA) systems commonly use graph neural networks (GNNs) to extract visual relationships such as semantic or spatial relations. However, studies that use GNNs typically ignore the relative importance of each relation and simply concatenate the outputs of multiple relation encoders. In this paper, we propose a novel layer architecture that fuses multiple visual relations through an attention mechanism to address this issue. Specifically, we develop a model that uses the question embedding and the joint embedding of the encoders to obtain dynamic attention weights with respect to the question type. Using these learnable attention weights, the proposed model can efficiently exploit the visual relation features needed for a given question. Experimental results on the VQA 2.0 dataset demonstrate that the proposed model outperforms existing graph attention network-based architectures. Additionally, we visualize the attention weights and show that the proposed model assigns higher weights to the relations most relevant to the question.
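The core idea of the abstract can be sketched as question-conditioned attention over the outputs of several relation encoders: score each relation feature against the question embedding, normalize the scores with a softmax, and take the weighted sum. The sketch below is a minimal NumPy illustration of that fusion step; the function names, the concatenation-based scoring, and the weight matrix `W` are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_relations(question_emb, relation_feats, W):
    """Attention-weighted fusion of relation-encoder outputs (illustrative sketch).

    question_emb   : (q,) question embedding
    relation_feats : list of K relation feature vectors, each (d,)
    W              : (K, q + d) hypothetical learned scoring matrix
    """
    # Score each relation type against the question (assumed scoring scheme:
    # a linear layer over the concatenated question and relation features).
    scores = np.array([
        W[k] @ np.concatenate([question_emb, relation_feats[k]])
        for k in range(len(relation_feats))
    ])
    weights = softmax(scores)  # dynamic, question-dependent attention weights
    # Weighted sum replaces plain concatenation of encoder outputs.
    fused = sum(w * r for w, r in zip(weights, relation_feats))
    return fused, weights
```

For a spatial question ("what is left of the cup?") the learned weights would ideally concentrate on the spatial-relation encoder, and on the semantic encoder for questions about object roles, which is what the abstract's attention-weight visualizations are meant to show.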