Video Question Answering (VidQA) evaluation metrics have been limited to a single-word answer or selecting a phrase from a fixed set of phrases. These metrics limit the application scenarios of VidQA models. In this work, we leverage semantic roles derived from video descriptions to mask out certain phrases and introduce VidQAP, which poses VidQA as a fill-in-the-phrase task. To enable evaluation of answer phrases, we compute the relative improvement of the predicted answer compared to an empty string. To reduce the influence of language bias in VidQA datasets, we retrieve a video having a different answer for the same question. To facilitate research, we construct ActivityNet-SRL-QA and Charades-SRL-QA and benchmark them by extending three vision-language models. We perform extensive analysis and ablative studies to guide future work. Code and data are public.
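The empty-string baseline mentioned above can be made concrete with a minimal sketch. The exact scoring and normalization used by the paper are not given here, so this is an illustrative assumption: `metric` stands for any sentence-level similarity score (e.g., BLEU or BERTScore) normalized to [0, 1], and the normalization of the improvement is a hypothetical choice.

```python
# Minimal sketch of scoring a predicted answer phrase relative to an
# empty-string baseline. Assumptions (not from the paper): `metric` is
# any sentence-level similarity score normalized to [0, 1].

def relative_improvement(metric, predicted: str, reference: str) -> float:
    """Credit only the part of the prediction's score that exceeds
    what an empty answer would already receive from the metric."""
    baseline = metric("", reference)      # score of the empty string
    score = metric(predicted, reference)  # score of the actual prediction
    # Positive only when the prediction beats the empty baseline;
    # dividing by the remaining headroom is an illustrative assumption.
    return max(score - baseline, 0.0) / max(1.0 - baseline, 1e-8)
```

Under this sketch, a prediction identical to the reference scores 1, while a prediction no better than an empty string scores 0, which keeps metrics comparable across questions whose references differ in length.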