In Automated Claim Verification, we retrieve evidence from a knowledge base to determine the veracity of a claim. Intuitively, retrieving the correct evidence plays a crucial role in this process. Evidence selection is often tackled as a pairwise sentence classification task, i.e., we train a model to predict for each sentence individually whether it is evidence for the claim. In this work, we fine-tune document-level transformers to extract all evidence from a Wikipedia document at once. We show that this approach outperforms a comparable model that classifies sentences individually, on all relevant evidence selection metrics in FEVER. Our complete pipeline, built on this evidence selection procedure, produces a new state-of-the-art result on FEVER, a popular claim verification benchmark.
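To make the contrast with pairwise sentence classification concrete, the following is a minimal sketch of document-level evidence selection: the claim and an entire Wikipedia article are encoded together in one forward pass of a long-input transformer, and a small head scores every candidate sentence jointly. The choice of Longformer as the encoder, the mean-pooling of sentence tokens, the linear binary head, and the example claim and sentences are illustrative assumptions, not the exact architecture used in the paper.

```python
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModel

# Hypothetical choices for this sketch: Longformer as the long-input encoder,
# mean-pooling of each sentence's token states, and a binary evidence head.
MODEL_NAME = "allenai/longformer-base-4096"


class DocumentEvidenceSelector(nn.Module):
    """Scores all candidate sentences of a document in a single forward pass."""

    def __init__(self, model_name: str = MODEL_NAME):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, 2)

    def forward(self, input_ids, attention_mask, sentence_spans):
        # One joint encoding of "claim + full document".
        hidden = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state  # (batch, seq_len, hidden_size)

        all_logits = []
        for b, spans in enumerate(sentence_spans):
            # Pool each candidate sentence's token states and classify it
            # as evidence / not evidence for the claim.
            sentence_reps = torch.stack(
                [hidden[b, start:end].mean(dim=0) for start, end in spans]
            )
            all_logits.append(self.classifier(sentence_reps))  # (num_sents, 2)
        return all_logits


tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
claim = "The Eiffel Tower is located in Berlin."
document_sentences = [
    "The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
    "It is named after the engineer Gustave Eiffel.",
]

# Build one input of the form "<s> claim </s> sent_1 sent_2 ..." and record
# each sentence's token span so its states can be pooled later.
input_ids = [tokenizer.cls_token_id]
input_ids += tokenizer(claim, add_special_tokens=False)["input_ids"]
input_ids += [tokenizer.sep_token_id]
spans = []
for sentence in document_sentences:
    sentence_ids = tokenizer(sentence, add_special_tokens=False)["input_ids"]
    spans.append((len(input_ids), len(input_ids) + len(sentence_ids)))
    input_ids += sentence_ids

input_ids = torch.tensor([input_ids])
attention_mask = torch.ones_like(input_ids)

model = DocumentEvidenceSelector()
with torch.no_grad():
    sentence_logits = model(input_ids, attention_mask, [spans])[0]
print(sentence_logits.softmax(dim=-1))  # per-sentence evidence probabilities
```

By contrast, a pairwise baseline would encode each (claim, sentence) pair separately; encoding the whole page once lets the model see the context surrounding each candidate sentence when deciding whether it is evidence.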
Fact Extraction and VERification (FEVER) is a recently introduced task that consists of the following subtasks: (i) document retrieval, (ii) sentence retrieval, and (iii) claim verification. In this work, we focus on the subtask of sentence retrieval.
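The three subtasks compose into a pipeline. The skeleton below is only an illustration of that structure; the function names, signatures, and the top-k parameter are hypothetical placeholders rather than components of any particular system, while the label set (SUPPORTS, REFUTES, NOT ENOUGH INFO) is the one defined by FEVER.

```python
from typing import List, Tuple

FEVER_LABELS = ("SUPPORTS", "REFUTES", "NOT ENOUGH INFO")  # FEVER's verdict classes


def retrieve_documents(claim: str, k: int = 5) -> List[str]:
    """Subtask (i): return the titles of the k Wikipedia pages most relevant to the claim."""
    raise NotImplementedError  # e.g., entity linking or TF-IDF / dense retrieval


def retrieve_sentences(claim: str, pages: List[str]) -> List[Tuple[str, int]]:
    """Subtask (ii): return (page_title, sentence_index) pairs selected as evidence."""
    raise NotImplementedError  # the focus of this work


def verify_claim(claim: str, evidence: List[Tuple[str, int]]) -> str:
    """Subtask (iii): predict one of FEVER_LABELS given the claim and its evidence."""
    raise NotImplementedError


def fever_pipeline(claim: str) -> Tuple[str, List[Tuple[str, int]]]:
    # The verdict is predicted together with the evidence that supports it.
    pages = retrieve_documents(claim)
    evidence = retrieve_sentences(claim, pages)
    return verify_claim(claim, evidence), evidence
```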