Automated fact-checking at a large scale is a challenging task that has not been studied systematically until recently. Large, noisy document collections such as the web or news articles make the task more difficult. We describe a three-stage automated fact-checking system, named Quin+, that uses evidence retrieval and selection methods. We demonstrate that using dense passage representations leads to much higher evidence recall in a noisy setting. We also propose two sentence selection approaches: an embedding-based selection using a dense retrieval model, and a sequence labeling approach for context-aware selection. Quin+ is able to verify open-domain claims using results from web search engines.
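As a rough illustration of the embedding-based selection idea, the sketch below ranks candidate evidence sentences by the cosine similarity of their dense embeddings to the claim. It uses the sentence-transformers library and the all-MiniLM-L6-v2 encoder purely as a stand-in for the dense retrieval model described above; the claim and candidate sentences are made up for illustration and are not from the paper.

# Minimal sketch of embedding-based evidence sentence selection.
# Assumes the sentence-transformers library; the encoder name, claim,
# and candidate sentences are illustrative placeholders, not the
# components used by Quin+.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

claim = "Aspirin reduces the risk of heart attacks."
candidates = [
    "Low-dose aspirin has been shown to lower the incidence of myocardial infarction.",
    "The city council approved a new budget on Tuesday.",
    "Aspirin is a nonsteroidal anti-inflammatory drug.",
]

# Embed the claim and the candidate evidence sentences in the same space.
claim_emb = encoder.encode(claim, convert_to_tensor=True)
cand_embs = encoder.encode(candidates, convert_to_tensor=True)

# Rank candidates by cosine similarity to the claim and keep the top k.
scores = util.cos_sim(claim_emb, cand_embs)[0]
top_k = scores.topk(k=2)
for score, idx in zip(top_k.values, top_k.indices):
    print(f"{score.item():.3f}  {candidates[int(idx)]}")

The context-aware, sequence-labeling variant mentioned above would instead tag each sentence of a retrieved passage as evidence or not, so that the selector can take neighboring sentences into account rather than scoring each sentence in isolation.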