Given a natural language statement, how can we verify whether it is supported, refuted, or unknown according to a large-scale knowledge source like Wikipedia? Existing neural-network-based methods often regard a statement as a whole, whereas we argue that it is beneficial to decompose a statement into multiple verifiable logical points. In this paper, we propose LOREN, a novel approach for fact verification that integrates both Logic-guided Reasoning and Neural inference. The key insight of LOREN is that it decomposes a statement into multiple reasoning units around its central phrases. Instead of directly validating a single reasoning unit, LOREN turns verification into a question-answering task and calculates the confidence of every single hypothesis using neural networks in the embedding space. These confidences are then aggregated into a final prediction by a neural joint reasoner guided by a set of three-valued logic rules. LOREN enjoys the additional merit of interpretability: it is easy to explain how it reaches a certain verdict from its intermediate results and why it makes mistakes. We evaluate LOREN on FEVER, a public benchmark for fact verification. Experiments show that LOREN outperforms previously published methods and achieves a 73.43% FEVER score.
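To make the aggregation step concrete, below is a minimal sketch (not the authors' implementation) of how phrase-level verdicts could be combined under three-valued logic rules of this kind. The `Label` enum and `aggregate` function are hypothetical names introduced for illustration, and LOREN itself reasons over soft (probabilistic) phrase labels rather than the hard labels used here.

```python
from enum import Enum


class Label(Enum):
    SUPPORTED = "SUPPORTED"
    REFUTED = "REFUTED"
    NEI = "NOT ENOUGH INFO"


def aggregate(phrase_labels: list[Label]) -> Label:
    """Combine per-phrase verdicts into a claim-level verdict.

    Assumed three-valued rules: the claim is REFUTED if any phrase is
    refuted, SUPPORTED only if every phrase is supported, and
    NOT ENOUGH INFO otherwise.
    """
    if any(label is Label.REFUTED for label in phrase_labels):
        return Label.REFUTED
    if all(label is Label.SUPPORTED for label in phrase_labels):
        return Label.SUPPORTED
    return Label.NEI


# Example: a single unverifiable phrase downgrades the whole claim to NEI.
print(aggregate([Label.SUPPORTED, Label.NEI]))  # Label.NEI
```

In the full model, this hard rule would be relaxed into a differentiable form so the joint reasoner can be trained end to end with the neural components.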
Fact verification is a challenging task that requires simultaneously reasoning and aggregating over multiple retrieved pieces of evidence to evaluate the truthfulness of a claim. Existing approaches typically (i) explore the semantic interaction betw
Logical reasoning, which is closely related to human cognition, is of vital importance to humans' understanding of texts. Recent years have witnessed increasing attention to machines' logical reasoning abilities. However, previous studies commonly app
The increasing concern with misinformation has stimulated research efforts on automatic fact checking. The recently-released FEVER dataset introduced a benchmark fact-verification task in which a system is asked to verify a claim using evidential sen
Recent years have witnessed the success of deep neural networks in many research areas. The fundamental idea behind the design of most neural networks is to learn similarity patterns from data for prediction and inference, which lacks the ability of
Despite significant interest in developing general-purpose fact-checking models, it is challenging to construct a large-scale fact verification dataset with realistic claims that would occur in the real world. Existing claims are either authored by c