
Weak Supervision for Fake News Detection via Reinforcement Learning

Published by: Yaqing Wang
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





Today, social media has become the primary source of news. Via social media platforms, fake news travels at unprecedented speed, reaches global audiences and puts users and communities at great risk. It is therefore extremely important to detect fake news as early as possible. Recently, deep learning based approaches have shown improved performance in fake news detection. However, training such models requires a large amount of labeled data, and manual annotation is time-consuming and expensive. Moreover, due to the dynamic nature of news, annotated samples may become outdated quickly and cannot represent news articles on newly emerged events. Obtaining fresh and high-quality labeled samples is therefore the major challenge in employing deep learning models for fake news detection. To tackle this challenge, we propose a reinforced weakly-supervised fake news detection framework, WeFEND, which leverages users' reports as weak supervision to enlarge the amount of training data for fake news detection. The proposed framework consists of three main components: the annotator, the reinforced selector and the fake news detector. The annotator automatically assigns weak labels to unlabeled news based on users' reports. The reinforced selector uses reinforcement learning techniques to choose high-quality samples from the weakly labeled data and to filter out low-quality ones that may degrade the detector's prediction performance. The fake news detector identifies fake news based on the news content. We tested the proposed framework on a large collection of news articles published via WeChat official accounts and the associated user reports. Extensive experiments on this dataset show that the proposed WeFEND model achieves the best performance compared with state-of-the-art methods.
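The core of WeFEND is the reinforced selector, which treats sample selection as a decision problem rewarded by detector performance. The sketch below shows one plausible REINFORCE-style update for such a selector; the network architecture, feature dimension, and the reward_fn interface (returning, say, the change in the detector's validation accuracy) are illustrative assumptions, not the paper's published code.

```python
import torch
import torch.nn as nn

class Selector(nn.Module):
    """Policy network: scores the quality of a weakly labeled sample in [0, 1]."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, feats):
        return self.net(feats).squeeze(-1)  # keep-probability per sample

def reinforce_step(selector, optimizer, feats, reward_fn):
    # Sample keep/drop actions for a batch of weakly labeled news items.
    probs = selector(feats)
    dist = torch.distributions.Bernoulli(probs)
    actions = dist.sample()            # 1 = keep the sample, 0 = filter it out
    # reward_fn (hypothetical): evaluates the detector trained on the kept
    # samples and returns a scalar, e.g. the change in validation accuracy.
    reward = reward_fn(actions)
    loss = -(dist.log_prob(actions) * reward).mean()  # REINFORCE objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return actions
```

In this setup the annotator's weak labels influence training only through the detector: the selector is rewarded when the samples it keeps improve held-out performance, so low-quality weak labels are gradually filtered out.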




Read also

Bo Ni, Zhichun Guo, Jianing Li (2020)
Recently, due to the booming influence of online social networks, detecting fake news has drawn significant attention from both academic communities and the general public. In this paper, we consider the existence of confounding variables in the features of fake news and use Propensity Score Matching (PSM) to select generalizable features in order to reduce the effects of the confounding variables. Experimental results show that the generalizability of fake news detection methods is significantly better when features are selected with PSM than with raw frequency. We investigate multiple types of fake news classifiers, such as logistic regression, random forests, and support vector machines, and consistently observe performance improvement.
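To make the PSM idea concrete, the sketch below tests whether a single candidate feature (treated as a binary "treatment", e.g. the presence of a word) remains associated with the fake/real label after matching on a propensity score. The nearest-neighbour matching and all names are illustrative choices, not the paper's exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def matched_effect(confounders, treatment, label):
    """confounders: [n, d] remaining features; treatment: [n] binary presence
    of the candidate feature; label: [n] binary fake/real.
    Returns the average outcome difference over matched pairs."""
    # 1. Propensity score: P(treatment = 1 | confounders)
    ps = LogisticRegression(max_iter=1000).fit(
        confounders, treatment).predict_proba(confounders)[:, 1]
    treated = np.flatnonzero(treatment == 1)
    control = np.flatnonzero(treatment == 0)
    # 2. Match each treated sample to the control with the closest score
    diffs = []
    for i in treated:
        j = control[np.argmin(np.abs(ps[control] - ps[i]))]
        diffs.append(label[i] - label[j])
    # 3. A large effect surviving the matching suggests the feature is
    #    genuinely predictive rather than an artifact of confounding
    return float(np.mean(diffs))
```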
Hao Liao, Qixin Liu, Kai Shu (2020)
Disinformation has long been regarded as a severe social problem, and fake news is one of its most representative issues. Worse, today's highly developed social media lets fake news spread at incredible speed, bringing substantial harm to many aspects of human life. Yet the popularity of social media also provides opportunities to better detect fake news. Unlike conventional means that focus merely on either content or user comments, effective collaboration of heterogeneous social media information, including content and context factors of news, users' comments and the engagement of social media with users, will hopefully give rise to better detection of fake news. Motivated by these observations, we propose a novel detection framework, the graph comment-user advanced learning framework (GCAL). User-comment information is crucial but not well studied in fake news detection; we therefore model user-comment context through network representation learning based on a heterogeneous graph neural network. Experiments on two real-world datasets demonstrate that the proposed joint model outperforms 8 state-of-the-art baseline methods for fake news detection (by at least 4% in Accuracy, 7% in Recall and 5% in F1). Moreover, the proposed method is also explainable.
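As a rough illustration of what "heterogeneous" means here, the layer below aggregates comment and user embeddings into news-node representations with a separate projection per relation. The relation set, sizes, and aggregation rule are assumptions for the sketch, not GCAL's actual architecture.

```python
import torch
import torch.nn as nn

class HeteroLayer(nn.Module):
    """One message-passing step over a news/comment/user graph."""
    def __init__(self, dim: int):
        super().__init__()
        # one projection per (source type -> news) relation
        self.from_comment = nn.Linear(dim, dim)
        self.from_user = nn.Linear(dim, dim)
        self.self_loop = nn.Linear(dim, dim)

    def forward(self, h_news, h_comment, h_user, adj_nc, adj_nu):
        # adj_nc: [num_news, num_comments], adj_nu: [num_news, num_users]
        msg_c = adj_nc @ self.from_comment(h_comment)  # aggregate comments
        msg_u = adj_nu @ self.from_user(h_user)        # aggregate users
        return torch.relu(self.self_loop(h_news) + msg_c + msg_u)
```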
Fake news travels at unprecedented speed, reaches global audiences and puts users and communities at great risk via social media platforms. Deep learning based models show good performance when trained on large amounts of labeled data for events of interest, whereas their performance tends to degrade on other events due to domain shift. Significant challenges are therefore posed for existing detection approaches on emergent events, where large-scale labeled datasets are difficult to obtain. Moreover, incorporating knowledge from newly emergent events requires building a new model from scratch or continuing to fine-tune the model, which can be challenging, expensive, and unrealistic for real-world settings. To address these challenges, we propose an end-to-end fake news detection framework named MetaFEND, which is able to learn quickly to detect fake news on emergent events from a few verified posts. Specifically, the proposed model integrates meta-learning and neural process methods to enjoy the benefits of both approaches. In particular, a label embedding module and a hard attention mechanism are proposed to enhance effectiveness by handling categorical information and trimming irrelevant posts. Extensive experiments are conducted on multimedia datasets collected from Twitter and Weibo. The experimental results show that the proposed MetaFEND model can detect fake news on never-seen events effectively and outperforms the state-of-the-art methods.
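The hard-attention idea, trimming irrelevant posts outright rather than softly down-weighting them, can be sketched as a top-k selection. The dot-product similarity and the value of k below are illustrative assumptions, not MetaFEND's exact mechanism.

```python
import torch

def hard_attention(query, support, top_k=3):
    """query: [d] embedding of the post to classify; support: [n, d]
    embeddings of candidate reference posts. Returns the k most relevant."""
    scores = support @ query                 # dot-product relevance scores
    idx = scores.topk(min(top_k, support.size(0))).indices
    return support[idx]                      # irrelevant posts are trimmed
```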
Yi Han, Amila Silva, Ling Luo (2021)
Recent years have witnessed the significant damage caused by various types of fake news. Although considerable effort has been applied to address this issue and much progress has been made on detecting fake news, most existing approaches mainly rely on the textual content and/or social context, while knowledge-level information (entities extracted from the news content and the relations between them) is much less explored. Within the limited work on knowledge-based fake news detection, an external knowledge graph is often required, which may introduce additional problems: it is quite common for entities and relations, especially for new concepts, to be missing from existing knowledge graphs, and both entity prediction and link prediction are open research questions themselves. Therefore, in this work, we investigate knowledge-based fake news detection that does not require any external knowledge graph. Specifically, our contributions include: (1) transforming the problem of detecting fake news into a subgraph classification task: entities and relations are extracted from each news item to form a knowledge graph in which each news item is represented by a subgraph, and a graph neural network (GNN) model is trained to classify each subgraph/news item; (2) further improving the performance of this model through a simple but effective multi-modal technique that combines extracted knowledge, textual content and social context. Experiments on multiple datasets with thousands of labelled news items demonstrate that our knowledge-based algorithm outperforms existing counterpart methods, and its performance can be further boosted by the multi-modal approach.
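A minimal sketch of the subgraph-classification formulation follows: each news item becomes a small entity-relation graph, and a graph-level readout feeds a classifier. The two-layer architecture, mean readout, and dense adjacency are assumptions for illustration, not the paper's exact model.

```python
import torch
import torch.nn as nn

class SubgraphClassifier(nn.Module):
    """Classifies one news item represented as an entity-relation subgraph."""
    def __init__(self, dim: int):
        super().__init__()
        self.gnn1 = nn.Linear(dim, dim)
        self.gnn2 = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, 2)          # fake vs. real

    def forward(self, node_feats, adj):
        # node_feats: [n, dim] entity embeddings; adj: [n, n] adjacency
        # built from the entity-relation triples extracted from the news item
        h = torch.relu(self.gnn1(adj @ node_feats))
        h = torch.relu(self.gnn2(adj @ h))
        return self.out(h.mean(dim=0))        # graph-level readout, then logits
```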
Although significant effort has been applied to fact-checking, the prevalence of fake news over social media, which has a profound impact on justice, public trust and our society, remains a serious problem. In this work, we focus on propagation-based fake news detection, as recent studies have demonstrated that fake news and real news spread differently online. Specifically, considering the capability of graph neural networks (GNNs) in dealing with non-Euclidean data, we use GNNs to differentiate between the propagation patterns of fake and real news on social media. In particular, we concentrate on two questions: (1) Without relying on any text information, e.g., tweet content, replies and user descriptions, how accurately can GNNs identify fake news? Machine learning models are known to be vulnerable to adversarial attacks, and avoiding dependence on text-based features makes the model less susceptible to manipulation by advanced fake news fabricators. (2) How to deal with new, unseen data? In other words, how does a GNN trained on a given dataset perform on a new and potentially vastly different dataset, and if it achieves unsatisfactory performance, how do we solve the problem without re-training the model on the entire data from scratch? We study these questions on two datasets with thousands of labelled news items, and our results show that: (1) GNNs can achieve performance comparable or superior to state-of-the-art methods without any text information. (2) GNNs trained on a given dataset may perform poorly on new, unseen data, and direct incremental training cannot solve the problem; this issue has not been addressed in previous work applying GNNs to fake news detection. To solve it, we propose a method that achieves balanced performance on both existing and new datasets by using techniques from continual learning to train GNNs incrementally.
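The abstract does not specify which continual-learning technique is used, so the sketch below shows one generic option, experience replay: a small memory of old samples is mixed into each update on new data so the model keeps its performance on the original dataset. All names, sizes, and the memory-update rule are illustrative.

```python
import random

def train_incrementally(train_step, memory, new_batches,
                        replay_k=8, memory_cap=500):
    """train_step: performs one gradient update on a list of samples;
    memory: buffer of samples retained from previously seen datasets."""
    for batch in new_batches:
        replay = random.sample(memory, min(replay_k, len(memory)))
        train_step(batch + replay)        # mix new and replayed old samples
        for sample in batch:              # keep the memory small but mixed
            if len(memory) < memory_cap:
                memory.append(sample)
            elif random.random() < 0.05:  # occasionally overwrite an old slot
                memory[random.randrange(memory_cap)] = sample
```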
