
Controlling Fake News by Tagging: A Branching Process Analysis

Added by Khushboo Agarwal
Publication date: 2020
Language: English





The spread of fake news, especially on online social networks, has become a matter of concern in recent years. These platforms are also used to propagate important authentic information, so there is a need to mitigate fake news without significantly influencing the spread of real news. We leverage users' inherent capability of identifying fake news and propose a warning-based control mechanism to curb this spread. Warnings are based on previous users' responses that indicate the authenticity of the news. We use population-size-dependent continuous-time multi-type branching processes to describe the spread under the warning mechanism, and we derive new results for this class of branching processes: the (time-)asymptotic proportions of the individual populations. These results are instrumental in deriving relevant type-1 and type-2 performance measures and in formulating an optimization problem to design optimal warning parameters. The fraction of copies tagged as real (fake) is taken as the type-1 (type-2) performance measure. We derive structural properties of the performance measures, which help simplify the optimization problem. Finally, we demonstrate that the optimal warning mechanism effectively mitigates fake news with negligible influence on the propagation of authentic news. We validate the performance measures using Monte Carlo simulations on an ego-network dataset from Twitter.
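As a rough, self-contained illustration of the model (not the paper's exact dynamics: the linear warning rule, the bootstrap warning level w_min, and the parameter names eta and w_bar are our assumptions), the following sketch simulates a two-type spread in which the warning shown to each new recipient grows with the fraction of earlier recipients who tagged the item as fake:

```python
import random

# Minimal Monte Carlo sketch of a two-type spread under a warning mechanism.
# This is an illustrative simplification, not the paper's model: the linear
# warning rule and all parameter names below are assumptions. Since every copy
# forwards at the same rate, tracking the embedded jump chain is enough to
# estimate the asymptotic proportions of the two types.

def simulate(news_is_fake, eta=0.8, w_bar=0.6, w_min=0.1, horizon=2000, seed=0):
    """Return the fractions of copies tagged real / fake after `horizon` shares.

    eta:   probability that a warned user judges authenticity correctly
           (the users' inherent capability of identifying fake news).
    w_bar: warning intensity, the designer's control parameter.
    w_min: small base warning so the process is not stuck at zero warnings.
    """
    rng = random.Random(seed)
    tagged_real, tagged_fake = 1, 0          # the seed copy is sent as "real"
    while tagged_real + tagged_fake < horizon:
        total = tagged_real + tagged_fake
        # Population-size dependence: the warning grows with the fraction
        # of previous recipients who flagged the item as fake.
        warning = min(1.0, w_min + w_bar * tagged_fake / total)
        if rng.random() < warning:           # user heeds the warning ...
            correct = rng.random() < eta     # ... and judges on their own
            says_fake = news_is_fake if correct else not news_is_fake
        else:
            says_fake = False                # unwarned users forward as real
        if says_fake:
            tagged_fake += 1
        else:
            tagged_real += 1
    return tagged_real / horizon, tagged_fake / horizon

real_frac, fake_frac = simulate(news_is_fake=True)
print(f"tagged real: {real_frac:.2f}, tagged fake: {fake_frac:.2f}")
```

In this sketch, raising w_bar suppresses the tagged-real population of a fake item more strongly, but also makes some warned users mistakenly flag authentic items, which mirrors the type-1/type-2 trade-off the paper's optimization over warning parameters captures.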



Related Research


Yi Han, Amila Silva, Ling Luo (2021)
Recent years have witnessed the significant damage caused by various types of fake news. Although considerable effort has been applied to address this issue and much progress has been made on detecting fake news, most existing approaches mainly rely on the textual content and/or social context, while knowledge-level information (entities extracted from the news content and the relations between them) is much less explored. Within the limited work on knowledge-based fake news detection, an external knowledge graph is often required, which may introduce additional problems: it is quite common for entities and relations, especially with respect to new concepts, to be missing in existing knowledge graphs, and both entity prediction and link prediction are open research questions themselves. Therefore, in this work, we investigate knowledge-based fake news detection that does not require any external knowledge graph. Specifically, our contributions include: (1) transforming the problem of detecting fake news into a subgraph classification task: entities and relations are extracted from each news item to form a single knowledge graph, where a news item is represented by a subgraph; a graph neural network (GNN) model is then trained to classify each subgraph/news item. (2) Further improving the performance of this model through a simple but effective multi-modal technique that combines extracted knowledge, textual content and social context. Experiments on multiple datasets with thousands of labelled news items demonstrate that our knowledge-based algorithm outperforms existing methods, and its performance can be further boosted by the multi-modal approach.
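A minimal sketch of contribution (1) in PyTorch Geometric (the feature dimensions, two-layer GCN and mean pooling are our illustrative choices, not the authors' architecture):

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

# Subgraph classification sketch: each news item becomes one small graph of
# extracted entities (nodes) and relations (edges); a GNN classifies the graph.

class NewsSubgraphGNN(torch.nn.Module):
    def __init__(self, in_dim=128, hidden=64, n_classes=2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, n_classes)

    def forward(self, x, edge_index, batch):
        # x: entity embeddings, edge_index: extracted relations,
        # batch: maps each node to its news item (one subgraph per item)
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        g = global_mean_pool(h, batch)   # one vector per news item
        return self.head(g)              # fake / real logits
```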
COVID-19 has impacted all lives. To maintain social distancing and avoid exposure, work and life have gradually moved online. Under this trend, the use of social media to obtain COVID-19 news has increased. At the same time, misinformation about COVID-19 is frequently spread on social media. In this work, we develop CHECKED, the first Chinese dataset on COVID-19 misinformation. CHECKED provides a total of 2,104 verified microblogs related to COVID-19 from December 2019 to August 2020, identified using a specific list of keywords. Correspondingly, CHECKED includes 1,868,175 reposts, 1,185,702 comments, and 56,852,736 likes that reveal how these verified microblogs were spread and reacted to on Weibo. The dataset contains a rich set of multimedia information for each microblog, including the ground-truth label and textual, visual, temporal, and network information. Extensive experiments have been conducted to analyze the CHECKED data and to provide benchmark results for well-established fake news detection methods on it. We hope that CHECKED can facilitate studies targeting misinformation on the coronavirus. The dataset is available at https://github.com/cyang03/CHECKED.
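A hypothetical loading sketch (the actual file layout and field names in the repository may differ; consult its README):

```python
import json
import glob

# Hypothetical loader for CHECKED: the directory layout ("CHECKED/data/*.json")
# and the field names ("label", "text", "reposts") are assumptions for
# illustration, not the repository's documented schema.
records = []
for path in glob.glob("CHECKED/data/*.json"):
    with open(path, encoding="utf-8") as f:
        item = json.load(f)
    records.append({
        "label": item.get("label"),      # ground truth: fake / real
        "text": item.get("text"),        # microblog content
        "reposts": item.get("reposts"),  # propagation information
    })
print(f"loaded {len(records)} verified microblogs")
```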
Hao Liao, Qixin Liu, Kai Shu (2020)
Disinformation has long been regarded as a severe social problem, and fake news is one of its most representative forms. What is worse, today's highly developed social media lets fake news spread at incredible speed, causing substantial harm to various aspects of human life. Yet the popularity of social media also provides opportunities to better detect fake news. Unlike conventional means that focus merely on either content or user comments, effective collaboration of heterogeneous social media information, including content and context factors of the news, users' comments, and the engagement of social media with users, will hopefully give rise to better fake news detection. Motivated by these observations, a novel detection framework, namely the graph comment-user advanced learning (GCAL) framework, is proposed in this paper. User-comment information is crucial but not well studied in fake news detection. Thus, we model the user-comment context through network representation learning based on a heterogeneous graph neural network. We conduct experiments on two real-world datasets, which demonstrate that the proposed joint model outperforms 8 state-of-the-art baseline methods for fake news detection (by at least 4% in accuracy, 7% in recall and 5% in F1). Moreover, the proposed method is also explainable.
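As a simplified sketch of the heterogeneous-graph idea (our own reduction, not the GCAL architecture: the node/edge types, SAGE convolutions and single message-passing layer are assumptions), user-comment-news message passing could look like this in PyTorch Geometric:

```python
import torch
from torch_geometric.nn import HeteroConv, SAGEConv

# Heterogeneous message passing: users inform comments, comments inform news,
# and the news representation is classified as fake or real. Lazy (-1, -1)
# input sizes let SAGEConv infer feature dimensions on the first forward pass.

class HeteroFakeNews(torch.nn.Module):
    def __init__(self, hidden=64, n_classes=2):
        super().__init__()
        self.conv = HeteroConv({
            ("user", "writes", "comment"): SAGEConv((-1, -1), hidden),
            ("comment", "discusses", "news"): SAGEConv((-1, -1), hidden),
        }, aggr="mean")
        self.head = torch.nn.Linear(hidden, n_classes)

    def forward(self, x_dict, edge_index_dict):
        # x_dict holds features per node type; edge_index_dict per edge type.
        h = self.conv(x_dict, edge_index_dict)
        return self.head(h["news"].relu())   # fake / real logits per news item
```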
Today's social media platforms enable both authentic and fake news to spread very quickly. Some approaches have been proposed to automatically detect such fake news based on their content, but it is difficult to agree on universal criteria of authenticity (which can be bypassed by adversaries once known). Besides, it is obviously impossible to have each news item checked by a human. In this paper, we propose a mechanism to limit the spread of fake news that is not based on content. It can be implemented as a plugin on a social media platform. The principle is as follows: a team of fact-checkers reviews a small number of news items (the most popular ones), which yields an estimation of each user's inclination to share fake news items. Then, using a Bayesian approach, we estimate the trustworthiness of future news items and act accordingly on those that pass a certain untrustworthiness threshold. We evaluate the effectiveness and overhead of this technique on a large Twitter graph. We show that having a few thousand users exposed to a given news item enables a very precise estimation of its reliability. We thus identify more than 99% of fake news items with no false positives. The performance impact is very small: the induced overhead on the 90th-percentile latency is less than 3%, and less than 8% on the throughput of user operations.
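A toy formulation of the Bayesian scoring step (our naive-Bayes-style simplification, not the paper's exact estimator; the per-user probabilities and the prior below are made up for illustration):

```python
import math

# Toy Bayesian score: each user u has estimated probabilities p_fake[u] /
# p_real[u] of sharing a fake / real item, learned from the fact-checked
# (popular) items they shared. Each observed share of a new item then
# contributes a log-likelihood ratio to the item's log-odds of being fake.

def log_odds_fake(sharers, p_fake, p_real, prior_fake=0.1):
    lo = math.log(prior_fake / (1 - prior_fake))
    for u in sharers:
        lo += math.log(p_fake[u] / p_real[u])   # each share is evidence
    return lo

# Made-up per-user sharing profiles estimated from fact-checked items.
p_fake = {"alice": 0.30, "bob": 0.05, "carol": 0.25}
p_real = {"alice": 0.10, "bob": 0.20, "carol": 0.08}

lo = log_odds_fake(["alice", "carol"], p_fake, p_real)
posterior = 1 / (1 + math.exp(-lo))
print(f"P(fake | sharers) = {posterior:.2f}")
# Flag the item once this posterior passes the untrustworthiness threshold.
```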
Today, social media has become the primary source of news. Via social media platforms, fake news travels at unprecedented speed, reaches global audiences and puts users and communities at great risk. It is therefore extremely important to detect fake news as early as possible. Recently, deep-learning-based approaches have shown improved performance in fake news detection. However, training such models requires a large amount of labeled data, and manual annotation is time-consuming and expensive. Moreover, due to the dynamic nature of news, annotated samples may become outdated quickly and cannot represent news articles on newly emerged events. Therefore, how to obtain fresh and high-quality labeled samples is the major challenge in employing deep learning models for fake news detection. To tackle this challenge, we propose a reinforced weakly-supervised fake news detection framework, WeFEND, which can leverage users' reports as weak supervision to enlarge the amount of training data for fake news detection. The proposed framework consists of three main components: the annotator, the reinforced selector and the fake news detector. The annotator automatically assigns weak labels to unlabeled news based on users' reports. The reinforced selector uses reinforcement learning techniques to choose high-quality samples from the weakly labeled data and filter out low-quality ones that may degrade the detector's prediction performance. The fake news detector aims to identify fake news based on the news content. We tested the proposed framework on a large collection of news articles published via WeChat official accounts and the associated user reports. Extensive experiments on this dataset show that the proposed WeFEND model achieves the best performance compared with state-of-the-art methods.
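A skeleton of the three-component pipeline as described above (our paraphrase; the report-count rule, the quality callable and all names are placeholders, not the authors' models):

```python
# Placeholder skeleton of the annotator -> selector -> detector pipeline.

def annotate(unlabeled_news, reports, min_reports=3):
    """Annotator: weakly label an article as fake if it attracts enough
    user reports (this counting rule is a stand-in for the real annotator)."""
    counts = {}
    for r in reports:
        counts[r["news_id"]] = counts.get(r["news_id"], 0) + 1
    return [(a, int(counts.get(a["id"], 0) >= min_reports))  # 1 = fake
            for a in unlabeled_news]

def select(weak_samples, quality, threshold=0.5):
    """Reinforced selector (stub): in WeFEND this is an RL policy; here
    `quality` is any callable scoring how reliable a weak label is."""
    return [(a, y) for a, y in weak_samples if quality(a, y) >= threshold]

def train_detector(samples):
    """Detector (stub): train any content-based text classifier on the
    selected (article, weak label) pairs."""
    raise NotImplementedError
```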