The current public anxiety about disinformation, as manifested in so-called fake news, is acutely displayed in the reaction to recent events prompted by belief in conspiracies among certain groups. A model for dealing with disinformation is proposed, based on a demonstration that disinformation behaves analogously to wave phenomena. Two criteria form the basis for combating its deleterious effects: the use of a refractive medium based on skepticism as the default mode, and polarization as a filter mechanism for analyzing its merits based on evidence. Critical thinking is enhanced because the first tackles the pernicious effect of confirmation bias, and the second the tendency towards attribution, both of which undermine our efforts to think and act rationally. The benefits of such a strategy include an epistemic reformulation of disinformation as an independently existing phenomenon, which removes the negative connotations it acquires when perceived as being possessed by groups or individuals.
Over the past three years it has become evident that fake news is a danger to democracy. However, until now there has been no clear understanding of how to define fake news, much less how to model it. This paper addresses both these issues. A definition of fake news is given, and two approaches for the modelling of fake news and its impact in elections and referendums are introduced. The first approach, based on the idea of a representative voter, is shown to be suitable to obtain a qualitative understanding of phenomena associated with fake news at a macroscopic level. The second approach, based on the idea of an election microstructure, describes the collective behaviour of the electorate by modelling the preferences of individual voters. It is shown through a simulation study that the mere knowledge that pieces of fake news may be in circulation goes a long way towards mitigating the impact of fake news.
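As a hedged illustration of the simulation finding mentioned above, a toy Bayesian-discounting model can show the mitigating effect of merely knowing that fake news may circulate. This sketch is not the paper's representative-voter or election-microstructure model; the signal structure, the parameters `p_fake`, `noise`, and `prior`, and the update rule are all assumptions made for illustration:

```python
import random

def simulate_election(n_voters=10_000, p_fake=0.5, noise=0.1,
                      prior=0.7, aware=False, seed=42):
    """Toy sketch (NOT the paper's model): the candidate is actually
    good. Each voter sees one binary signal; with probability p_fake it
    is a fabricated negative story, otherwise it is truthful but wrong
    with probability `noise`. A naive voter takes the signal at face
    value; an aware voter knows p_fake and updates a prior belief by
    Bayes' rule before voting."""
    rng = random.Random(seed)
    # Signal likelihoods under each hypothesis about the candidate.
    p_pos_good = (1 - p_fake) * (1 - noise)
    p_pos_bad = (1 - p_fake) * noise
    p_neg_good = p_fake + (1 - p_fake) * noise
    p_neg_bad = p_fake + (1 - p_fake) * (1 - noise)
    votes_for = 0
    for _ in range(n_voters):
        if rng.random() < p_fake:
            positive = False                 # fabricated negative story
        else:
            positive = rng.random() > noise  # truthful but noisy signal
        if aware:
            like_good = p_pos_good if positive else p_neg_good
            like_bad = p_pos_bad if positive else p_neg_bad
            # Posterior probability that the candidate is good.
            p_good = (prior * like_good
                      / (prior * like_good + (1 - prior) * like_bad))
            votes_for += p_good > 0.5
        else:
            votes_for += positive
    return votes_for / n_voters

naive_support = simulate_election(aware=False)
aware_support = simulate_election(aware=True)
```

Under these (assumed) parameters, voters who account for the possibility of fabrication discount negative stories, so their support for the good candidate exceeds that of naive voters, which is the qualitative effect the abstract describes.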
Disinformation through fake news is an ongoing problem in our society and is spread easily through social media. The most cost- and time-effective way to filter these large amounts of data is a combination of human and technical interventions. From a technical perspective, Natural Language Processing (NLP) is widely used to detect fake news. Social media companies use NLP techniques to identify fake news and warn their users, but fake news may still slip through undetected. This is especially a problem in more localised contexts (outside the United States of America). How do we adjust fake news detection systems to work better for local contexts such as South Africa? In this work we investigate fake news detection on South African websites. We curate a dataset of South African fake news and train detection models on it. We contrast this with using widely available fake news datasets (mostly from USA websites). We also explore making the datasets more diverse by combining them, and use interpretable machine learning to observe differences between nations in how fake news is written.
Automatically identifying fake news on the Internet is a challenging problem in deception detection. Online news is modified constantly during its propagation; for example, malicious users distort the original truth and make up fake news. This continuous evolution process generates unprecedented fake news that can cheat the original model. We present the Fake News Evolution (FNE) dataset, a new dataset tracking the fake news evolution process. The dataset comprises 950 paired samples, each consisting of articles representing the three significant phases of the evolution process: the truth, the fake news, and the evolved fake news. We observe features across the evolution, including disinformation techniques, text similarity, top-10 keywords, classification accuracy, parts of speech, and sentiment properties.
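As an illustrative sketch (the three texts below are invented stand-ins for a truth/fake/evolved triple, and the token-set Jaccard measure is one assumed choice of similarity), two of the listed features, text similarity and top keywords, can be computed like this:

```python
from collections import Counter

def tokens(text):
    """Lowercased word tokens with trailing punctuation stripped."""
    return [w.strip(".,").lower() for w in text.split()]

def jaccard(a, b):
    """Token-set Jaccard similarity: |A ∩ B| / |A ∪ B|."""
    sa, sb = set(tokens(a)), set(tokens(b))
    return len(sa & sb) / len(sa | sb)

def top_keywords(text, k=10):
    """The k most frequent tokens, as a crude keyword list."""
    return [w for w, _ in Counter(tokens(text)).most_common(k)]

truth = "the ministry reported a modest rise in regional exports"
fake = "the ministry hid a massive collapse in regional exports"
evolved = "leaked files prove the ministry hid the collapse of all exports"

sim_truth_fake = jaccard(truth, fake)
sim_fake_evolved = jaccard(fake, evolved)
```

Tracking how such similarities and keyword overlaps change from the truth to the fake to the evolved version is one way to quantify the drift the dataset is designed to capture.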
At least since the advent of the Internet, disinformation and conspiracy theories have been ubiquitous. Recent examples like QAnon and Pizzagate prove that false information can lead to real violence. In this motivation statement for the Workshop on Human Aspects of Misinformation at CHI 2021, I explain my research agenda, which focuses on (1) why people believe in disinformation, (2) how people can best be supported in recognizing disinformation, and (3) what the potentials and risks of different tools designed to fight disinformation are.
This paper explores various models for fake news detection. We apply several machine learning algorithms, using feature-extraction techniques such as TF-IDF, count vectorization (CV), and Word2Vec (W2V) to process the textual data.
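As a minimal sketch of one of these feature extractors, the standard TF-IDF weighting can be computed by hand as follows (the corpus below is a toy assumption, and the smoothing convention `idf = log(N/df) + 1` is one common choice):

```python
import math
from collections import Counter

def tfidf(corpus):
    """Per-document TF-IDF weights.
    tf = term count / document length; idf = log(N / df) + 1,
    where df is the number of documents containing the term."""
    n = len(corpus)
    docs = [doc.lower().split() for doc in corpus]
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        counts = Counter(doc)
        weights.append({
            term: (count / len(doc)) * (math.log(n / df[term]) + 1)
            for term, count in counts.items()
        })
    return weights

corpus = ["breaking shocking claim", "official report released",
          "shocking report disputed"]
w = tfidf(corpus)
```

Terms that occur in fewer documents receive a higher weight, which is why sensationalist vocabulary concentrated in fake articles tends to become a discriminative feature for the downstream classifier.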