
Conspiracy in the Time of Corona: Automatic detection of Covid-19 Conspiracy Theories in Social Media and the News

 Added by Shadi Shahsavari
 Publication date 2020
Language: English





Rumors and conspiracy theories thrive in environments of low confidence and low trust. Consequently, it is not surprising that ones related to the Covid-19 pandemic are proliferating given the lack of any authoritative scientific consensus on the virus, its spread and containment, or on the long-term social and economic ramifications of the pandemic. Among the stories currently circulating are ones suggesting that the 5G network activates the virus, that the pandemic is a hoax perpetrated by a global cabal, that the virus is a bio-weapon released deliberately by the Chinese, or that Bill Gates is using it as cover to launch a global surveillance regime. While some may be quick to dismiss these stories as having little impact on real-world behavior, recent events including the destruction of property, racially fueled attacks against Asian Americans, and demonstrations espousing resistance to public health orders countermand such conclusions. Inspired by narrative theory, we crawl social media sites and news reports and, through the application of automated machine-learning methods, discover the underlying narrative frameworks supporting the generation of these stories. We show how the various narrative frameworks fueling rumors and conspiracy theories rely on the alignment of otherwise disparate domains of knowledge, and consider how they attach to the broader reporting on the pandemic. These alignments and attachments, which can be monitored in near real-time, may be useful for identifying areas in the news that are particularly vulnerable to reinterpretation by conspiracy theorists. Understanding the dynamics of storytelling on social media and the narrative frameworks that provide the generative basis for these stories may also be helpful for devising methods to disrupt their spread.
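The cross-domain alignments the abstract describes can be approximated with a simple co-occurrence count over posts. A minimal sketch, assuming invented domain lexicons and toy posts (none of these keywords or posts come from the paper's data):

```python
from collections import Counter
from itertools import combinations

# Hypothetical keyword lexicons for a few knowledge domains (illustrative only).
DOMAINS = {
    "telecom": {"5g", "tower", "radiation"},
    "public_health": {"virus", "vaccine", "pandemic"},
    "finance": {"gates", "cabal", "elite"},
}

def domains_in(post):
    """Return the set of domains whose keywords appear in the post."""
    words = set(post.lower().split())
    return {name for name, kws in DOMAINS.items() if words & kws}

def alignment_edges(posts):
    """Count how often pairs of distinct domains co-occur in a single post.

    High counts between otherwise unrelated domains are candidate narrative
    alignments (e.g. telecom <-> public_health in the 5G rumor).
    """
    edges = Counter()
    for post in posts:
        for pair in combinations(sorted(domains_in(post)), 2):
            edges[pair] += 1
    return edges

posts = [
    "they say the 5g tower radiation activates the virus",
    "the cabal wants a vaccine so gates can track everyone",
    "a new 5g tower is going up downtown",
]
edges = alignment_edges(posts)
```

A real pipeline would extract actants and relationships from parsed text rather than matching fixed keywords, but the monitoring idea is the same: watch for edges appearing between previously unconnected domains.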




Related research

Parents - particularly moms - increasingly consult social media for support when making decisions about their young children, and likely also when advising other family members such as elderly relatives. Minimizing malignant online influences is therefore crucial to securing their assent for policies ranging from vaccinations, masks and social distancing against the pandemic, to household best practices against climate change, to acceptance of future 5G towers nearby. Here we show how a strengthening of bonds across online communities during the pandemic has led to non-Covid-19 conspiracy theories (e.g. fluoride, chemtrails, 5G) attaining heightened access to mainstream parent communities. Alternative health communities act as the critical conduits between conspiracy theorists and parents, and make the narratives more palatable to the latter. We demonstrate experimentally that these inter-community bonds can perpetually generate new misinformation, irrespective of any changes in factual information. Our findings show explicitly why Facebook's current policies have failed to stop the mainstreaming of non-Covid-19 and Covid-19 conspiracy theories and misinformation, and why targeting the largest communities will not work. A simple yet exactly solvable and empirically grounded mathematical model shows how modest tailoring of mainstream communities' couplings could prevent them from tipping against establishment guidance. Our conclusions should also apply to other social media platforms and topics.
Although a great deal of attention has been paid to how conspiracy theories circulate on social media and to their factual counterpart, conspiracies, there has been little computational work done on describing their narrative structures. We present an automated pipeline for the discovery and description of the generative narrative frameworks of conspiracy theories on social media, and of actual conspiracies reported in the news media. We base this work on two separate repositories of posts and news articles describing the well-known conspiracy theory Pizzagate from 2016, and the New Jersey conspiracy Bridgegate from 2013. We formulate a graphical generative machine learning model where nodes represent actors/actants, and multi-edges and self-loops among nodes capture context-specific relationships. Posts and news items are viewed as samples of subgraphs of the hidden narrative network. The problem of reconstructing the underlying structure is posed as a latent model estimation problem. We automatically extract and aggregate the actants and their relationships from the posts and articles. We capture context-specific actants and inter-actant relationships by developing a system of supernodes and subnodes. We use these to construct a network, which constitutes the underlying narrative framework. We show how the Pizzagate framework relies on the conspiracy theorists' interpretation of hidden knowledge to link otherwise unlinked domains of human interaction, and hypothesize that this multi-domain focus is an important feature of conspiracy theories. While Pizzagate relies on the alignment of multiple domains, Bridgegate remains firmly rooted in the single domain of New Jersey politics. We hypothesize that the narrative framework of a conspiracy theory might stabilize quickly, in contrast to that of an actual conspiracy, which may develop more slowly as revelations come to light.
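The supernode/subnode aggregation step can be sketched in miniature: collapse surface-form actant mentions (subnodes) into canonical supernodes via an alias table, keeping a list of relationships per node pair in the role of multi-edges. The triples and aliases below are invented for illustration, not output of the paper's pipeline:

```python
from collections import defaultdict

# Hypothetical (subject, relation, object) tuples; in the actual pipeline
# these would be extracted automatically from posts and news articles.
TRIPLES = [
    ("clinton", "visits", "pizzeria"),
    ("hillary", "emails", "podesta"),
    ("podesta", "owns", "pizzeria"),
]

# Alias table mapping surface forms (subnodes) to a canonical supernode.
ALIASES = {"clinton": "H. Clinton", "hillary": "H. Clinton"}

def build_network(triples, aliases):
    """Aggregate actant mentions into supernodes; the relation list kept
    per node pair stands in for context-specific multi-edges."""
    net = defaultdict(list)
    for subj, rel, obj in triples:
        pair = (aliases.get(subj, subj), aliases.get(obj, obj))
        net[pair].append(rel)
    return dict(net)

net = build_network(TRIPLES, ALIASES)
```

Note that after aliasing, the two mentions "clinton" and "hillary" contribute edges to the same supernode, which is what lets sampled subgraphs from many posts be overlaid into one narrative network.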
This paper studies conspiracy and debunking narratives about COVID-19 origination on a major Chinese social media platform, Weibo, from January to April 2020. Popular conspiracies about COVID-19 on Weibo, including that the virus is human-synthesized or a bioweapon, differ substantially from those in the US. They attribute more responsibility to the US than to China, especially following Sino-US confrontations. Compared to conspiracy posts, debunking posts are associated with lower user participation but higher mobilization. Debunking narratives can be more engaging when they come from women and influencers and cite scientists. Our findings suggest that conspiracy narratives can carry highly cultural and political orientations. Correction efforts should consider political motives and identify important stakeholders to reconstruct international dialogues toward intercultural understanding.
The declaration of COVID-19 as a pandemic has greatly amplified the spread of related information on social media, such as Twitter, Facebook, and WeChat. Unlike previous studies, which focused on how to detect misinformation or fake news related to COVID-19, we investigate how the disease and information about it co-evolve in the population. We focus on COVID-19 and its information during the period when the disease was widely spread in China, i.e., from January 25th to March 24th, 2020. We first explore how the disease and information co-evolve via spatial analysis of the two spreading processes. We visualize the geo-location of both disease and information at the province level and find that the disease is more geo-localized than the information. We find a high correlation between the disease and information data, and also that people care about the spread only when it reaches their neighborhood. Regarding the content of the information, we find that positive messages are more negatively correlated with the disease than negative and neutral messages. Additionally, we apply machine learning algorithms, i.e., linear regression and random forest, to predict the number of infected using different disease-spatial and information-related characteristics. We find that the disease-spatial characteristics of nearby cities help to improve prediction accuracy. Information-related characteristics can also help to improve prediction performance, but with a delay: the improvement comes from using, for instance, the number of messages from 10 days earlier for disease prediction. The methodology proposed in this paper may shed light on new clues about emerging infections.
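The delayed information signal can be illustrated with a toy lagged regression: fit today's case count against the message count from k days earlier. The numbers below are made up, and a one-variable least-squares fit stands in for the paper's linear-regression and random-forest models:

```python
# Toy daily series (invented numbers, not the study's data).
infections = [10, 14, 20, 28, 40, 55, 70, 88]
messages = [100, 140, 210, 300, 420, 560, 720, 900]

def ols(xs, ys):
    """One-variable ordinary least squares: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return slope, my - slope * mx

def fit_with_lag(lag):
    """Fit infections[t] from messages[t - lag], shifting the two series."""
    xs = messages[: len(messages) - lag]
    ys = infections[lag:]
    return ols(xs, ys)

slope, intercept = fit_with_lag(2)
# Predict infections two days after observing 1000 messages.
pred = slope * 1000 + intercept
```

Varying `lag` and comparing fit quality is the simplest way to see the "improvement with a delay" effect the abstract reports.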
Natasa Golo 2015
The results of a public opinion poll performed in January 2015, just after the terrorist attacks on the French satirical weekly magazine Charlie Hebdo and a kosher supermarket in Paris, in which 17 people were killed, showed that a significant number of French citizens (17%) held conspiratorial beliefs about the events. This motivated the alternative analysis of public opinion presented in this paper. We collected 990 online articles mentioning Charlie Hebdo from the Le Monde web site (one of the leading French newspapers) and examined those that contained words related to conspiracy (in French: "complot", "conspiration", or "conjuration"). We then analyzed the readers' response, performing a semantic analysis of the 16,490 comments posted online as reactions to these articles. We identified two attempts to launch a conspiratorial rumour. A more recent Le Monde article, which reflects on those early conspiratorial attempts from a rational perspective, and the commentary thereon, showed that readers have more interest in understanding the possible causes of the onset of conspiratorial beliefs than in delving into the arguments that the conspiracists previously brought before the public. We discuss the results of this semantic analysis and interpret the opinion dynamics measured in the data.
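The article-filtering step the study describes - keeping pieces that contain one of the three conspiracy-related French terms - amounts to a keyword match. A minimal sketch on invented headlines:

```python
import re

# The three conspiracy-related French terms named in the study.
PATTERN = re.compile(r"complot|conspiration|conjuration", re.IGNORECASE)

# Invented headlines for illustration (not from the Le Monde corpus).
articles = [
    "Un complot derrière l'attaque ?",
    "Marche républicaine à Paris",
    "La théorie de la conspiration se répand",
]

matched = [a for a in articles if PATTERN.search(a)]  # keeps 2 of the 3
```

In practice one would also match inflected forms (e.g. "complots", "conspirationniste"), which a stem-based pattern such as `complot\w*` would cover.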
