
Probabilistic Social Learning Improves the Public's Detection of Misinformation

Published by: Douglas Guilbeault
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





The digital spread of misinformation is one of the leading threats to democracy, public health, and the global economy. Popular strategies for mitigating misinformation include crowdsourcing, machine learning, and media literacy programs that require social media users to classify news in binary terms as either true or false. However, research on peer influence suggests that framing decisions in binary terms can amplify judgment errors and limit social learning, whereas framing decisions in probabilistic terms can reliably improve judgments. In this preregistered experiment, we compare online peer networks that collaboratively evaluate the veracity of news by communicating either binary or probabilistic judgments. Exchanging probabilistic estimates of news veracity substantially improved individual and group judgments, with the effect of eliminating polarization in news evaluation. By contrast, exchanging binary classifications reduced social learning and entrenched polarization. The benefits of probabilistic social learning are robust to participants' education, gender, race, income, religion, and partisanship.
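The contrast between the two communication formats can be pictured with a toy simulation. The sketch below is not the paper's experimental protocol: it simply compares DeGroot-style averaging of probabilistic estimates with majority-vote adoption of binary labels in a small synthetic group, with the group size, number of rounds, and noise parameters chosen purely for illustration.

```python
# Toy simulation (not the paper's design): contrast exchanging probabilistic
# estimates (agents average their estimate with the group mean) against
# exchanging binary classifications (agents adopt the majority label).
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_rounds = 20, 3

# Noisy initial beliefs: probability each agent assigns to the article being true.
beliefs = np.clip(rng.normal(loc=0.6, scale=0.25, size=n_agents), 0.0, 1.0)

# Probabilistic exchange: repeated DeGroot-style averaging with the group mean.
prob_beliefs = beliefs.copy()
for _ in range(n_rounds):
    prob_beliefs = 0.5 * prob_beliefs + 0.5 * prob_beliefs.mean()

# Binary exchange: agents only see votes and adopt the majority classification.
votes = (beliefs > 0.5).astype(int)
for _ in range(n_rounds):
    votes = np.full(n_agents, int(votes.mean() > 0.5))

print("mean probabilistic estimate:", round(float(prob_beliefs.mean()), 3))
print("share classifying the article as true:", float(votes.mean()))
```

In the averaging case the group retains a graded estimate of veracity, whereas the majority-vote case collapses all nuance into a single label after the first round of exchange.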


Read also

The ongoing Coronavirus (COVID-19) pandemic highlights the inter-connectedness of our present-day globalized world. With social distancing policies in place, virtual communication has become an important source of (mis)information. As an increasing number of people rely on social media platforms for news, identifying misinformation and uncovering the nature of online discourse around COVID-19 has emerged as a critical task. To this end, we collected streaming data related to COVID-19 using the Twitter API, starting March 1, 2020. We identified unreliable and misleading content based on fact-checking sources, and examined the narratives promoted in misinformation tweets, along with the distribution of engagements with these tweets. In addition, we provide examples of the spreading patterns of prominent misinformation tweets. The analysis is presented and updated on a publicly accessible dashboard (https://usc-melady.github.io/COVID-19-Tweet-Analysis) to track the nature of online discourse and misinformation about COVID-19 on Twitter from March 1 to June 5, 2020. The dashboard provides a daily list of identified misinformation tweets, along with topics, sentiments, and emerging trends in the COVID-19 Twitter discourse. The dashboard is provided to improve visibility into the nature and quality of information shared online, and to provide real-time access to insights and information extracted from the dataset.
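As a rough illustration of the fact-check-based filtering step described above, the sketch below flags tweets whose linked domains appear on a list of unreliable sources. The domain list, the `expanded_urls` field, and the tweet structure are hypothetical stand-ins, not the authors' actual pipeline or data format.

```python
# Minimal sketch, assuming tweets were already collected as dicts with an
# "expanded_urls" field; the domain list stands in for fact-checking sources
# and is purely illustrative.
from urllib.parse import urlparse

UNRELIABLE_DOMAINS = {"example-fake-news.com", "another-unreliable-site.net"}  # hypothetical

def is_misleading(tweet: dict) -> bool:
    """Flag a tweet if any linked domain is on the unreliable-source list."""
    for url in tweet.get("expanded_urls", []):
        domain = urlparse(url).netloc.lower()
        if domain.startswith("www."):
            domain = domain[4:]
        if domain in UNRELIABLE_DOMAINS:
            return True
    return False

tweets = [
    {"id": 1, "expanded_urls": ["https://example-fake-news.com/miracle-cure"]},
    {"id": 2, "expanded_urls": ["https://www.who.int/news"]},
]
print("flagged tweet ids:", [t["id"] for t in tweets if is_misleading(t)])
```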
The spreading of unsubstantiated rumors on online social networks (OSN), either unintentionally or intentionally (e.g., for political reasons or even trolling), can have serious consequences, as in the recent case of rumors about Ebola causing disruption to health-care workers. Here we show that indicators aimed at quantifying information consumption patterns might provide important insights about the virality of false claims. In particular, we address the driving forces behind the popularity of content by analyzing a sample of 1.2M Italian Facebook users consuming different (and opposite) types of information (science and conspiracy news). We show that users' engagement across different content correlates with the number of friends having similar consumption patterns (homophily), indicating the area in the social network where certain types of content are more likely to spread. Then, we test diffusion patterns on an external sample of 4,709 intentional satirical false claims, showing that neither the presence of hubs (structural properties) nor the most active users (influencers) are prevalent in viral phenomena. Instead, we found that in an environment where misinformation is pervasive, users' aggregation around shared beliefs may make the usual exposure to conspiracy stories (polarization) a determinant for the virality of false information.
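The homophily result is, at its core, a correlation between each user's engagement and the number of friends with similar consumption patterns. The toy calculation below reproduces only that kind of computation on synthetic numbers; it is not the authors' Facebook analysis, and the made-up coefficients merely build in a positive relationship.

```python
# Synthetic illustration of the homophily correlation: engagement vs. number of
# friends consuming the same type of content (science or conspiracy).
import numpy as np

rng = np.random.default_rng(1)
n_users = 500
similar_friends = rng.poisson(lam=30, size=n_users)               # friends with matching consumption
engagement = 2.0 * similar_friends + rng.normal(0, 10, n_users)   # likes/comments per user (made up)

r = np.corrcoef(similar_friends, engagement)[0, 1]
print(f"Pearson correlation between homophily and engagement: {r:.2f}")
```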
Online debates are often characterised by extreme polarisation and heated discussions among users. The presence of hate speech online is becoming increasingly problematic, making the development of appropriate countermeasures necessary. In this work, we perform hate speech detection on a corpus of more than one million comments on YouTube videos using a machine learning model fine-tuned on a large set of hand-annotated data. Our analysis shows that there is no evidence of the presence of serial haters, intended as active users posting exclusively hateful comments. Moreover, consistently with the echo chamber hypothesis, we find that users skewed towards one of the two categories of video channels (questionable, reliable) are more prone to use inappropriate, violent, or hateful language within their opponents' community. Interestingly, users loyal to reliable sources use on average a more toxic language than their counterparts. Finally, we find that the overall toxicity of the discussion increases with its length, measured both in terms of number of comments and time. Our results show that, consistently with Godwin's law, online debates tend to degenerate towards increasingly toxic exchanges of views.
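The paper relies on a model fine-tuned on a large hand-annotated corpus; as a much simpler stand-in, the sketch below trains a TF-IDF plus logistic-regression baseline on a handful of invented comments, just to make the classification step concrete. The example comments and labels are fabricated for illustration and do not come from the study.

```python
# Rough baseline sketch (not the paper's fine-tuned model): a TF-IDF +
# logistic-regression classifier trained on a tiny, invented set of comments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical hand-annotated examples (1 = hateful, 0 = acceptable).
comments = [
    "you people are vermin",
    "great video, thanks for the explanation",
    "get out of our country",
    "interesting point about the data",
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(comments, labels)
print(clf.predict(["thanks for sharing", "those people are vermin"]))
```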
The Turing test aimed to recognize the behavior of a human from that of a computer algorithm. Such a challenge is more relevant than ever in today's social media context, where limited attention and technology constrain the expressive power of humans, while incentives abound to develop software agents mimicking humans. These social bots interact, often unnoticed, with real people in social media ecosystems, but their abundance is uncertain. While many bots are benign, one can design harmful bots with the goals of persuading, smearing, or deceiving. Here we discuss the characteristics of modern, sophisticated social bots, and how their presence can endanger online ecosystems and our society. We then review current efforts to detect social bots on Twitter. Features related to content, network, sentiment, and temporal patterns of activity are imitated by bots but at the same time can help discriminate synthetic behaviors from human ones, yielding signatures of engineered social tampering.
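To make the feature-based detection idea concrete, the sketch below fits a random-forest classifier on synthetic content, network, sentiment, and temporal features; the feature set, the distributions, and the labels are all assumptions for illustration, not those of any deployed bot detector.

```python
# Hypothetical feature-based sketch (synthetic data only): a random forest over
# the kinds of signals the review mentions -- activity rate, retweet share,
# sentiment, and a network ratio.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 200
# Columns: tweets/day, fraction of retweets, mean sentiment, follower/friend ratio.
humans = np.column_stack([rng.normal(5, 2, n), rng.uniform(0.0, 0.5, n),
                          rng.normal(0.0, 0.3, n), rng.lognormal(0.0, 1.0, n)])
bots = np.column_stack([rng.normal(60, 15, n), rng.uniform(0.6, 1.0, n),
                        rng.normal(0.4, 0.2, n), rng.lognormal(-1.0, 1.0, n)])
X = np.vstack([humans, bots])
y = np.array([0] * n + [1] * n)   # 0 = human, 1 = bot

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```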
An important challenge in the process of tracking and detecting the dissemination of misinformation is to understand the political gap between people who engage with so-called fake news. A possible factor responsible for this gap is opinion polarization, which may prompt the general public to classify content that they disagree with or want to discredit as fake. In this work, we study the relationship between political polarization and content reported by Twitter users as related to fake news. We investigate how polarization may create distinct narratives on what misinformation actually is. We perform our study based on two datasets collected from Twitter. The first dataset contains tweets about US politics in general, from which we compute the degree of polarization of each user towards the Republican and Democratic Party. In the second dataset, we collect tweets and URLs that co-occurred with fake-news-related keywords and hashtags, such as #FakeNews and #AlternativeFact, as well as reactions towards such tweets and URLs. We then analyze the relationship between polarization and what is perceived as misinformation, and whether users are designating information that they disagree with as fake. Our results show an increase in the polarization of users and URLs associated with fake-news keywords and hashtags, when compared to information not labeled as fake news. We discuss the impact of our findings on the challenges of tracking fake news in the ongoing battle against misinformation.
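A minimal sketch of the kind of polarization score described above, assuming it can be summarized as the normalized difference between a user's interactions with Republican-leaning and Democratic-leaning content; the function, field names, and counts are illustrative assumptions rather than the authors' exact measure.

```python
# Illustrative polarization score: +1 = fully Republican-leaning interactions,
# -1 = fully Democratic-leaning, 0 = balanced. Counts below are made up.
def polarization(rep_interactions: int, dem_interactions: int) -> float:
    total = rep_interactions + dem_interactions
    return 0.0 if total == 0 else (rep_interactions - dem_interactions) / total

users = {"user_a": (90, 10), "user_b": (15, 85), "user_c": (50, 50)}
for name, (rep, dem) in users.items():
    print(name, round(polarization(rep, dem), 2))
```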