
Political Bias and Factualness in News Sharing across more than 100,000 Online Communities

Published by: Galen Weld
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





As civil discourse increasingly takes place online, misinformation and the polarization of news shared in online communities have become ever more relevant concerns, with real-world harms across our society. Studying online news sharing at scale is challenging due to the massive volume of content shared by millions of users across thousands of communities. Existing research has therefore largely focused on specific communities or specific interventions, such as bans. However, understanding the prevalence and spread of misinformation and polarization more broadly, across thousands of online communities, is critical for the development of governance strategies, interventions, and community design. Here, we conduct the largest study of news sharing on reddit to date, analyzing more than 550 million links spanning 4 years. We use non-partisan news source ratings from Media Bias/Fact Check to annotate links to news sources with their political bias and factualness. We find that, compared to left-leaning communities, right-leaning communities have 105% more variance in the political bias of their news sources, and more links to relatively more biased sources, on average. We observe that reddit users' voting and re-sharing behaviors generally decrease the visibility of extremely biased and low-factual content, which receives 20% fewer upvotes and 30% fewer exposures from crossposts than more neutral or more factual content. This suggests that reddit is more resilient to low-factual content than Twitter. We show that extremely biased and low-factual content is highly concentrated, with 99% of such content being shared in only 0.5% of communities, giving credence to the recent strategy of community-wide bans and quarantines.
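The core measurements in this abstract (per-community variance in source bias, and the concentration of extremely biased, low-factual links in a small fraction of communities) reduce to simple aggregations over annotated links. A minimal sketch of that aggregation follows, assuming a hypothetical table of links with a numeric bias score and a low-factualness flag; the column names and toy data are illustrative, not the authors' pipeline.

```python
# A minimal sketch (not the authors' code) of the per-community
# aggregation described above. Column names and data are assumptions:
# bias_score mimics a Media Bias/Fact Check-style rating (-2 far left
# ... +2 far right); low_factual flags links to low-factualness sources.
import pandas as pd

links = pd.DataFrame({
    "community":   ["a", "a", "a", "b", "b", "b"],
    "bias_score":  [-1.0, 0.0, 1.0, 2.0, 2.0, -2.0],
    "low_factual": [False, False, False, True, True, False],
})

# Within-community variance of source bias; the paper's 105% figure
# compares this statistic between left- and right-leaning communities.
per_community = links.groupby("community")["bias_score"].agg(["mean", "var"])
print(per_community)

# Concentration of low-factual content: how few communities account
# for 99% of low-factual links (the paper reports 0.5%).
counts = (links[links["low_factual"]]
          .groupby("community").size()
          .sort_values(ascending=False))
cum_share = counts.cumsum() / counts.sum()
n_needed = int((cum_share < 0.99).sum()) + 1
print(f"{n_needed} of {links['community'].nunique()} communities hold 99% of low-factual links")
```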




Read also

Political polarization appears to be on the rise, as measured by voting behavior, general affect towards opposing partisans and their parties, and content posted and consumed online. Research over the years has focused on the role of the Web as a driver of polarization. In order to further our understanding of the factors behind online polarization, in the present work we collect and analyze Web browsing histories of tens of thousands of users alongside careful measurements of the time spent browsing various news sources. We show that online news consumption follows a polarized pattern, where users' visits to news sources aligned with their own political leaning are substantially longer than their visits to other news sources. Next, we show that such preferences hold at the individual as well as the population level, as evidenced by the emergence of clear partisan communities of news domains from aggregated browsing patterns. Finally, we tackle the important question of the role of user choices in polarization. Are users simply following the links proffered by their Web environment, or do they exacerbate partisan polarization by intentionally pursuing like-minded news sources? To answer this question, we compare browsing patterns with the underlying hyperlink structure spanned by the considered news domains, finding strong evidence of polarization in partisan browsing habits beyond that which can be explained by the hyperlink structure of the Web.
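The central measurement here, longer visits to ideologically aligned news sources, can be expressed as a comparison of mean visit durations split by user-domain alignment. The sketch below illustrates that comparison on toy data; the leaning codes and field names are assumptions, not the study's actual features.

```python
# A hedged sketch of the visit-duration comparison described above.
# Leanings are coded -1 (left) / +1 (right); all values are toy data.
import pandas as pd

visits = pd.DataFrame({
    "user_leaning":   [-1, -1, -1, 1, 1, 1],
    "domain_leaning": [-1,  1, -1, 1, -1, 1],
    "seconds":        [300, 40, 250, 500, 30, 420],
})

aligned = visits["user_leaning"] == visits["domain_leaning"]
print("mean visit length, aligned sources:  ",
      visits.loc[aligned, "seconds"].mean())
print("mean visit length, unaligned sources:",
      visits.loc[~aligned, "seconds"].mean())
```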
Media organizations bear great responsibility because of their considerable influence on shaping the beliefs and positions of our society. Any form of media can contain overly biased content, e.g., by reporting on political events in a selective or incomplete manner. A relevant question, hence, is whether and how such imbalanced news coverage can be exposed. The research presented in this paper addresses not only the automatic detection of bias but goes one step further in that it explores how political bias and unfairness are manifested linguistically. To this end, we utilize a new corpus of 6,964 news articles with labels derived from adfontesmedia.com and develop a neural model for bias assessment. By analyzing this model on article excerpts, we find insightful bias patterns at different levels of text granularity, from single words to the whole article discourse.
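The paper's model is neural and trained on the 6,964-article labeled corpus; as a self-contained stand-in for the same supervised framing, the sketch below fits a TF-IDF plus small MLP classifier on toy labeled snippets. It illustrates the task setup (text in, bias label out), not the authors' architecture.

```python
# A stand-in (not the paper's neural model) for the supervised
# bias-assessment setup. Toy data only; real labels would come from
# the adfontesmedia.com-derived corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

texts = [
    "the senator bravely defended our freedom",
    "officials presented the budget figures",
    "the radical scheme will ruin the country",
    "the committee voted 7 to 2 on the measure",
]
labels = ["biased", "neutral", "biased", "neutral"]

clf = make_pipeline(
    TfidfVectorizer(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
)
clf.fit(texts, labels)
print(clf.predict(["a disastrous plan pushed by extremists"]))
```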
News is a central source of information for individuals to inform themselves on current topics. Knowing a news article's slant and authenticity is of crucial importance in times of fake news, news bots, and centralization of media ownership. We introduce Newsalyze, a bias-aware news reader focusing on a subtle yet powerful form of media bias, named bias by word choice and labeling (WCL). WCL bias can alter the assessment of entities reported in the news, e.g., freedom fighters vs. terrorists. At the core of the analysis is a neural model that uses a news-adapted BERT language model to determine target-dependent sentiment, a high-level effect of WCL bias. While the analysis currently focuses on only this form of bias, the visualizations already reveal patterns of bias when contrasting articles (overview) and in-text instances of bias (article view).
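The freedom-fighters-vs-terrorists example is a polarity effect that even an off-the-shelf sentiment model can surface. The sketch below uses a generic Hugging Face sentiment pipeline as a rough approximation; the actual Newsalyze model is a news-adapted BERT conditioned on the target entity, which this sketch does not replicate.

```python
# Rough approximation (not the Newsalyze model): an off-the-shelf
# sentiment pipeline shows how word choice alone flips the polarity
# attached to the same event. A target-dependent model would instead
# condition the prediction on the specific entity mentioned.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # default distilbert checkpoint
for phrase in ["The freedom fighters entered the city.",
               "The terrorists entered the city."]:
    print(phrase, "->", sentiment(phrase)[0])
```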
Traditional media outlets are known to report political news in a biased way, potentially affecting the political beliefs of the audience and even altering their voting behavior. Many researchers focus on automatically detecting and identifying media bias in the news, but only very few studies systematically analyze how these biases can best be visualized and communicated. We create three manually annotated datasets and test varying visualization strategies. The results show no strong effect on bias awareness in the treatment groups compared to the control group, although a visualization of hand-annotated bias communicated bias instances more effectively than a framing visualization. Showing participants an overview page, which opposes different viewpoints on the same topic, does not yield differences in respondents' bias perception. Using a multilevel model, we find that perceived journalist bias is significantly related to perceived political extremeness and impartiality of the article.
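The multilevel model mentioned at the end, perceived journalist bias predicted by perceived extremeness and impartiality with responses nested within participants, maps directly onto a mixed-effects regression. A sketch with statsmodels follows; variable names and values are illustrative assumptions, not the study's data.

```python
# A hedged sketch of the multilevel model described above: a mixed-effects
# regression with a random intercept per participant. Toy data only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "participant":  [1, 1, 2, 2, 3, 3, 4, 4],
    "journ_bias":   [5, 4, 2, 3, 4, 5, 1, 2],   # perceived journalist bias
    "extremeness":  [6, 5, 2, 3, 5, 6, 1, 2],   # perceived political extremeness
    "impartiality": [2, 4, 6, 5, 3, 2, 7, 5],   # perceived impartiality
})

model = smf.mixedlm("journ_bias ~ extremeness + impartiality",
                    df, groups=df["participant"]).fit()
print(model.summary())
```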
What we expect from radiology AI algorithms will shape the selection and implementation of AI in radiologic practice. In this paper I consider prevailing expectations of AI and compare them to the expectations we have of human readers. I observe that the expectations of AI and of radiologists are fundamentally different. The expectations of AI are based on a strong and justified mistrust of the way AI makes decisions. Because AI decisions are not well understood, it is difficult to know how the algorithms will behave in new, unexpected situations. However, this mistrust is not mirrored in our expectations of human readers. Despite well-proven idiosyncrasies and biases in human decision making, we take comfort from the assumption that others make decisions the way we do, and we trust our own decision making. Despite our poor ability to explain human decision-making processes, we accept the explanations of decisions given by other humans. Because the goal of radiology is the most accurate radiologic interpretation, our expectations of radiologists and of AI should be similar, and both should reflect a healthy mistrust of the complicated and partially opaque decision processes unfolding in computer algorithms and human brains. This is generally not the case now.