
Biases in the Facebook News Feed: a Case Study on the Italian Elections

Posted by: Eduardo Martins Hargreaves
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





The Facebook News Feed personalization algorithm has a significant daily impact on the lifestyle, mood and opinion of millions of Internet users. Nonetheless, the behavior of such algorithms usually lacks transparency, motivating measurements, modeling and analysis in order to understand and improve their properties. In this paper, we propose a reproducible methodology encompassing measurements and an analytical model to capture the visibility of publishers over a News Feed. First, measurements are used to parameterize and to validate the expressive power of the proposed model. Then, we conduct a what-if analysis to assess the visibility bias incurred by users against a baseline derived from the model. Our results indicate that a significant bias exists and that it is more prominent at the top position of the News Feed. In addition, we find that the bias is non-negligible even for users that are deliberately set as neutral with respect to their political views.
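
To make the what-if analysis concrete, here is a minimal sketch of the idea of comparing a publisher's observed visibility against a model-derived baseline. It is not the authors' actual model: the publisher names, posting rates and observed shares below are hypothetical, and the baseline is simply taken as proportional to posting rate.

```python
# Hypothetical measurements: fraction of observed sessions in which each publisher
# occupied position 1 of the News Feed (numbers are illustrative, not from the paper).
observed_top_slot = {
    "publisher_A": 0.42,
    "publisher_B": 0.33,
    "publisher_C": 0.25,
}
# Posting rates (posts per day) used to derive the baseline visibility share.
posting_rate = {
    "publisher_A": 10,
    "publisher_B": 20,
    "publisher_C": 20,
}

total_posts = sum(posting_rate.values())
baseline = {p: r / total_posts for p, r in posting_rate.items()}

# Visibility bias: positive means the feed over-exposes the publisher relative
# to the posting-rate baseline, negative means it under-exposes it.
bias = {p: observed_top_slot[p] - baseline[p] for p in observed_top_slot}
for p, b in sorted(bias.items(), key=lambda kv: -kv[1]):
    print(f"{p}: observed={observed_top_slot[p]:.2f} "
          f"baseline={baseline[p]:.2f} bias={b:+.2f}")
```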




Read also

The advent of the WWW changed the way we can produce and access information. Recent studies showed that users tend to select information that is consistent with their system of beliefs, forming polarized groups of like-minded people around shared narratives where dissenting information is ignored. In this environment, users cooperate to frame and reinforce their shared narrative, making any attempt at debunking inefficient. Such a configuration occurs even in the consumption of news online, and considering that 63% of users access news directly from social media, one hypothesis is that more polarization allows for further spreading of misinformation. Along this path, we focus on the polarization of users around news outlets on Facebook in different European countries (Italy, France, Spain and Germany). First, we compare the pages' posting behavior and the users' interaction patterns across countries and observe different posting, liking and commenting rates. Second, we explore the tendency of users to interact with different pages (i.e., selective exposure) and the emergence of polarized communities generated around specific pages. Then, we introduce a new metric -- i.e., polarization rank -- to measure the polarization of communities for each country. We find that Italy is the most polarized country, followed by France, Germany and lastly Spain. Finally, we present a variation of the Bounded Confidence Model to simulate the emergence of these communities by considering the users' engagement and trust in the news. Our findings suggest that trust in the information broadcaster plays a pivotal role against the polarization of users online.
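
For readers unfamiliar with bounded confidence dynamics, the following is a minimal Deffuant-style sketch. The paper's variant additionally incorporates users' engagement and trust in the news; the trust_weight parameter below is only a simplified stand-in for that ingredient, not the authors' formulation.

```python
import random

def bounded_confidence(n_agents=500, epsilon=0.2, mu=0.5,
                       trust_weight=1.0, steps=50_000, seed=0):
    """Deffuant-style bounded confidence dynamics on random pairwise encounters."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n_agents)]   # opinions in [0, 1]
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        diff = opinions[j] - opinions[i]
        # Agents only influence each other if their opinions are close enough.
        if abs(diff) < epsilon:
            shift = mu * trust_weight * diff
            opinions[i] += shift
            opinions[j] -= shift
    return opinions

final = bounded_confidence()
# Crude proxy for the number of emergent opinion clusters: occupied 0.1-wide bins.
print(len({round(o, 1) for o in final}), "occupied opinion bins after convergence")
```
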
The social brain hypothesis fixes at 150 the number of social relationships we are able to maintain. Similar cognitive constraints emerge in several aspects of our daily life, from our mobility up to the way we communicate, and might even affect the way we consume information online. Indeed, despite the unprecedented amount of information we can access online, our attention span still remains limited. Furthermore, recent studies showed the tendency of users to ignore dissenting information but to interact with information adhering to their point of view. In this paper, we quantitatively analyze users' attention economy in news consumption on social media by analyzing 14M users interacting with 583 news outlets (pages) on Facebook over a time span of 6 years. In particular, we explore how users distribute their activity across news pages and topics. We find that, independently of their activity, users show the tendency to follow a very limited number of pages. On the other hand, users tend to interact with almost all the topics presented by their favored pages. Finally, we introduce a taxonomy accounting for users' behavior to distinguish between patterns of selective exposure and of interest. Our findings suggest that segregation of users in echo chambers might be an emerging effect of users' activity on social media and that selective exposure -- i.e., the tendency of users to consume information coherent with their preferences -- could be a major driver of their consumption patterns.
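
As an illustration of the page-concentration measurement described above, the sketch below counts, per user, the total number of interactions and the number of distinct pages touched. The interaction records are hypothetical toy data standing in for the 14M-user dataset.

```python
from collections import defaultdict

# Hypothetical (user, page) interaction records, e.g. one per like or comment.
interactions = [
    ("u1", "page_a"), ("u1", "page_a"), ("u1", "page_b"),
    ("u2", "page_a"), ("u2", "page_a"), ("u2", "page_a"),
    ("u3", "page_a"), ("u3", "page_b"), ("u3", "page_c"), ("u3", "page_c"),
]

pages_per_user = defaultdict(set)
activity_per_user = defaultdict(int)
for user, page in interactions:
    pages_per_user[user].add(page)       # distinct pages the user interacts with
    activity_per_user[user] += 1         # total activity of the user

for user in sorted(activity_per_user):
    print(f"{user}: {activity_per_user[user]} interactions "
          f"across {len(pages_per_user[user])} distinct pages")
```
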
We present VaccinItaly, a project which monitors Italian online conversations around vaccines, on Twitter and Facebook. We describe the ongoing data collection, which follows the SARS-CoV-2 vaccination campaign roll-out in Italy and we provide public access to the data collected. We show results from a preliminary analysis of the spread of low- and high-credibility news shared alongside vaccine-related conversations on both social media platforms. We also investigate the content of most popular YouTube videos and encounter several cases of harmful and misleading content about vaccines. Finally, we geolocate Twitter users who discuss vaccines and correlate their activity with open data statistics on vaccine uptake. We make up-to-date results available to the public through an interactive online dashboard associated with the project. The goal of our project is to gain further understanding of the interplay between the public discourse on online social media and the dynamics of vaccine uptake in the real world.
The advent of social media changed the way we consume content, favoring disintermediated access and production. This scenario has been a matter of critical discussion about its impact on society, magnified in the case of the Arab Spring and heavily criticized in the Brexit and 2016 U.S. elections. In this work, we explore information consumption on Twitter during the last European electoral campaign by analyzing the interaction patterns of official news sources, fake news sources, politicians, people from show business and many others. We extensively explore interactions among different classes of accounts in the months preceding the last European elections, held between the 23rd and 26th of May 2019. We collected almost 400,000 tweets posted by 863 accounts having different roles in public society. Through a thorough quantitative analysis we investigate the information flow among them, also exploiting geolocated information. Accounts show the tendency to confine their interactions within the same class, and the debate rarely crosses national borders. Moreover, we do not find any evidence of an organized network of accounts aimed at spreading disinformation. Instead, disinformation outlets are largely ignored by the other actors and hence play a peripheral role in online political discussions.
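
A minimal sketch of the class-confinement measurement follows: given interaction edges between accounts whose class is known, it computes the fraction of each class's interactions that stay within the same class. The class labels and edge list are hypothetical toy data, not the paper's 400,000-tweet corpus.

```python
from collections import Counter

# Hypothetical account classes and interaction edges (e.g. one edge per retweet).
account_class = {
    "news_1": "official_news", "news_2": "official_news",
    "pol_1": "politician", "pol_2": "politician",
    "fake_1": "fake_news", "fake_2": "fake_news",
}
interactions = [
    ("news_1", "news_2"), ("news_2", "news_1"), ("pol_1", "pol_2"),
    ("pol_1", "news_1"), ("fake_1", "fake_2"), ("fake_1", "news_1"),
]

within, total = Counter(), Counter()
for src, dst in interactions:
    c = account_class[src]
    total[c] += 1
    if account_class[dst] == c:          # interaction stays inside the class
        within[c] += 1

for c in sorted(total):
    print(f"{c}: {within[c] / total[c]:.0%} of interactions stay within the class")
```
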
Targeted advertising is meant to improve the efficiency of matching advertisers to their customers. However, targeted advertising can also be abused by malicious advertisers to efficiently reach people susceptible to false stories, stoke grievances, and incite social conflict. Since targeted ads are not seen by non-targeted and non-vulnerable people, malicious ads are likely to go unreported and their effects undetected. This work examines a specific case of malicious advertising, exploring the extent to which political ads from the Russian Internet Research Agency (IRA) run prior to the 2016 U.S. elections exploited Facebook's targeted advertising infrastructure to efficiently target ads on divisive or polarizing topics (e.g., immigration, race-based policing) at vulnerable sub-populations. In particular, we do the following: (a) We conduct U.S. census-representative surveys to characterize how users with different political ideologies report, approve, and perceive truth in the content of the IRA ads. Our surveys show that many ads are divisive: they elicit very different reactions from people belonging to different socially salient groups. (b) We characterize how these divisive ads are targeted to sub-populations that feel particularly aggrieved by the status quo. Our findings support existing calls for greater transparency of the content and targeting of political ads. (c) We particularly focus on how the Facebook ad API facilitates such targeting. We show how the enormous amount of personal data Facebook aggregates about users and makes available to advertisers enables such malicious targeting.
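
One plausible way to operationalize "divisive" from survey data is to take, per ad, the gap in approval rates between ideological groups. The sketch below uses hypothetical responses and is not the paper's exact survey instrument or scoring procedure.

```python
from collections import defaultdict

# Hypothetical survey responses: (ad_id, respondent_group, approved).
responses = [
    ("ad_1", "liberal", True), ("ad_1", "liberal", False),
    ("ad_1", "conservative", False), ("ad_1", "conservative", False),
    ("ad_2", "liberal", True), ("ad_2", "conservative", True),
]

counts = defaultdict(lambda: [0, 0])          # (approvals, responses) per (ad, group)
for ad, group, approved in responses:
    counts[(ad, group)][0] += int(approved)
    counts[(ad, group)][1] += 1

for ad in sorted({ad for ad, _ in counts}):
    rates = {g: a / n for (a_id, g), (a, n) in counts.items() if a_id == ad}
    gap = max(rates.values()) - min(rates.values())   # larger gap = more divisive
    print(f"{ad}: approval by group {rates}, divisiveness gap {gap:.2f}")
```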