
On Microtargeting Socially Divisive Ads: A Case Study of Russia-Linked Ad Campaigns on Facebook

Added by Filipe Ribeiro
Publication date: 2018
Language: English





Targeted advertising is meant to improve the efficiency of matching advertisers to their customers. However, it can also be abused by malicious advertisers to efficiently reach people susceptible to false stories, stoke grievances, and incite social conflict. Since targeted ads are not seen by non-targeted and non-vulnerable people, malicious ads are likely to go unreported and their effects undetected. This work examines a specific case of malicious advertising, exploring the extent to which political ads from the Russian Internet Research Agency (IRA) run prior to the 2016 U.S. elections exploited Facebook's targeted advertising infrastructure to efficiently target ads on divisive or polarizing topics (e.g., immigration, race-based policing) at vulnerable sub-populations. In particular, we do the following: (a) We conduct U.S. census-representative surveys to characterize how users with different political ideologies report, approve, and perceive truth in the content of the IRA ads. Our surveys show that many ads are divisive: they elicit very different reactions from people belonging to different socially salient groups. (b) We characterize how these divisive ads are targeted at sub-populations that feel particularly aggrieved by the status quo. Our findings support existing calls for greater transparency of the content and targeting of political ads. (c) We focus in particular on how the Facebook ad API facilitates such targeting. We show how the enormous amount of personal data that Facebook aggregates about users and makes available to advertisers enables such malicious targeting.
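
The notion of a divisive ad used above lends itself to a simple quantification. Below is a minimal sketch of one plausible divisiveness score: the largest gap in approval rates between any two respondent groups. The scoring rule and the toy data are illustrative assumptions, not the paper's exact metric.

```python
# Hypothetical divisiveness score for an ad, assuming survey responses
# come as (group, approves) pairs. Scoring rule: the largest gap in
# approval rates between any two groups (an illustrative choice).
from collections import defaultdict
from itertools import combinations

def divisiveness(responses):
    """responses: iterable of (group, approves) tuples, approves in {0, 1}."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
    for group, approves in responses:
        counts[group][0] += int(approves)
        counts[group][1] += 1
    rates = {g: a / n for g, (a, n) in counts.items()}
    # A divisive ad is one where some pair of groups reacts very differently.
    return max(abs(rates[g1] - rates[g2]) for g1, g2 in combinations(rates, 2))

responses = [("liberal", 1), ("liberal", 1), ("conservative", 0),
             ("conservative", 0), ("conservative", 1)]
print(divisiveness(responses))  # ~0.67: large approval gap -> divisive
```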

Related Research

Facebook's News Feed personalization algorithm has a significant daily impact on the lifestyle, mood, and opinions of millions of Internet users. Nonetheless, the behavior of such algorithms usually lacks transparency, motivating measurement, modeling, and analysis in order to understand and improve their properties. In this paper, we propose a reproducible methodology, encompassing measurements and an analytical model, to capture the visibility of publishers over a News Feed. First, measurements are used to parameterize and to validate the expressive power of the proposed model. Then, we conduct a what-if analysis to assess the visibility bias incurred by users against a baseline derived from the model. Our results indicate that a significant bias exists and that it is more prominent at the top position of the News Feed. In addition, we found that the bias is non-negligible even for users who are deliberately set as neutral with respect to their political views.
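
As a hedged illustration of the what-if analysis this abstract describes, the sketch below compares observed publisher visibility shares against a model-derived baseline. The uniform baseline and all numbers are invented for illustration; they are not the paper's model or data.

```python
# Illustrative what-if comparison: a publisher's observed share of
# News Feed impressions versus a baseline predicted by a model.
# Here the baseline is a naive uniform share over followed publishers.
def visibility_bias(observed, baseline):
    """observed, baseline: dicts mapping publisher -> visibility share."""
    return {p: observed[p] - baseline.get(p, 0.0) for p in observed}

# Hypothetical baseline: each of 4 followed publishers gets an equal share.
baseline = {p: 0.25 for p in ("A", "B", "C", "D")}
observed_top = {"A": 0.55, "B": 0.20, "C": 0.15, "D": 0.10}  # top position
observed_all = {"A": 0.30, "B": 0.25, "C": 0.25, "D": 0.20}  # all positions

print(visibility_bias(observed_top, baseline))  # bias prominent at the top
print(visibility_bias(observed_all, baseline))  # closer to the baseline
```
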
The advent of the WWW changed the way we produce and access information. Recent studies showed that users tend to select information that is consistent with their system of beliefs, forming polarized groups of like-minded people around shared narratives where dissenting information is ignored. In this environment, users cooperate to frame and reinforce their shared narrative, making any attempt at debunking ineffective. Such a configuration occurs even in the consumption of news online, and considering that 63% of users access news directly from social media, one hypothesis is that more polarization allows for further spreading of misinformation. Along this path, we focus on the polarization of users around news outlets on Facebook in different European countries (Italy, France, Spain, and Germany). First, we compare the pages' posting behavior and the users' interaction patterns across countries and observe different posting, liking, and commenting rates. Second, we explore the tendency of users to interact with different pages (i.e., selective exposure) and the emergence of polarized communities generated around specific pages. Then, we introduce a new metric -- the polarization rank -- to measure the polarization of communities in each country. We find that Italy is the most polarized country, followed by France, Germany, and lastly Spain. Finally, we present a variation of the Bounded Confidence Model to simulate the emergence of these communities by considering users' engagement with and trust in the news. Our findings suggest that trust in the information broadcaster plays a pivotal role against the polarization of users online.
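
The Bounded Confidence Model mentioned above is a standard opinion dynamics model. A minimal Deffuant-style implementation is sketched below; the paper's variation adds engagement and trust, which this sketch deliberately omits.

```python
# Classic Bounded Confidence (Deffuant-style) opinion dynamics:
# randomly paired agents move their opinions closer only if they already
# agree to within a confidence threshold epsilon, which lets separate
# opinion clusters (polarized communities) emerge.
import random

def bounded_confidence(n=200, steps=50_000, epsilon=0.2, mu=0.5, seed=0):
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n)]  # opinions in [0, 1]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        # Agents only influence each other if their opinions are close.
        if abs(opinions[i] - opinions[j]) < epsilon:
            shift = mu * (opinions[j] - opinions[i])
            opinions[i] += shift
            opinions[j] -= shift
    return opinions

ops = sorted(round(o, 2) for o in bounded_confidence())
# Small epsilon -> opinions settle into a few separated clusters.
print(ops[:10], "...", ops[-10:])
```
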
Aria Rezaei, Jie Gao (2019)
A commonly used method to protect user privacy in data collection is to perform randomized perturbation on users' real data before collection, so that aggregated statistics can still be inferred without endangering secrets held by individuals. In this paper, we take a closer look at the validity of Differential Privacy guarantees when the sensitive attributes are subject to social influence and contagions. We first show that in the absence of any knowledge about the contagion network, an adversary that tries to predict the real values from perturbed ones cannot achieve an area under the ROC curve (AUC) above $1-(1-\delta)/(1+e^{\varepsilon})$ if the dataset is perturbed using an $(\varepsilon,\delta)$-differentially private mechanism. Then, we show that with knowledge of the contagion network and model, one can do significantly better. We demonstrate that our method surpasses the performance limit imposed by differential privacy. Our experiments also reveal that nodes with high influence on others are at greater risk of revealing their secrets. The performance is shown through extensive experiments on synthetic and real-world networks.
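
To make the network-free bound concrete, here is a small sketch using randomized response, a classic $(\varepsilon, 0)$-differentially private mechanism for binary attributes, evaluated against the AUC ceiling $1-(1-\delta)/(1+e^{\varepsilon})$ with $\delta = 0$. The experimental setup is illustrative and is not the paper's.

```python
# Randomized response: report the true bit with probability
# e^eps / (1 + e^eps), otherwise flip it. For balanced secrets, an
# adversary that simply trusts the report attains accuracy equal to
# the ceiling 1 - (1 - delta) / (1 + e^eps) with delta = 0.
import math
import random

def randomized_response(bit, epsilon, rng):
    keep = math.exp(epsilon) / (1 + math.exp(epsilon))
    return bit if rng.random() < keep else 1 - bit

def auc_ceiling(epsilon, delta=0.0):
    return 1 - (1 - delta) / (1 + math.exp(epsilon))

rng = random.Random(1)
epsilon = 1.0
secrets = [rng.randint(0, 1) for _ in range(100_000)]
reports = [randomized_response(b, epsilon, rng) for b in secrets]
accuracy = sum(b == r for b, r in zip(secrets, reports)) / len(secrets)
print(accuracy, auc_ceiling(epsilon))  # both close to e^eps/(1+e^eps) ~ 0.73
```
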
Vaccine hesitancy has been recognized as a major global health threat. Access to all kinds of information on social media has been suggested as a potentially powerful influence on hesitancy. Recent studies in fields other than vaccination show that unmediated access to a vast amount of content through the Internet has resulted in major segregation of users into polarized groups. Users select the information adhering to their system of beliefs and tend to ignore dissenting information. In this paper, we assess whether there is polarization in social media use in the field of vaccination. We perform a thorough quantitative analysis on Facebook, analyzing 2.6M users interacting with 298,018 posts over a time span of seven years and five months. We used community detection algorithms to automatically detect the communities emerging from users' activity and to quantify the cohesiveness of the communities over time. Our findings show that content consumption about vaccines is dominated by the echo-chamber effect and that polarization has increased over the years. Communities emerge from users' consumption habits, i.e., the majority of users consume information only in favor of or only against vaccines, not both. The existence of echo chambers may explain why social media campaigns providing accurate information have limited reach, may be effective only in sub-groups, and might even foment further polarization of opinions. The introduction of dissenting information into a sub-group is disregarded and can have a backfire effect, further reinforcing the existing opinions within the sub-group.
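
A minimal sketch of the community-detection step, assuming interactions reduce to (user, page) pairs and using networkx's modularity-based communities as a stand-in for the paper's algorithms; the page names and data are invented for illustration.

```python
# Link users who interacted with the same page, then detect the
# communities that emerge from those co-interaction patterns.
from collections import defaultdict
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

interactions = [("u1", "pro_vax"), ("u2", "pro_vax"), ("u3", "anti_vax"),
                ("u4", "anti_vax"), ("u1", "pro_vax2"), ("u2", "pro_vax2")]

by_page = defaultdict(set)
for user, page in interactions:
    by_page[page].add(user)

G = nx.Graph()
for users in by_page.values():
    G.add_edges_from(combinations(sorted(users), 2))

# Users cluster by consumption habits: pro- and anti-vaccine groups.
for community in greedy_modularity_communities(G):
    print(sorted(community))  # e.g. ['u1', 'u2'] and ['u3', 'u4']
```
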
Nowadays, users get informed and shape their opinions through social media. However, the disintermediated access to content does not guarantee the quality of information. Selective exposure and confirmation bias have indeed been shown to play a pivotal role in content consumption and information spreading. Users tend to select information adhering to (and reinforcing) their worldview and to ignore dissenting information. This pattern elicits the formation of polarized groups -- i.e., echo chambers -- where interaction with like-minded people might even reinforce polarization. In this work, we address news consumption around Brexit in the UK on Facebook. In particular, we perform a massive analysis of more than 1 million users interacting with Brexit-related posts from the main news providers between January and July 2016. We show that consumption patterns elicit the emergence of two distinct communities of news outlets. Furthermore, to better characterize inner-group dynamics, we introduce a new technique that combines automatic topic extraction and sentiment analysis. We compare how the same topics are presented in posts with the related emotional responses in comments, finding significant differences in both echo chambers and showing that polarization influences the perception of topics. Our results provide important insights about the determinants of polarization and the evolution of core narratives in online debates.
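
A toy sketch of comparing the emotional response to the same topics across two echo chambers: keyword-based topic tagging plus a tiny sentiment lexicon. The paper combines automatic topic extraction with sentiment analysis; both components here are deliberately simplified stand-ins, and all keywords and comments are invented.

```python
# Tag each comment with topics via keyword overlap, score its sentiment
# with a small lexicon, and average per topic within each community.
TOPIC_KEYWORDS = {"economy": {"trade", "jobs", "economy"},
                  "immigration": {"border", "immigration", "migrants"}}
LEXICON = {"great": 1, "good": 1, "bad": -1, "disaster": -1, "ruin": -1}

def topic_sentiment(comments):
    scores = {t: [] for t in TOPIC_KEYWORDS}
    for text in comments:
        words = set(text.lower().split())
        score = sum(LEXICON.get(w, 0) for w in words)
        for topic, keys in TOPIC_KEYWORDS.items():
            if words & keys:
                scores[topic].append(score)
    return {t: sum(v) / len(v) for t, v in scores.items() if v}

leave_comments = ["trade deals will be great", "border control is good"]
remain_comments = ["trade will be a disaster", "jobs bad for the economy"]
print(topic_sentiment(leave_comments))   # same topics, positive tone
print(topic_sentiment(remain_comments))  # same topics, negative tone
```
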
