
Understanding the Use of Fauxtography on Social Media

Posted by Emiliano De Cristofaro
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Despite the influence that image-based communication has on online discourse, the role played by images in disinformation is still not well understood. In this paper, we present the first large-scale study of fauxtography, analyzing the use of manipulated or misleading images in news discussion on online communities. First, we develop a computational pipeline geared to detect fauxtography, and identify over 61k instances of fauxtography discussed on Twitter, 4chan, and Reddit. Then, we study how posting fauxtography affects engagement of posts on social media, finding that posts containing it receive more interactions in the form of re-shares, likes, and comments. Finally, we show that fauxtography images are often turned into memes by Web communities. Our findings show that effective mitigation against disinformation needs to take images into account, and highlight a number of challenges in dealing with image-based disinformation.
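The abstract does not spell out how the detection pipeline works. As a hedged illustration only, the sketch below shows one plausible building block: flagging posted images that are near-duplicates of already fact-checked images via perceptual hashing. The directory names, threshold, and helper functions are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch (not the paper's pipeline): flag social-media images that are
# near-duplicates of known fact-checked images using perceptual hashing.
# Requires: pip install pillow imagehash
from pathlib import Path

import imagehash
from PIL import Image

HAMMING_THRESHOLD = 8  # illustrative cutoff for calling two pHashes "near-duplicates"


def phash_dir(directory):
    """Compute a perceptual hash for every readable image in a directory."""
    hashes = {}
    for path in Path(directory).glob("*"):
        try:
            hashes[path.name] = imagehash.phash(Image.open(path))
        except OSError:
            continue  # skip files that are not images
    return hashes


def match_fauxtography(post_images_dir, factchecked_dir):
    """Return (post_image, fact_checked_image, distance) triples within the threshold."""
    post_hashes = phash_dir(post_images_dir)
    known_hashes = phash_dir(factchecked_dir)
    matches = []
    for post_name, post_hash in post_hashes.items():
        for known_name, known_hash in known_hashes.items():
            distance = post_hash - known_hash  # Hamming distance between hashes
            if distance <= HAMMING_THRESHOLD:
                matches.append((post_name, known_name, distance))
    return matches


if __name__ == "__main__":
    for post, known, dist in match_fauxtography("post_images", "factchecked_images"):
        print(f"{post} ~ {known} (distance {dist})")
```

Perceptual hashes are robust to re-encoding, resizing, and small crops, which is why Hamming distance on pHashes is a common first-pass filter before any heavier visual comparison.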




Read also

Web archiving services play an increasingly important role in today's information ecosystem, by ensuring the continuing availability of information, or by deliberately caching content that might get deleted or removed. Among these, the Wayback Machine has been proactively archiving, since 200
Lu Cheng, Ruocheng Guo, Kai Shu (2020)
Recent years have witnessed remarkable progress towards computational fake news detection. To mitigate its negative impact, we argue that it is critical to understand what user attributes potentially cause users to share fake news. The key to this causal-inference problem is to identify confounders -- variables that cause spurious associations between treatments (e.g., user attributes) and outcome (e.g., user susceptibility). In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities. Learning such user behavior is typically subject to selection bias in users who are susceptible to share news on social media. Drawing on causal inference theories, we first propose a principled approach to alleviating selection bias in fake news dissemination. We then consider the learned unbiased fake news sharing behavior as the surrogate confounder that can fully capture the causal links between user attributes and user susceptibility. We theoretically and empirically characterize the effectiveness of the proposed approach and find that it could be useful in protecting society from the perils of fake news.
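As a rough illustration of the selection-bias issue raised above, the sketch below applies inverse propensity weighting, a standard generic correction that reweights users by their estimated probability of being observed sharing. It is not the authors' method; the data and feature names are synthetic placeholders.

```python
# Generic sketch: inverse propensity weighting (IPW) to reduce selection bias
# when learning sharing behavior only from users observed on social media.
# This is NOT the paper's specific approach; data below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression


def ipw_weights(user_features, observed):
    """Estimate P(observed | features) and return inverse-propensity weights."""
    propensity_model = LogisticRegression(max_iter=1000)
    propensity_model.fit(user_features, observed)
    propensity = propensity_model.predict_proba(user_features)[:, 1]
    propensity = np.clip(propensity, 1e-3, 1.0)  # avoid exploding weights
    return 1.0 / propensity


# Toy usage: 200 users, 5 hypothetical attributes, binary "observed sharer" flag.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
observed = (rng.random(200) < 0.3).astype(int)
weights = ipw_weights(X, observed)
# Downstream, `weights` would be passed as sample_weight when fitting the
# sharing-behavior model, so under-represented users count proportionally more.
```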
Social media provides many opportunities to monitor and evaluate political phenomena such as referendums and elections. In this study, we propose a set of approaches to analyze long-running political events on social media with a real-world experiment: the debate about Brexit, i.e., the process through which the United Kingdom activated the option of leaving the European Union. We address the following research questions: Could Twitter-based stance classification be used to demonstrate public stance with respect to political events? What is the most efficient and comprehensive approach to measuring the impact of politicians on social media? Which of the polarized sides of the debate is more responsive to politician messages and the main issues of the Brexit process? What is the share of bot accounts in the Brexit discussion and which side are they for? By combining the user stance classification, topic discovery, sentiment analysis, and bot detection, we show that it is possible to obtain useful insights about political phenomena from social media data. We are able to detect relevant topics in the discussions, such as the demand for a new referendum, and to understand the position of social media users with respect to the different topics in the debate. Our comparative and temporal analysis of political accounts can detect the critical periods of the Brexit process and the impact they have on the debate.
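For the stance-classification component mentioned above, a minimal hedged sketch is shown below: a TF-IDF plus logistic-regression classifier over tweets labelled leave/remain. The tiny training set and the model choice are purely illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch of Twitter-based stance classification (Leave vs. Remain).
# Training examples and model choice are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_tweets = [
    "Time to take back control, vote leave",
    "Leaving the EU will free us from Brussels",
    "Brexit will wreck the economy, we must remain",
    "Proud to be European, remain in the EU",
]
train_stance = ["leave", "leave", "remain", "remain"]

# TF-IDF over unigrams and bigrams, then a linear classifier.
stance_clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
stance_clf.fit(train_tweets, train_stance)

print(stance_clf.predict(["We should stay in the single market"]))
```

In practice such a classifier would be trained on a much larger labelled corpus and combined with the topic, sentiment, and bot-detection signals described in the abstract.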
The ever-increasing amount of information flowing through Social Media forces the members of these networks to compete for attention and influence by relying on other people to spread their message. A large study of information propagation within Twitter reveals that the majority of users act as passive information consumers and do not forward the content to the network. Therefore, in order for individuals to become influential they must not only obtain attention and thus be popular, but also overcome user passivity. We propose an algorithm that determines the influence and passivity of users based on their information forwarding activity. An evaluation performed with a 2.5 million user dataset shows that our influence measure is a good predictor of URL clicks, outperforming several other measures that do not explicitly take user passivity into account. We also explicitly demonstrate that high popularity does not necessarily imply high influence and vice versa.
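The abstract describes an iterative influence/passivity score computed from forwarding activity. As a hedged approximation, the sketch below alternates HITS-style updates over a forwarding matrix; the paper's actual update rules are more specific (they account for content a user receives but does not forward), so treat this as a conceptual sketch only.

```python
# HITS-style sketch of alternating influence/passivity scores over a
# "forwarding" graph. Conceptual only: the paper's update rules derive edge
# weights from forwarding acceptance/rejection rates, which this omits.
import numpy as np


def influence_passivity(forward_matrix, iterations=50):
    """forward_matrix[i, j]: how often user j forwarded content from user i."""
    n = forward_matrix.shape[0]
    influence = np.ones(n)
    passivity = np.ones(n)
    for _ in range(iterations):
        # Influence grows when users (weighted by their passivity) forward your content.
        influence = forward_matrix @ passivity
        # Passivity here is a simple proxy propagated back along incoming edges.
        passivity = forward_matrix.T @ influence
        # Normalize so scores stay comparable across iterations.
        influence /= influence.sum() or 1.0
        passivity /= passivity.sum() or 1.0
    return influence, passivity


# Toy example: 3 users; user 0 gets forwarded a lot, user 2 forwards a lot.
F = np.array([[0.0, 2.0, 5.0],
              [1.0, 0.0, 3.0],
              [0.0, 1.0, 0.0]])
inf, pas = influence_passivity(F)
print("influence:", inf.round(3), "passivity:", pas.round(3))
```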
Giancarlo Ruffo (2021)
The history of journalism and news diffusion is tightly coupled with the effort to dispel hoaxes, misinformation, propaganda, unverified rumours, poor reporting, and messages containing hate and divisions. With the explosive growth of online social media and billions of individuals engaged with consuming, creating, and sharing news, this ancient problem has surfaced with a renewed intensity threatening our democracies, public health, and news outlets' credibility. This has triggered many researchers to develop new methods for studying, understanding, detecting, and preventing fake-news diffusion; as a consequence, thousands of scientific papers have been published in a relatively short period, leaving researchers from different disciplines struggling to identify open problems and the most relevant trends. The aim of this survey is threefold: first, we want to provide researchers interested in this multidisciplinary and challenging area with a network-based analysis of the existing literature to assist them with a visual exploration of papers that can be of interest; second, we present a selection of the main results achieved so far, adopting the network as a unifying framework to represent and make sense of data, to model diffusion processes, and to evaluate different debunking strategies. Finally, we present an outline of the most relevant research trends focusing on the moving target of fake-news, bot, and troll identification by means of data mining and text technologies; although scholars working on computational linguistics and networks traditionally belong to different scientific communities, we expect that forthcoming computational approaches to prevent fake news from polluting social media must be developed using hybrid and up-to-date methodologies.