
VaccinItaly: monitoring Italian conversations around vaccines on Twitter and Facebook

Published by: Francesco Pierri
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





We present VaccinItaly, a project which monitors Italian online conversations around vaccines on Twitter and Facebook. We describe the ongoing data collection, which follows the SARS-CoV-2 vaccination campaign roll-out in Italy, and we provide public access to the data collected. We show results from a preliminary analysis of the spread of low- and high-credibility news shared alongside vaccine-related conversations on both social media platforms. We also investigate the content of the most popular YouTube videos and encounter several cases of harmful and misleading content about vaccines. Finally, we geolocate Twitter users who discuss vaccines and correlate their activity with open data statistics on vaccine uptake. We make up-to-date results available to the public through an interactive online dashboard associated with the project. The goal of our project is to gain further understanding of the interplay between the public discourse on online social media and the dynamics of vaccine uptake in the real world.
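To illustrate the final step of the abstract (relating geolocated Twitter activity to vaccine-uptake open data), the minimal sketch below computes a rank correlation between per-region tweet counts and administered doses. The region names and all numbers are hypothetical placeholders, not VaccinItaly data, and the use of Spearman's rank correlation is an assumption rather than the project's stated method.

```python
# Hedged sketch: correlate per-region counts of vaccine-related tweets
# (from geolocated users) with official vaccine-uptake statistics.
# The numbers below are hypothetical placeholders, not VaccinItaly data.
from scipy.stats import spearmanr

# Hypothetical vaccine-related tweet counts per Italian region.
tweets_per_region = {
    "Lombardia": 12500, "Lazio": 9800, "Campania": 7400,
    "Sicilia": 5100, "Veneto": 6900, "Toscana": 4800,
}

# Hypothetical administered doses per 100 inhabitants, per region.
doses_per_100 = {
    "Lombardia": 18.2, "Lazio": 17.5, "Campania": 15.1,
    "Sicilia": 14.3, "Veneto": 17.9, "Toscana": 16.8,
}

regions = sorted(tweets_per_region)
activity = [tweets_per_region[r] for r in regions]
uptake = [doses_per_100[r] for r in regions]

rho, pval = spearmanr(activity, uptake)
print(f"Spearman correlation between tweet activity and uptake: {rho:.2f} (p={pval:.3f})")
```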




Read also

Facebook's News Feed personalization algorithm has a significant daily impact on the lifestyle, mood and opinions of millions of Internet users. Nonetheless, the behavior of such algorithms usually lacks transparency, motivating measurements, modeling and analysis in order to understand and improve their properties. In this paper, we propose a reproducible methodology encompassing measurements and an analytical model to capture the visibility of publishers over a News Feed. First, measurements are used to parameterize and to validate the expressive power of the proposed model. Then, we conduct a what-if analysis to assess the visibility bias incurred by the users against a baseline derived from the model. Our results indicate that a significant bias exists and that it is more prominent at the top position of the News Feed. In addition, we found that the bias is non-negligible even for users who are deliberately set as neutral with respect to their political views.
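The notion of visibility bias in the abstract above can be illustrated with a toy calculation; this is not the paper's analytical model, and the publisher names and counts are invented. The idea is to compare each publisher's observed share of top News Feed slots against a baseline proportional to its share of published posts.

```python
# Hedged toy calculation of visibility bias (not the paper's actual model):
# compare each publisher's observed share of top News Feed slots with a
# baseline proportional to its share of published posts. Numbers are invented.
posts_published = {"pub_left": 120, "pub_center": 100, "pub_right": 80}
top_slot_appearances = {"pub_left": 260, "pub_center": 90, "pub_right": 50}

total_posts = sum(posts_published.values())
total_slots = sum(top_slot_appearances.values())

for pub in posts_published:
    expected = posts_published[pub] / total_posts        # baseline share
    observed = top_slot_appearances[pub] / total_slots   # measured share
    bias = observed - expected
    print(f"{pub}: expected {expected:.2f}, observed {observed:.2f}, bias {bias:+.2f}")
```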
The ongoing Coronavirus (COVID-19) pandemic highlights the inter-connectedness of our present-day globalized world. With social distancing policies in place, virtual communication has become an important source of (mis)information. As an increasing number of people rely on social media platforms for news, identifying misinformation and uncovering the nature of online discourse around COVID-19 has emerged as a critical task. To this end, we collected streaming data related to COVID-19 using the Twitter API, starting March 1, 2020. We identified unreliable and misleading content based on fact-checking sources, and examined the narratives promoted in misinformation tweets, along with the distribution of engagements with these tweets. In addition, we provide examples of the spreading patterns of prominent misinformation tweets. The analysis is presented and updated on a publicly accessible dashboard (https://usc-melady.github.io/COVID-19-Tweet-Analysis) to track the nature of online discourse and misinformation about COVID-19 on Twitter from March 1 to June 5, 2020. The dashboard provides a daily list of identified misinformation tweets, along with topics, sentiments, and emerging trends in the COVID-19 Twitter discourse. The dashboard is provided to improve visibility into the nature and quality of information shared online, and to provide real-time access to insights and information extracted from the dataset.
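A minimal sketch of the URL-matching step described in the abstract above: flag tweets whose expanded links point to domains appearing on a low-credibility list. The domain names and tweet records here are hypothetical examples; the actual study relies on curated fact-checking sources.

```python
# Hedged sketch: flag tweets linking to low-credibility domains.
# The domain list and tweets below are hypothetical examples; a real
# pipeline would use curated fact-checking / source-rating lists.
from urllib.parse import urlparse

LOW_CREDIBILITY_DOMAINS = {"fakenewssite.example", "misleading.example"}

def registered_domain(url: str) -> str:
    """Return the hostname of a URL without a leading 'www.'."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def is_low_credibility(tweet: dict) -> bool:
    """True if any expanded URL in the tweet matches the domain list."""
    return any(
        registered_domain(u) in LOW_CREDIBILITY_DOMAINS
        for u in tweet.get("expanded_urls", [])
    )

tweets = [
    {"id": 1, "expanded_urls": ["https://www.fakenewssite.example/story"]},
    {"id": 2, "expanded_urls": ["https://www.who.int/news"]},
]
flagged = [t["id"] for t in tweets if is_low_credibility(t)]
print("Flagged tweet ids:", flagged)
```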
The Covid-19 pandemic has had a deep impact on the lives of the entire world population, inducing a broad societal debate. As in other contexts, the debate has been the subject of several d/misinformation campaigns; in a quite unprecedented fashion, however, the presence of false information has seriously put public health at risk. In this sense, detecting the presence of malicious narratives and identifying the kinds of users that are more prone to spread them represent the first step towards limiting their persistence. In the present paper we analyse the semantic network observed on Twitter during the first Italian lockdown (induced by the hashtags contained in approximately 1.5 million tweets published between the 23rd of March 2020 and the 23rd of April 2020) and study the extent to which various discursive communities are exposed to d/misinformation arguments. As observed in other studies, the recovered discursive communities largely overlap with traditional political parties, even if the debated topics concern different facets of the management of the pandemic. Although the themes directly related to d/misinformation are a minority of those discussed within our semantic networks, their popularity is unevenly distributed among the various discursive communities.
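The semantic-network construction mentioned above can be illustrated with a much simplified sketch: build a hashtag co-occurrence graph from tweets and extract communities by modularity. The example hashtag sets are invented, and the greedy modularity clustering is a stand-in for the statistically validated network used in the study, not the authors' exact procedure.

```python
# Hedged sketch: hashtag co-occurrence network with modularity communities.
# This is a simplified stand-in for the paper's validated semantic network;
# the hashtag sets below are invented examples.
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

tweet_hashtags = [
    ["covid19", "lockdown", "governo"],
    ["covid19", "vaccini"],
    ["lockdown", "governo"],
    ["vaccini", "bigpharma"],
    ["bigpharma", "complotto"],
]

G = nx.Graph()
for tags in tweet_hashtags:
    for a, b in combinations(sorted(set(tags)), 2):
        # Edge weight counts how often two hashtags co-occur in a tweet.
        w = G.get_edge_data(a, b, {}).get("weight", 0) + 1
        G.add_edge(a, b, weight=w)

communities = greedy_modularity_communities(G, weight="weight")
for i, community in enumerate(communities):
    print(f"community {i}: {sorted(community)}")
```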
On social media, algorithms for content promotion that account for users' preferences might limit the exposure to unsolicited content. In this work, we study how the same content (videos) is consumed on different platforms -- i.e. Facebook and YouTube -- over a sample of $12M$ users. Our findings show that the same content leads to the formation of echo chambers, irrespective of the online social network and thus of the algorithm for content promotion. Finally, we show that users' commenting patterns are accurate early predictors of the formation of echo chambers.
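One common operationalization of the echo-chamber signature discussed above is a bimodal distribution of users' mean content leaning inferred from their comments; the sketch below is only an illustrative proxy under that assumption, with invented comment records rather than the study's ~12M-user dataset.

```python
# Hedged sketch: a common echo-chamber signature is a bimodal distribution
# of users' mean content leaning (e.g. -1 = one narrative, +1 = the other).
# The comment records below are invented.
from collections import defaultdict
import numpy as np

# (user, leaning of the video the user commented on)
comments = [
    ("u1", -0.9), ("u1", -0.8), ("u2", -0.7),
    ("u3", 0.8), ("u3", 0.9), ("u4", 0.95), ("u5", -0.85),
]

per_user = defaultdict(list)
for user, leaning in comments:
    per_user[user].append(leaning)

user_leaning = np.array([np.mean(v) for v in per_user.values()])
hist, edges = np.histogram(user_leaning, bins=4, range=(-1, 1))
print("user-leaning histogram (bins from -1 to 1):", hist)
# Two well-separated peaks (here near -1 and +1) suggest echo chambers.
```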
Recent studies, targeting Facebook, showed the tendency of users to interact with information adhering to their preferred narrative and to ignore dissenting information. Primarily driven by confirmation bias, users tend to join polarized clusters where they cooperate to reinforce a like-minded system of beliefs, thus facilitating fake news and misinformation cascades. To gain a deeper understanding of these phenomena, in this work we analyze the lexicons used by the communities of users emerging on Facebook around verified and unverified content. We show how the lexical approach provides important insights about the kind of information processed by the two communities of users and about their overall sentiment. Furthermore, by focusing on comment threads, we observe a strong positive correlation between the lexical convergence of co-commenters and their number of interactions, which in turn suggests that such a trend could be a proxy for the emergence of collective identities and polarization in opinion dynamics.
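The correlation reported in the abstract above can be sketched by measuring lexical convergence as vocabulary overlap between co-commenters and relating it to how often they interact. All vocabularies, interaction counts, and the Jaccard-based measure below are illustrative assumptions, not the paper's exact metric.

```python
# Hedged sketch: relate the lexical convergence of pairs of co-commenters
# (Jaccard overlap of their vocabularies) to how often they interact.
# All data below are invented; the measure is an illustrative proxy.
from scipy.stats import spearmanr

user_vocab = {
    "a": {"vaccine", "trial", "data", "efficacy"},
    "b": {"vaccine", "data", "efficacy", "dose"},
    "c": {"hoax", "plot", "media"},
    "d": {"hoax", "media", "truth"},
}

# Number of reply/comment interactions between each pair of users.
interactions = {("a", "b"): 14, ("a", "c"): 1, ("b", "d"): 2, ("c", "d"): 9}

def jaccard(x, y):
    """Lexical convergence as vocabulary overlap."""
    return len(x & y) / len(x | y)

convergence = [jaccard(user_vocab[u], user_vocab[v]) for u, v in interactions]
counts = list(interactions.values())
rho, p = spearmanr(convergence, counts)
print(f"Spearman correlation between convergence and interactions: {rho:.2f}")
```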