
Artificial intelligence for elections: the case of 2019 Argentina primary and presidential election

Posted by: Hernan A. Makse
Publication date: 2019
Language: English





We use a method based on machine learning, big-data analytics, and network theory to process millions of messages posted on Twitter to predict election outcomes. The model achieved accurate results in the Argentina primary presidential election of August 11, 2019, predicting the large-margin win of candidate Alberto Fernandez over president Mauricio Macri, a result that none of the traditional pollsters in the country was able to predict and that led to a major bond-market collapse. Applying the model to the upcoming Argentina presidential election of October 27, 2019 yields the following results: Fernandez 47.5%, Macri 30.9% and third parties 21.6%. Our method improves over traditional polling, which relies on direct interactions with a small number of individuals and is plagued by ever-declining response rates, currently in the low single digits. It provides a reliable polling method that can be applied not only to predict elections but also to discover any trend in society, for instance, what people think about climate change, politics, or education.
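The basic idea, classifying individual tweets by expressed support and aggregating them into vote shares, can be illustrated with a minimal sketch. The classifier `classify_support`, its keyword rules, and the candidate labels below are hypothetical placeholders for illustration only, not the authors' actual model, which combines machine learning with network analysis.

```python
# Minimal sketch: turn per-tweet support labels into predicted vote shares.
# `classify_support` is a hypothetical stand-in for a trained opinion model.
from collections import Counter
from typing import Iterable

CANDIDATES = ("Fernandez", "Macri", "third")

def classify_support(tweet_text: str) -> str:
    """Toy keyword rule standing in for a supervised classifier."""
    text = tweet_text.lower()
    if "fernandez" in text:
        return "Fernandez"
    if "macri" in text:
        return "Macri"
    return "third"

def predict_vote_shares(tweets: Iterable[str]) -> dict:
    """Aggregate per-tweet support labels into percentage vote shares."""
    counts = Counter(classify_support(t) for t in tweets)
    total = sum(counts[c] for c in CANDIDATES) or 1
    return {c: 100.0 * counts[c] / total for c in CANDIDATES}

if __name__ == "__main__":
    sample = ["Vamos Fernandez!", "Macri 2019", "undecided this year"]
    print(predict_vote_shares(sample))
```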


Read also

It is a widely accepted fact that state-sponsored Twitter accounts operated during the 2016 US presidential election, spreading millions of tweets with misinformation and inflammatory political content. Whether these social media campaigns of the so-called troll accounts were able to manipulate public opinion is still in question. Here we aim to quantify the influence of troll accounts and the impact they had on Twitter by analyzing 152.5 million tweets from 9.9 million users, including 822 troll accounts. The data, collected during the US election campaign, contain original troll tweets before they were deleted by Twitter. From these data we constructed a very large interaction graph: a directed graph of 9.3 million nodes and 169.9 million edges. Recently, Twitter released datasets on the misinformation campaigns of 8,275 state-sponsored accounts linked to Russia, Iran and Venezuela as part of the investigation on the foreign interference in the 2016 US election. These data serve as a ground-truth identifier of troll users in our dataset. Using graph analysis techniques we quantify the diffusion cascades of web and media content that have been shared by the troll accounts. We present strong evidence that authentic users were the source of the viral cascades. Although the trolls participated in the viral cascades, they did not have a leading role in them, and only four troll accounts were truly influential.
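As a rough illustration of the kind of interaction-graph analysis this study describes, the sketch below builds a small directed graph with networkx and uses PageRank as a simple proxy for influence. The toy edge list, account names, and the choice of PageRank are assumptions for illustration, not the paper's actual cascade analysis.

```python
# Minimal sketch: build a directed interaction graph and compare the
# influence of labeled troll accounts against authentic users.
import networkx as nx

# (source_user, target_user) edges, e.g. "source retweeted/replied to target".
edges = [
    ("alice", "bob"), ("bob", "carol"), ("carol", "troll_1"),
    ("dave", "bob"), ("troll_1", "bob"),
]
troll_accounts = {"troll_1"}  # ground-truth labels, e.g. from Twitter's release

G = nx.DiGraph()
G.add_edges_from(edges)

# PageRank as one simple proxy for influence in the interaction graph.
influence = nx.pagerank(G)

troll_influence = {u: s for u, s in influence.items() if u in troll_accounts}
authentic_top = max(
    (u for u in influence if u not in troll_accounts), key=influence.get
)
print("troll influence scores:", troll_influence)
print("most influential authentic user:", authentic_top)
```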
The dynamics and influence of fake news on Twitter during the 2016 US presidential election remains to be clarified. Here, we use a dataset of 171 million tweets in the five months preceding the election day to identify 30 million tweets, from 2.2 million users, which contain a link to news outlets. Based on a classification of news outlets curated by www.opensources.co, we find that 25% of these tweets spread either fake or extremely biased news. We characterize the networks of information flow to find the most influential spreaders of fake and traditional news and use causal modeling to uncover how fake news influenced the presidential election. We find that, while top influencers spreading traditional center and left leaning news largely influence the activity of Clinton supporters, this causality is reversed for the fake news: the activity of Trump supporters influences the dynamics of the top fake news spreaders.
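The "causal modeling" mentioned above is not specified in this summary; as a hedged stand-in, the sketch below runs a Granger-causality test between two synthetic activity time series (supporter activity vs. fake-news-spreader activity) with statsmodels. The data, lag structure, and choice of Granger causality are assumptions, not the paper's method or results.

```python
# Illustrative sketch: does supporter activity help predict spreader activity?
# Synthetic data only; Granger causality is used as a generic stand-in.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
supporter_activity = rng.poisson(lam=20, size=200).astype(float)
# Synthetic spreader activity that lags supporter activity by one step.
spreader_activity = np.roll(supporter_activity, 1) + rng.normal(0, 1, 200)

# Column order matters: the test asks whether the 2nd column (supporters)
# helps predict the 1st column (spreaders).
data = np.column_stack([spreader_activity, supporter_activity])
results = grangercausalitytests(data, maxlag=2)
p_value = results[1][0]["ssr_ftest"][1]
print(f"lag-1 Granger p-value: {p_value:.3g}")
```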
Identifying and characterizing disinformation in political discourse on social media is critical to ensure the integrity of elections and democratic processes around the world. Persistent manipulation of social media has resulted in increased concerns regarding the 2020 U.S. Presidential Election, due to its potential to influence individual opinions and social dynamics. In this work, we focus on the identification of distorted facts, in the form of unreliable and conspiratorial narratives in election-related tweets, to characterize discourse manipulation prior to the election. We apply a detection model to separate factual from unreliable (or conspiratorial) claims analyzing a dataset of 242 million election-related tweets. The identified claims are used to investigate targeted topics of disinformation, and conspiracy groups, most notably the far-right QAnon conspiracy group. Further, we characterize account engagements with unreliable and conspiracy tweets, and with the QAnon conspiracy group, by political leaning and tweet types. Finally, using a regression discontinuity design, we investigate whether Twitter's actions to curb QAnon activity on the platform were effective, and how QAnon accounts adapt to Twitter's restrictions.
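The regression-discontinuity design mentioned above can be sketched generically: fit an outcome on time, a post-intervention indicator, and their interaction, and read the jump off the indicator's coefficient. Everything below (the data, variable names, and the size of the drop) is synthetic illustration, not the paper's estimates.

```python
# Minimal regression-discontinuity sketch around an intervention at day 0.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
days = np.arange(-30, 31)            # days relative to the intervention
post = (days >= 0).astype(float)     # 1 after the intervention, 0 before
# Synthetic daily activity counts with a drop at day 0.
activity = 100 + 0.5 * days - 40 * post + rng.normal(0, 5, days.size)

X = sm.add_constant(np.column_stack([days, post, days * post]))
model = sm.OLS(activity, X).fit()
# The coefficient on `post` estimates the discontinuity (jump) at day 0.
print(model.params)
```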
There is currently no easy way to fact-check content on WhatsApp and other end-to-end encrypted platforms at scale. In this paper, we analyze the usefulness of a crowd-sourced tipline through which users can submit content (tips) that they want fact-checked. We compare the tips sent to a WhatsApp tipline run during the 2019 Indian national elections with the messages circulating in large, public groups on WhatsApp and other social media platforms during the same period. We find that tiplines are a very useful lens into WhatsApp conversations: a significant fraction of messages and images sent to the tipline match with the content being shared on public WhatsApp groups and other social media. Our analysis also shows that tiplines cover the most popular content well, and a majority of such content is often shared to the tipline before appearing in large, public WhatsApp groups. Overall, our findings suggest tiplines can be an effective source for discovering content to fact-check.
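As a hedged sketch of the matching step described above (comparing tipline submissions to messages from public groups), the snippet below uses TF-IDF cosine similarity; the paper's actual matching approach, especially for images, is not reproduced, and the example texts are invented.

```python
# Minimal sketch: match tipline tips to public-group messages by text similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

tipline_tips = ["Forwarded claim about candidate X and free phones"]
public_messages = [
    "Candidate X promises free phones to every voter, share widely!",
    "Weather alert for Mumbai this weekend",
]

vectorizer = TfidfVectorizer().fit(tipline_tips + public_messages)
tip_vecs = vectorizer.transform(tipline_tips)
pub_vecs = vectorizer.transform(public_messages)

# For each tip, report the best-matching public message and its similarity.
scores = cosine_similarity(tip_vecs, pub_vecs)
for tip, row in zip(tipline_tips, scores):
    best = row.argmax()
    print(f"tip matches: {public_messages[best]!r} (score={row[best]:.2f})")
```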
Today, an estimated 75% of the British public access information about politics and public life online, and 40% do so via social media. With this context in mind, we investigate information sharing patterns over social media in the lead-up to the 2019 UK General Elections, and ask: (1) What type of political news and information were social media users sharing on Twitter ahead of the vote? (2) How much of it is extremist, sensationalist, or conspiratorial junk news? (3) How much public engagement did these sites get on Facebook in the weeks leading up to the vote? And (4) What are the most common narratives and themes relayed by junk news outlets?