
Determining the Veracity of Rumours on Twitter

Published by: Dr Georgios Giasemidis
Publication date: 2016
Research language: English





While social networks can provide an ideal platform for up-to-date information from individuals across the world, they have also proved to be a place where rumours fester and accidental or deliberate misinformation often emerges. In this article, we aim to support the task of making sense of social media data, and specifically, seek to build an autonomous message-classifier that filters relevant and trustworthy information from Twitter. For our work, we collected about 100 million public tweets, including users' past tweets, from which we identified 72 rumours (41 true, 31 false). We considered over 80 trustworthiness measures including the author's profile and past behaviour, the social network connections (graphs), and the content of the tweets themselves. We ran modern machine-learning classifiers over those measures to produce trustworthiness scores at various time windows from the outbreak of the rumour. Such time windows were key as they allowed useful insight into the progression of the rumours. From our findings, we identified that our model was significantly more accurate than similar studies in the literature. We also identified critical attributes of the data that give rise to the trustworthiness scores assigned. Finally, we developed a software demonstration that provides a visual user interface to allow the user to examine the analysis.
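
To make the pipeline concrete, below is a minimal sketch of the time-windowed, feature-based classification described in the abstract. The specific features, the data layout, and the choice of a random forest classifier are illustrative assumptions rather than the paper's exact set of 80-plus measures:

# Minimal sketch: aggregate a few trustworthiness measures up to a time
# window after the rumour's outbreak and cross-validate a classifier.
# Feature names and DataFrame columns are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def features_up_to(tweets: pd.DataFrame, outbreak, window_hours: float) -> dict:
    """Aggregate example measures over tweets posted within `window_hours`
    of the rumour's outbreak."""
    cutoff = outbreak + pd.Timedelta(hours=window_hours)
    w = tweets[tweets["created_at"] <= cutoff]
    return {
        "n_tweets": len(w),
        "mean_followers": w["followers"].mean(),
        "verified_frac": w["verified"].mean(),
        "url_frac": w["has_url"].mean(),
    }

def evaluate_window(rumours, window_hours: float) -> float:
    """rumours: list of (tweets DataFrame, label) pairs, label 1 = true, 0 = false.
    Returns mean cross-validated accuracy for this time window."""
    X, y = [], []
    for tweets, label in rumours:
        outbreak = tweets["created_at"].min()
        X.append(list(features_up_to(tweets, outbreak, window_hours).values()))
        y.append(label)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(clf, np.array(X), np.array(y), cv=5).mean()

A loop over increasing window lengths (for example 1, 6, 12 and 24 hours after the outbreak) would then reproduce the idea of scoring trustworthiness as the rumour progresses.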




Read also

Recent work in the domain of misinformation detection has leveraged rich signals in the text and user identities associated with content on social media. But text can be strategically manipulated and accounts reopened under different aliases, suggesting that these approaches are inherently brittle. In this work, we investigate an alternative modality that is naturally robust: the pattern in which information propagates. Can the veracity of an unverified rumor spreading online be discerned solely on the basis of its pattern of diffusion through the social network? Using graph kernels to extract complex topological information from Twitter cascade structures, we train accurate predictive models that are blind to language, user identities, and time, demonstrating for the first time that such sanitized diffusion patterns are highly informative of veracity. Our results indicate that, with proper aggregation, the collective sharing pattern of the crowd may reveal powerful signals of rumor truth or falsehood, even in the early stages of propagation.
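
As a rough illustration of the kernel-over-cascades pipeline, the sketch below uses a simple out-degree-histogram kernel as a stand-in for the richer graph kernels (e.g. Weisfeiler-Lehman style) that the work describes; the cascade representation, field names, and SVM choice are assumptions:

# Sketch: summarise each Twitter cascade (a networkx DiGraph) by its
# out-degree histogram, build a Gram matrix of inner products, and fit
# an SVM with a precomputed kernel to predict veracity.
import numpy as np
import networkx as nx
from sklearn.svm import SVC

def degree_histogram(g: nx.DiGraph, max_degree: int = 50) -> np.ndarray:
    """Normalised histogram of out-degrees, a crude summary of cascade shape."""
    h = np.zeros(max_degree + 1)
    for _, d in g.out_degree():
        h[min(d, max_degree)] += 1
    return h / max(h.sum(), 1.0)

def gram_matrix(cascades) -> np.ndarray:
    """Kernel (Gram) matrix: inner products between cascade summaries."""
    feats = np.stack([degree_histogram(g) for g in cascades])
    return feats @ feats.T

def fit_veracity_classifier(cascades, labels):
    K = gram_matrix(cascades)
    return SVC(kernel="precomputed").fit(K, labels)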
We study the relationship between the sentiment levels of Twitter users and the evolving network structure that the users created by @-mentioning each other. We use a large dataset of tweets to which we apply three sentiment scoring algorithms, including the open source SentiStrength program. Specifically we make three contributions. Firstly, we find that people who have potentially the largest communication reach (according to a dynamic centrality measure) use sentiment differently than the average user: for example, they use positive sentiment more often and negative sentiment less often. Secondly, we find that when we follow structurally stable Twitter communities over a period of months, their sentiment levels are also stable, and sudden changes in community sentiment from one day to the next can in most cases be traced to external events affecting the community. Thirdly, based on our findings, we create and calibrate a simple agent-based model that is capable of reproducing measures of emotive response comparable to those obtained from our empirical dataset.
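
A minimal sketch of the first contribution, comparing the sentiment of highly central users with that of the average user; PageRank stands in here for the paper's dynamic centrality measure, and the tweet fields are assumptions:

# Sketch: build the @-mention graph, rank users by centrality, and compare
# the mean sentiment of the most central users against everyone else.
import networkx as nx
import numpy as np

def mention_graph(tweets):
    """tweets: iterable of dicts with 'author' and 'mentions' (list of users)."""
    g = nx.DiGraph()
    for t in tweets:
        for m in t["mentions"]:
            g.add_edge(t["author"], m)
    return g

def sentiment_by_centrality(tweets, top_frac=0.01):
    g = mention_graph(tweets)
    rank = nx.pagerank(g)                       # stand-in for dynamic centrality
    cutoff = np.quantile(list(rank.values()), 1 - top_frac)
    top_users = {u for u, r in rank.items() if r >= cutoff}
    top = [t["sentiment"] for t in tweets if t["author"] in top_users]
    rest = [t["sentiment"] for t in tweets if t["author"] not in top_users]
    return np.mean(top), np.mean(rest)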
The role of social media in opinion formation has far-reaching implications in all spheres of society. Though social media provide platforms for expressing news and views, it is hard to control the quality of posts due to the sheer volume of posts on platforms like Twitter and Facebook. Misinformation and rumours have lasting effects on society, as they tend to influence people's opinions and may also motivate people to act irrationally. It is therefore very important to detect and remove rumours from these platforms. The only way to prevent the spread of rumours is through automatic detection and classification of social media posts. Our focus in this paper is the Twitter social medium, as it is relatively easy to collect data from Twitter. The majority of previous studies used supervised learning approaches to classify rumours on Twitter. These approaches rely on feature extraction to obtain both content and context features from the text of tweets to distinguish rumours from non-rumours. Manually extracting features, however, is time-consuming considering the volume of tweets. We propose a novel approach to deal with this problem by utilising sentence embedding with BERT to identify rumours on Twitter, rather than the usual feature extraction techniques. We use BERT sentence embeddings to represent each tweet's sentences as a vector according to the contextual meaning of the tweet. We classify those vectors into rumours or non-rumours using various supervised learning techniques. Our BERT-based models improved the accuracy by approximately 10% compared to previous methods.
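
The embed-then-classify idea can be sketched as follows; the sentence-transformers model name and the logistic-regression classifier are assumptions, and any BERT-family encoder could be substituted:

# Sketch: encode each tweet with a BERT-style sentence encoder, then train
# a standard supervised classifier on the resulting vectors.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def train_rumour_classifier(tweets, labels):
    encoder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed model choice
    X = encoder.encode(tweets)                          # one vector per tweet
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.2, random_state=0, stratify=labels)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    return encoder, clf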
Background. In Italy, in recent years, vaccination coverage for key immunizations such as MMR has been declining to worryingly low levels. In 2017, the Italian Government expanded the number of mandatory immunizations, introducing penalties for the families of unvaccinated children. During the 2018 general elections campaign, immunization policy entered the political debate, with the Government in charge blaming the opposition for fuelling vaccine scepticism. A new Government established in 2018 temporarily relaxed the penalties. Objectives and Methods. Using a sentiment analysis on tweets posted in Italian during 2018, we aimed to: (i) characterize the temporal flow of vaccine communication on Twitter, (ii) evaluate the polarity of vaccination opinions and the usefulness of Twitter data to estimate vaccination parameters, and (iii) investigate whether the contrasting announcements at the highest political level might have generated disorientation amongst the Italian public. Results. Vaccine-related interactions between tweeters peaked in response to the main political events. Of the retained tweets, 70.0% were favourable to vaccination, 16.5% unfavourable, and 13.6% undecided. The smoothed time series of polarity proportions exhibits frequent large changes in the favourable proportion, enhanced by an up-and-down trend synchronized with the switch between governments, suggesting evidence of disorientation among the public. Conclusion. The reported evidence of disorientation shows that critical immunization topics should never be used for political consensus. This is especially true given the increasing role of online social media as an information source, which might create social pressures that are eventually harmful to vaccine uptake, and is worsened by the lack of institutional presence on Twitter, calling for efforts to counter misinformation and the ensuing spread of hesitancy.
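
The smoothed polarity-proportion time series mentioned in the Results could be computed along the following lines; the column names and the smoothing window are assumptions:

# Sketch: daily shares of favourable / unfavourable / undecided tweets,
# smoothed with a rolling mean.
import pandas as pd

def polarity_proportions(df: pd.DataFrame, window_days: int = 7) -> pd.DataFrame:
    """df needs a datetime 'created_at' column and a 'polarity' column with
    values in {'favourable', 'unfavourable', 'undecided'}."""
    daily = (df.groupby([df["created_at"].dt.date, "polarity"])
               .size()
               .unstack(fill_value=0))
    props = daily.div(daily.sum(axis=1), axis=0)             # daily proportions
    return props.rolling(window_days, min_periods=1).mean()  # smoothed series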
Social media is a rich source of rumours and corresponding community reactions. Rumours reflect different characteristics, some shared and some individual. We formulate the problem of classifying tweet-level judgements of rumours as a supervised learning task. Both supervised and unsupervised domain adaptation are considered, in which tweets from a rumour are classified on the basis of other annotated rumours. We demonstrate how multi-task learning helps achieve good results on rumours from the 2011 England riots.
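
A minimal sketch of the leave-one-rumour-out evaluation implied by classifying a rumour's tweets on the basis of other annotated rumours; a TF-IDF logistic regression stands in for the paper's multi-task learning model, and the field names are assumptions:

# Sketch: train on every rumour except the held-out one, evaluate on it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def leave_one_rumour_out(tweets, labels, rumour_ids, held_out):
    """tweets: list of tweet texts; labels: judgement labels;
    rumour_ids: which rumour each tweet belongs to."""
    train = [i for i, r in enumerate(rumour_ids) if r != held_out]
    test = [i for i, r in enumerate(rumour_ids) if r == held_out]
    vec = TfidfVectorizer(min_df=2)
    X_tr = vec.fit_transform(tweets[i] for i in train)
    X_te = vec.transform(tweets[i] for i in test)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, [labels[i] for i in train])
    return accuracy_score([labels[i] for i in test], clf.predict(X_te))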