
Automatically applying a credibility appraisal tool to track vaccination-related communications shared on social media

Added by: Zubair Shah
Publication date: 2019
Language: English





Background: Tools used to appraise the credibility of health information are time-consuming to apply and require context-specific expertise, which limits their use for quickly identifying and mitigating the spread of misinformation as it emerges. Our aim was to estimate the proportion of vaccination-related posts on Twitter that were likely to be misinformation, and to measure how unevenly exposure to misinformation was distributed among Twitter users. Methods: Sampling from 144,878 vaccination-related web pages shared on Twitter between January 2017 and March 2018, we used a seven-point checklist adapted from two validated tools to appraise the credibility of a small subset of 474 web pages. These appraisals were used to train several classifiers (random forest, support vector machines, and a recurrent neural network with transfer learning) that use the text of a web page to predict whether the information satisfies each of the seven criteria. Results: Applying the best-performing classifier to the 144,878 web pages, we found that 14.4% of relevant posts linked to web pages of low credibility, and that these posts made up 9.2% of all potential vaccination-related exposures. However, the 100 most popular links to misinformation were potentially seen by between 2 million and 80 million Twitter users, and for a substantial sub-population of Twitter users engaging with vaccination-related information, links to misinformation appeared to dominate the vaccination-related information to which they were exposed. Conclusions: We proposed a new method for automatically appraising the credibility of web pages based on a combination of validated checklist tools. The results suggest that an automatic credibility appraisal tool can be used to find populations at higher risk of exposure to misinformation, or applied proactively to add friction to the sharing of low-credibility vaccination information.
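To make the Methods concrete, the following is a minimal sketch of the per-criterion setup described in the abstract: one binary text classifier per checklist criterion. The variable names (pages, criterion_labels) and the TF-IDF plus random-forest pipeline details are illustrative assumptions, not the authors' exact configuration.

```python
# A minimal sketch of the per-criterion classification setup, in scikit-learn.
# The feature pipeline and hyperparameters are assumptions, not the paper's.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

def train_criterion_classifier(pages, criterion_labels):
    """Train one binary classifier for a single checklist criterion.

    pages: list of web-page texts (e.g., the 474 appraised pages).
    criterion_labels: list of 0/1 labels, 1 if the page satisfies the criterion.
    """
    clf = Pipeline([
        ("tfidf", TfidfVectorizer(max_features=20000, ngram_range=(1, 2))),
        ("forest", RandomForestClassifier(n_estimators=300, random_state=0)),
    ])
    # Cross-validated F1 gives a rough per-criterion performance estimate.
    score = cross_val_score(clf, pages, criterion_labels, cv=5, scoring="f1").mean()
    clf.fit(pages, criterion_labels)
    return clf, score

# One such classifier is trained per criterion; predictions over all
# 144,878 pages are then combined into an overall credibility appraisal.
```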



Related research

Working adults spend nearly one-third of their daily time at their jobs. In this paper, we study job-related social media discourse from a community of users. We use both crowdsourcing and local expertise to train a classifier that detects job-related messages on Twitter. Additionally, we analyze the linguistic differences in a job-related corpus of tweets between individual users and commercial accounts. The volumes of job-related tweets from individual users indicate that people use Twitter with distinct monthly, daily, and hourly patterns. We further show that the moods associated with jobs, positive and negative, have unique diurnal rhythms.
During the COVID-19 pandemic, people began discussing pandemic-related topics on social media. On the subreddit r/COVID19positive, a range of topics is discussed and shared, including the experiences of those who received a positive test result, stories of those who were presumably infected, and questions about the pandemic and the disease. In this study, we try to understand, from a linguistic perspective, the nature of the discussions on the subreddit. We found differences in linguistic characteristics (e.g., psychological, emotional, and reasoning-related) across three different categories of topics. We also classified posts into these categories using state-of-the-art pre-trained language models. Such classification models can be used for pandemic-related research on social media.
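For the classification step this abstract mentions, here is a hedged illustration using Hugging Face transformers. It substitutes an off-the-shelf zero-shot model for the paper's fine-tuned classifiers; the checkpoint and the three candidate topic labels are assumptions for the sake of the example.

```python
# Illustrative only: a zero-shot stand-in for the paper's fine-tuned
# pre-trained language models. Checkpoint and labels are assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

post = "Tested positive yesterday; mild fever so far, day three of symptoms."
labels = ["positive test experience", "presumed infection story", "question about the disease"]
result = classifier(post, candidate_labels=labels)
print(result["labels"][0])  # highest-scoring category for this post
```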
The COVID-19 pandemic has generated what public health officials have called an infodemic of misinformation. As social distancing and stay-at-home orders came into effect, many people turned to social media for socializing, and this increase in social media usage made it a prime vehicle for spreading misinformation. This paper presents a mechanism for detecting COVID-19 health-related misinformation on social media, following an interdisciplinary approach. Using social psychology as a foundation and building on existing misinformation frameworks, we defined misinformation themes and associated keywords, which were incorporated into the detection mechanism using applied machine-learning techniques. Next, using a Twitter dataset, we explored the performance of the proposed methodology with multiple state-of-the-art machine-learning classifiers. Our method shows promising results, achieving up to 78% accuracy in classifying health-related misinformation versus true information using uni-gram-based NLP features generated from tweets and a Decision Tree classifier. We also provide suggestions on alternatives for countering misinformation and discuss ethical considerations for the study.
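The 78% figure above comes from uni-gram features with a Decision Tree. A minimal sketch of that configuration in scikit-learn follows; the paper's preprocessing details and tree settings are not given in the abstract, so defaults are used and the function name is a placeholder.

```python
# Sketch of the uni-gram + Decision Tree baseline; exact preprocessing and
# hyperparameters are not stated in the abstract, so defaults are assumed.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

def train_misinfo_baseline(train_tweets, train_labels, test_tweets, test_labels):
    """Labels: 1 = health-related misinformation, 0 = true information."""
    vec = CountVectorizer(ngram_range=(1, 1))  # uni-gram features only
    clf = DecisionTreeClassifier(random_state=0)
    clf.fit(vec.fit_transform(train_tweets), train_labels)
    preds = clf.predict(vec.transform(test_tweets))
    return accuracy_score(test_labels, preds)
```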
Time-critical analysis of social media streams is important for humanitarian organizations planning rapid responses during disasters. The crisis informatics research community has developed several techniques and systems for processing and classifying big crisis-related data posted on social media. However, due to the dispersed nature of the datasets used in the literature (e.g., for training models), it is not possible to compare results and measure progress towards building better models for crisis informatics tasks. In this work, we attempt to bridge this gap by combining various existing crisis-related datasets. We consolidate eight human-annotated datasets and provide 166.1k and 141.5k tweets for the informativeness and humanitarian classification tasks, respectively. We believe the consolidated dataset will help train more sophisticated models. Moreover, we provide benchmarks for both binary and multiclass classification tasks using several deep-learning architectures, including CNN, fastText, and transformers. We make the dataset and scripts available at: https://crisisnlp.qcri.org/crisis_datasets_benchmarks.html
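As one of the baselines named above, fastText can be trained directly on the consolidated tweets. The sketch below assumes the data has been exported to fastText's "__label__&lt;class&gt; &lt;text&gt;" supervised format; the file names and hyperparameters are placeholders, not the benchmark's settings.

```python
# Assumes `pip install fasttext` and train/test files in fastText's
# "__label__<class> <tweet text>" format; file names are placeholders.
import fasttext

model = fasttext.train_supervised(
    input="informativeness_train.txt",
    epoch=25,
    lr=0.5,
    wordNgrams=2,
)
n, precision_at_1, recall_at_1 = model.test("informativeness_test.txt")
print(f"samples={n}  P@1={precision_at_1:.3f}  R@1={recall_at_1:.3f}")
```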
The popularity of social media platforms such as Twitter has led to the proliferation of automated bots, creating both opportunities and challenges in information dissemination, user engagement, and quality of service. Past work on profiling bots has focused largely on malicious bots, under the assumption that these bots should be removed. In this work, however, we find that many bots are benign, and we propose a new, broader categorization of bots based on their behaviors, which includes broadcast, consumption, and spam bots. To facilitate comprehensive analyses of bots and how they compare to human accounts, we develop a systematic profiling framework that includes a rich set of features and a classifier bank. We conduct extensive experiments to evaluate the performance of different classifiers under varying time windows, identify the key features of bots, and draw inferences about bots in the larger Twitter population. Our analysis encompasses more than 159K bot and human (non-bot) accounts on Twitter. The results provide interesting insights into the behavioral traits of both benign and malicious bots.
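To make the "classifier bank" idea concrete, here is a hedged sketch: several standard classifiers evaluated over one account-level feature matrix. The example features named in the docstring are illustrative stand-ins, not the paper's actual feature set.

```python
# Hedged sketch of a classifier bank over account-level features; the feature
# names below are illustrative assumptions, not the paper's feature set.
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def evaluate_classifier_bank(X, y):
    """X: rows of account features (e.g., tweets per day, follower ratio,
    fraction of tweets containing URLs); y: 1 = bot, 0 = human."""
    bank = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "svm": SVC(),
        "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    }
    # Mean 5-fold accuracy for each member of the bank.
    return {name: cross_val_score(clf, X, y, cv=5).mean() for name, clf in bank.items()}
```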
