
An Empirical Assessment of the Qualitative Aspects of Misinformation in Health News


Publication date: 2021
Language: English
Created by Shamra Editor





The explosion of online health news articles runs the risk of the proliferation of low-quality information. Within the existing work on fact-checking, however, relatively little attention has been paid to medical news. We present a health news classification task to determine whether medical news articles satisfy a set of review criteria deemed important by medical experts and health care journalists. We present a dataset of 1,119 health news articles paired with systematic reviews. The review criteria consist of six elements that are essential to the accuracy of medical news. We then present experiments comparing the classical token-based approach with the more recent transformer-based models. Our results show that detecting qualitative lapses is a challenging task with direct ramifications for misinformation, but is an important direction to pursue beyond assigning True or False labels to short claims.
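
To make the comparison in the abstract concrete, the sketch below (an illustration under assumptions, not the authors' released code) pairs a classical token-based baseline, here TF-IDF features with logistic regression, against a fine-tuned transformer encoder for a single review criterion treated as binary classification. The model name bert-base-uncased, the example headline, and the train_texts/train_labels variables are placeholders.

    # Token-based baseline: sparse n-gram features and a linear classifier.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    token_baseline = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        LogisticRegression(max_iter=1000),
    )
    # token_baseline.fit(train_texts, train_labels)  # label 1 = criterion satisfied

    # Transformer-based baseline: fine-tune a pre-trained encoder per criterion.
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2
    )
    inputs = tokenizer(
        ["New drug slashes heart-attack risk, study finds"],
        truncation=True, padding=True, return_tensors="pt",
    )
    logits = model(**inputs).logits  # fine-tune with a standard training loop

In this setup, one such classifier would be trained per review criterion, so an article receives six separate satisfied/not-satisfied judgments rather than a single True or False label.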


Related research

In this paper, we introduce UnifiedM2, a general-purpose misinformation model that jointly models multiple domains of misinformation with a single, unified setup. The model is trained to handle four tasks: detecting news bias, clickbait, fake news, and verifying rumors. By grouping these tasks together, UnifiedM2 learns a richer representation of misinformation, which leads to state-of-the-art or comparable performance across all tasks. Furthermore, we demonstrate that UnifiedM2's learned representation is helpful for few-shot learning of unseen misinformation tasks/datasets and improves the model's generalizability to unseen events.
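
As a rough illustration of the joint setup described above (an assumption-laden sketch, not the UnifiedM2 release), a single shared transformer encoder can feed one classification head per misinformation task. The roberta-base encoder, the task names, and the label counts are placeholders.

    import torch.nn as nn
    from transformers import AutoModel

    class MultiTaskMisinfoModel(nn.Module):
        def __init__(self, encoder_name, task_num_labels):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(encoder_name)
            hidden = self.encoder.config.hidden_size
            # One linear head per task on top of the shared representation.
            self.heads = nn.ModuleDict(
                {task: nn.Linear(hidden, n) for task, n in task_num_labels.items()}
            )

        def forward(self, task, input_ids, attention_mask):
            out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
            cls = out.last_hidden_state[:, 0]  # first-token ("[CLS]"-style) vector
            return self.heads[task](cls)       # logits for the requested task

    # Placeholder task inventory; label counts are illustrative assumptions.
    model = MultiTaskMisinfoModel(
        "roberta-base",
        {"news_bias": 2, "clickbait": 2, "fake_news": 2, "rumor": 3},
    )
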
Irrespective of the success of the deep learning-based mixed-domain transfer learning approach for solving various Natural Language Processing tasks, it does not lend a generalizable solution for detecting misinformation from COVID-19 social media data. Due to the inherent complexity of this type of data, caused by its dynamic (context evolves rapidly), nuanced (misinformation types are often ambiguous), and diverse (skewed, fine-grained, and overlapping categories) nature, it is imperative for an effective model to capture both the local and global context of the target domain. By conducting a systematic investigation, we show that: (i) deep Transformer-based pre-trained models, utilized via mixed-domain transfer learning, are only good at capturing the local context and thus exhibit poor generalization, and (ii) a combination of shallow network-based domain-specific models and convolutional neural networks can efficiently extract local as well as global context directly from the target data in a hierarchical fashion, enabling it to offer a more generalizable solution.
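
A minimal sketch of the second ingredient described above, under assumed sizes and filter widths (this is not the paper's exact architecture): domain-specific word embeddings feed a shallow convolutional layer whose n-gram filters capture local context, while max-pooling over the whole sequence aggregates a global representation for classification.

    import torch
    import torch.nn as nn

    class ShallowCNNClassifier(nn.Module):
        def __init__(self, vocab_size=30000, embed_dim=200,
                     num_filters=100, kernel_sizes=(2, 3, 4), num_classes=2):
            super().__init__()
            # Embeddings would be initialized from vectors trained on target-domain
            # (e.g., COVID-19 tweet) text; sizes here are placeholders.
            self.embedding = nn.Embedding(vocab_size, embed_dim)
            self.convs = nn.ModuleList(
                [nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes]
            )
            self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_classes)

        def forward(self, token_ids):                      # (batch, seq_len)
            x = self.embedding(token_ids).transpose(1, 2)  # (batch, embed_dim, seq_len)
            # Each filter width captures local n-gram context; max-pooling over the
            # sequence aggregates it into a global representation.
            pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
            return self.classifier(torch.cat(pooled, dim=1))
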
This study was conducted in all health centers that provide vaccination services in Latakia city, on a sample of all the nurses who administer vaccines to children in these centers (27 nurses) and all the parents of children attending the centers to receive vaccines (270 families). Data were collected from October 2013 to March 2014 using an observation form to evaluate the nurses' performance in administering vaccines, a questionnaire to assess the nurses' knowledge, and a questionnaire to assess parents' satisfaction with the nurses' performance. The study aimed to assess the quality of nurses' performance in conducting vaccination at health centers in Latakia city. The most important results were that the quality of performance of most nurses was average and that the majority of nurses had a weak level of knowledge about vaccines, yet the parents were deeply dissatisfied with the nurses' performance in conducting vaccination.
The spread of COVID-19 has been accompanied by widespread misinformation on social media. In particular, the Twitterverse has seen a huge increase in the dissemination of distorted facts and figures. The present work aims at identifying tweets regarding COVID-19 which contain harmful and false information. We have experimented with a number of deep learning-based models and different word embeddings, such as GloVe and ELMo, among others. The BERTweet model achieved the best overall F1-score of 0.881 and secured the third rank on the above task.
In this paper we introduce ArCOV19-Rumors, an Arabic COVID-19 Twitter dataset for misinformation detection composed of tweets containing claims from 27th January till the end of April 2020. We collected 138 verified claims, mostly from popular fact-checking websites, and identified 9.4K tweets relevant to those claims. Tweets were manually annotated by veracity to support research on misinformation detection, which is one of the major problems faced during a pandemic. ArCOV19-Rumors supports two levels of misinformation detection over Twitter: verifying free-text claims (called claim-level verification) and verifying claims expressed in tweets (called tweet-level verification). Our dataset covers, in addition to health, claims related to other topical categories that were influenced by COVID-19, namely social, political, sports, entertainment, and religious topics. Moreover, we present benchmarking results for tweet-level verification on the dataset. We experimented with SOTA models of versatile approaches that exploit content, user profile features, temporal features, and the propagation structure of the conversational threads for tweet verification.
