
Supervised Contrastive Learning for Multimodal Unreliable News Detection in COVID-19 Pandemic

Posted by: Wenjia Zhang
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





As the digital news industry becomes the main channel of information dissemination, the adverse impact of fake news is explosively magnified. The credibility of a news report should not be assessed in isolation; rather, previously published news articles on similar events can be used to assess it. Inspired by this, we propose a BERT-based multimodal unreliable news detection framework, which captures both textual and visual information from unreliable articles utilising a contrastive learning strategy. The contrastive learner interacts with the unreliable news classifier to push similar credible news (or similar unreliable news) closer together, while moving news articles with similar content but opposite credibility labels away from each other in the multimodal embedding space. Experimental results on a COVID-19 related dataset, ReCOVery, show that our model outperforms a number of competitive baselines in unreliable news detection.
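To make the contrastive strategy concrete, below is a minimal sketch of a supervised contrastive loss over fused multimodal embeddings; the fusion scheme, dimensions, and function names are illustrative assumptions, not the authors' exact formulation.

import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.07):
    # embeddings: (N, D) fused text+image vectors, one per news article.
    # labels: (N,) credibility labels (0 = unreliable, 1 = credible).
    z = F.normalize(embeddings, dim=1)                 # unit-norm embeddings
    sim = z @ z.T / temperature                        # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))    # drop self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)    # avoid -inf * 0 below
    # Positives: other articles that share the same credibility label.
    pos_mask = ((labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask).float()
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    # Negative mean log-likelihood of positives per anchor article.
    return (-(log_prob * pos_mask).sum(dim=1) / pos_counts).mean()

In training, a term like this would be combined with the classifier's cross-entropy loss, so that the contrastive learner and the unreliable news classifier interact as described above.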




Read also

Amid the COVID-19 pandemic, the world is facing an unprecedented infodemic with the proliferation of both fake and real information. Considering the problematic consequences that COVID-19 fake news has brought, the scientific community has put effort into tackling it. To contribute to this fight against the infodemic, we aim to achieve a robust model for the COVID-19 fake-news detection task proposed at CONSTRAINT 2021 (FakeNews-19) by taking two separate approaches: 1) fine-tuning transformer-based language models with robust loss functions, and 2) removing harmful training instances through influence calculation. We further evaluate the robustness of our models on a different COVID-19 misinformation test set (Tweets-19) to understand model generalization ability. With the first approach, we achieve a 98.13% weighted F1 score (W-F1) on the shared task, but at most 38.18% W-F1 on Tweets-19. In contrast, by performing influence-based data cleansing, our model with a 99% cleansing percentage achieves a 54.33% W-F1 score on Tweets-19, with a trade-off. By evaluating our models on two COVID-19 fake-news test sets, we highlight the importance of model generalization ability in this task as a step towards tackling the COVID-19 fake-news problem on online social media platforms.
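A hedged sketch of the second approach's core idea follows: score each training example by how its loss gradient aligns with the validation-set gradient (a TracIn-style approximation assumed here for brevity, not necessarily the paper's exact influence estimator), then drop the lowest-scoring fraction before retraining.

import torch

def influence_scores(model, loss_fn, train_examples, val_grad):
    # val_grad: flattened gradient of the validation loss w.r.t. parameters.
    scores = []
    for x, y in train_examples:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        g = torch.cat([p.grad.flatten() for p in model.parameters()
                       if p.grad is not None])
        # A negative dot product means training on this example pushes
        # against reducing validation loss, i.e. it is likely harmful.
        scores.append(torch.dot(g, val_grad).item())
    return scores

Examples would then be ranked by score and the most harmful fraction removed before fine-tuning again; the abstract reports results at a 99% cleansing percentage.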
First identified in Wuhan, China, in December 2019, the outbreak of COVID-19 was declared a global emergency in January and a pandemic in March 2020 by the World Health Organization (WHO). Along with this pandemic, we are also experiencing an infodemic of information with low credibility, such as fake news and conspiracies. In this work, we present ReCOVery, a repository designed and constructed to facilitate research on combating such information regarding COVID-19. We first broadly search and investigate ~2,000 news publishers, from which 60 are identified with extreme [high or low] levels of credibility. By inheriting the credibility of the media on which they were published, a total of 2,029 news articles on coronavirus, published from January to May 2020, are collected in the repository, along with 140,820 tweets that reveal how these news articles have spread on the Twitter social network. The repository provides multimodal information on news articles about coronavirus, including textual, visual, temporal, and network information. The way news credibility is obtained allows a trade-off between dataset scalability and label accuracy. Extensive experiments are conducted to present data statistics and distributions, as well as to provide baseline performances for predicting news credibility so that future methods can be compared. Our repository is available at http://coronavirus-fakenews.com.
The rapid advancement of technology in online communication via social media platforms has led to a prolific rise in the spread of misinformation and fake news. Fake news is especially rampant in the current COVID-19 pandemic, leading people to believe false and potentially harmful claims and stories. Detecting fake news quickly can alleviate the spread of panic, chaos, and potential health hazards. We developed a two-stage automated pipeline for COVID-19 fake news detection using state-of-the-art machine learning models for natural language processing. The first model leverages a novel fact-checking algorithm that retrieves the facts most relevant to a user's claim about COVID-19. The second model verifies the level of truth in the claim by computing the textual entailment between the claim and the true facts retrieved from a manually curated COVID-19 dataset. The dataset is based on a publicly available knowledge source consisting of more than 5,000 COVID-19 false claims and verified explanations, a subset of which was internally annotated and cross-validated to train and evaluate our models. We evaluate a series of models, from those based on classical text features to more contextual Transformer-based models, and observe that a pipeline based on BERT and ALBERT for the two stages respectively yields the best results.
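As a hedged sketch of the two-stage idea (not the paper's tuned BERT/ALBERT models), TF-IDF retrieval and an off-the-shelf NLI checkpoint can stand in for the retrieval and entailment stages; the facts, claim, and model name below are illustrative assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel
from transformers import AutoTokenizer, AutoModelForSequenceClassification

facts = ["COVID-19 vaccines do not alter human DNA.",
         "Masks reduce droplet transmission of respiratory viruses."]
claim = "The COVID vaccine changes your DNA."

# Stage 1: retrieve the fact most relevant to the user's claim.
vec = TfidfVectorizer().fit(facts + [claim])
sims = linear_kernel(vec.transform([claim]), vec.transform(facts))[0]
best_fact = facts[sims.argmax()]

# Stage 2: textual entailment between the retrieved fact (premise)
# and the claim (hypothesis); high contradiction suggests a false claim.
tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
nli = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
logits = nli(**tok(best_fact, claim, return_tensors="pt", truncation=True)).logits
print(best_fact, logits.softmax(-1)[0].tolist())  # [contradiction, neutral, entailment]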
Ben Chen, Bin Chen, Dehong Gao (2021)
With the COVID-19 pandemic, relevant fake news has been spreading widely across social media, and believing it without discrimination can cause great trouble to people's lives. However, universal language models may perform weakly in detecting such fake news due to a lack of large-scale annotated data and insufficient semantic understanding of domain-specific knowledge, while models trained only on the corresponding corpora are also mediocre due to insufficient learning. In this paper, we propose a novel transformer-based language model fine-tuning approach for this fake news detection task. First, the token vocabulary of each model is expanded to cover the actual semantics of professional phrases. Second, we adapt the heated-up softmax loss to distinguish hard-mining samples, which are common in fake news because of the ambiguity of short texts. Then, we employ adversarial training to improve the model's robustness. Last, the features extracted by the universal language model RoBERTa and the domain-specific model CT-BERT are fused by a multilayer perceptron to integrate fine-grained and high-level specific representations. Quantitative experimental results on an existing COVID-19 fake news dataset show superior performance compared to state-of-the-art methods across various evaluation metrics. Furthermore, the best weighted average F1 score achieves 99.02%.
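A minimal sketch of the fusion step and a temperature-scaled ("heated-up") cross-entropy follows; the hidden sizes, scale value, and class count are illustrative assumptions rather than the paper's exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionClassifier(nn.Module):
    # Fuses pooled features from a general-domain encoder (e.g. RoBERTa)
    # and a domain-specific encoder (e.g. CT-BERT) with a small MLP.
    def __init__(self, dim_general=1024, dim_domain=1024, num_classes=2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim_general + dim_domain, 512),
            nn.ReLU(),
            nn.Linear(512, num_classes),
        )

    def forward(self, feat_general, feat_domain):
        return self.mlp(torch.cat([feat_general, feat_domain], dim=-1))

def heated_softmax_loss(logits, labels, alpha=4.0):
    # Scaling logits by alpha sharpens the softmax, so easy examples
    # contribute little gradient and hard (ambiguous) samples dominate.
    return F.cross_entropy(alpha * logits, labels)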
Identifying superspreaders of disease is a pressing concern for society during pandemics such as COVID-19. Superspreaders are people who have many more social contacts than others. The widespread deployment of WLAN infrastructure enables non-invasive contact tracing via people's ubiquitous mobile devices, a technology that offers promise for detecting superspreaders. In this paper, we propose a general framework for WLAN-log-based superspreader detection. In our framework, we first use WLAN logs to construct contact graphs by jointly considering symmetric and asymmetric human interactions. Next, we adopt three vertex centrality measures over the contact graphs to generate three groups of superspreader candidates. Finally, we leverage SEIR simulation to determine, among these candidates, the groups of superspreaders who are the most critical individuals for the spread of disease according to the simulation results. We have implemented our framework and evaluated it on a WLAN dataset with 41 million log entries from a large-scale university. Our evaluation shows that superspreaders exist on university campuses. They change over the first few weeks of a semester but stabilize throughout the rest of the term. The data also demonstrate that both symmetric and asymmetric contact tracing can discover superspreaders, but the latter performs better with daily contact graphs. Further, the evaluation shows no consistent differences among the three vertex centrality measures for long-term (i.e., weekly) contact graphs, which necessitates the inclusion of SEIR simulation in our framework. We believe our proposed framework and these results may provide timely guidance for public health administrators regarding effective testing, intervention, and vaccination policies.
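A hedged sketch of the framework's final step: a discrete-time SEIR simulation seeded at one candidate on a contact graph, so candidates can be ranked by average outbreak size; the rates, step count, and graph representation are illustrative assumptions.

import random

def seir_outbreak(neighbors, seed, beta=0.3, sigma=0.2, gamma=0.1, steps=100):
    # neighbors: dict mapping each node to a list of contacts.
    # Returns the number of nodes ever infected when seeding at `seed`.
    state = {v: 'S' for v in neighbors}
    state[seed] = 'E'
    for _ in range(steps):
        updates = {}
        for v, s in state.items():
            if s == 'I':
                for u in neighbors[v]:                 # infectious contacts
                    if state[u] == 'S' and random.random() < beta:
                        updates[u] = 'E'               # S -> E (exposure)
                if random.random() < gamma:
                    updates[v] = 'R'                   # I -> R (recovery)
            elif s == 'E' and random.random() < sigma:
                updates[v] = 'I'                       # E -> I (incubation ends)
        state.update(updates)
    return sum(s != 'S' for s in state.values())

Candidates from the centrality step would then be ranked by the mean outbreak size over many simulation runs.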