
A Web Infrastructure for Certifying Multimedia News Content for Fake News Defense

Published by Changchun Zou
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





In dealing with altered visual multimedia content, also referred to as fake news, we present a ready-to-deploy extension of the current public key infrastructure (PKI) that provides an endorsement and integrity-check platform for newsworthy visual multimedia content. PKI, which is primarily used for Web domain authentication, can be used directly with any visual multimedia file. Unlike much other fake news research, which focuses on technical multimedia data processing and verification, we enable news organizations to use our program to certify/endorse a piece of multimedia news content when they believe it is truthful and newsworthy. Our program digitally signs the multimedia news content with the news organization's private key, and the endorsed content can be posted not only by the endorser but also by any other website. By installing a web browser extension we developed, an end user can easily verify whether a piece of multimedia news content has been endorsed and by which organization. During verification, our browser extension presents the end user with a floating logo next to the image or video. This logo, in the shape of a shield, shows whether the image has been endorsed, by which news organization, and a few additional pieces of essential text information about the multimedia news content. The proposed system can easily be integrated into closed-web systems such as social media networks and applied to non-visual multimedia files as well.
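The endorsement and verification workflow described above boils down to a standard digital-signature operation over the bytes of the multimedia file, with the public key tied to the news organization through an ordinary PKI certificate. The following is a minimal sketch of that operation, assuming an RSA key pair; the file names, key parameters, and helper functions are illustrative and not the authors' actual implementation.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The news organization's key pair; in the proposed system the public key would be
# published via a PKI certificate identifying the endorsing organization.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

def endorse(media_bytes: bytes) -> bytes:
    """Sign the raw bytes of a multimedia file with the endorser's private key."""
    return private_key.sign(
        media_bytes,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

def verify(media_bytes: bytes, signature: bytes) -> bool:
    """Integrity check a browser extension could run before showing the shield logo."""
    try:
        public_key.verify(
            signature,
            media_bytes,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        return False

with open("news_photo.jpg", "rb") as f:   # hypothetical endorsed image
    data = f.read()
sig = endorse(data)
print(verify(data, sig))         # True: content matches the endorsement
print(verify(data + b"x", sig))  # False: content was altered after endorsement
```

Any later modification of the file, however small, invalidates the signature, which is what allows the extension to distinguish an endorsed original from an altered copy.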




Read also

With the rise of social media, it has become easier, faster, and cheaper to disseminate fake news compared to traditional news media such as television and newspapers. Recently this phenomenon has attracted a lot of public attention because it is causing significant social and financial impacts on people's lives and businesses. Fake news is responsible for creating false, deceptive, misleading, and suspicious information that can greatly affect the outcome of an event. This paper presents a synopsis that explains what fake news is, with examples, and also discusses some of the current machine learning techniques, specifically natural language processing (NLP) and deep learning, for automatically predicting and detecting fake news. Based on this synopsis, we conclude that NLP and deep learning have the potential to improve automatic detection of fake news, given the right set of data and features.
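As a minimal illustration of the NLP route this synopsis points to, the sketch below trains a bag-of-words classifier on a few toy headlines; the data, feature choice, and model are placeholders rather than anything evaluated in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled headlines (1 = fake, 0 = real); a usable system needs a large corpus.
texts = [
    "Scientists confirm miracle cure hidden by the government",
    "Celebrity secretly replaced by a clone, insiders say",
    "City council approves new budget for public transit",
    "Central bank holds interest rates steady this quarter",
]
labels = [1, 1, 0, 0]

# TF-IDF word/bigram features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Anonymous insiders reveal miracle cure for all diseases"]))
```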
The topic of fake news has drawn attention from both the public and the academic communities. Such misinformation has the potential to affect public opinion, providing an opportunity for malicious parties to manipulate the outcomes of public events such as elections. Because such high stakes are at play, automatically detecting fake news is an important yet challenging problem that is not yet well understood. Nevertheless, there are three generally agreed upon characteristics of fake news: the text of an article, the user response it receives, and the source users promoting it. Existing work has largely focused on tailoring solutions to one particular characteristic, which has limited its success and generality. In this work, we propose a model that combines all three characteristics for more accurate and automated prediction. Specifically, we incorporate the behavior of both parties, users and articles, and the group behavior of users who propagate fake news. Motivated by the three characteristics, we propose a model called CSI, which is composed of three modules: Capture, Score, and Integrate. The first module is based on the response and text; it uses a Recurrent Neural Network to capture the temporal pattern of user activity on a given article. The second module learns the source characteristic based on the behavior of users, and the two are integrated with the third module to classify an article as fake or not. Experimental analysis on real-world data demonstrates that CSI achieves higher accuracy than existing models and extracts meaningful latent representations of both users and articles.
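As a rough illustration of the three-module structure described here, the sketch below wires a recurrent network over per-article engagement sequences (Capture) together with a learned per-user score (Score) and a final classifier (Integrate). The dimensions, layers, and feature shapes are placeholders and do not reproduce the published CSI architecture.

```python
import torch
import torch.nn as nn

class CSISketch(nn.Module):
    """Toy approximation of the Capture-Score-Integrate idea (not the original CSI)."""
    def __init__(self, engagement_dim=32, user_dim=16, hidden=64):
        super().__init__()
        # Capture: RNN over the temporal sequence of user responses to an article.
        self.capture = nn.LSTM(engagement_dim, hidden, batch_first=True)
        # Score: a per-user suspiciousness score learned from user behaviour features.
        self.score = nn.Sequential(nn.Linear(user_dim, 1), nn.Sigmoid())
        # Integrate: combine the article-level and source-level signals into one label.
        self.integrate = nn.Linear(hidden + 1, 1)

    def forward(self, engagements, user_features):
        # engagements: (batch, time, engagement_dim) response/text features over time
        # user_features: (batch, n_users, user_dim) behaviour of users sharing the article
        _, (h, _) = self.capture(engagements)
        article_repr = h[-1]                              # (batch, hidden)
        source_score = self.score(user_features).mean(1)  # (batch, 1)
        logits = self.integrate(torch.cat([article_repr, source_score], dim=1))
        return torch.sigmoid(logits)                      # probability the article is fake

model = CSISketch()
fake_prob = model(torch.randn(4, 10, 32), torch.randn(4, 50, 16))
```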
In early January 2020, after China reported the first cases of the new coronavirus (SARS-CoV-2) in the city of Wuhan, unreliable and not fully accurate information started spreading faster than the virus itself. Alongside this pandemic, people have experienced a parallel infodemic, i.e., an overabundance of information, some of it misleading or even harmful, that has spread widely around the globe. Although social media are increasingly being used as an information source, Web search engines, like Google or Yahoo!, still represent a powerful and trustworthy resource for finding information on the Web. This is due to their capability to capture the largest amount of information, helping users quickly identify the most relevant and useful, although not always the most reliable, results for their search queries. This study aims to detect potentially misleading and fake content by capturing and analysing the textual information that flows through search engines. Using a real-world dataset associated with the recent CoViD-19 pandemic, we first apply re-sampling techniques for class imbalance and then use existing machine learning algorithms for classification of unreliable news. By extracting lexical and host-based features of the Uniform Resource Locators (URLs) associated with news articles, we show that the proposed methods, common in phishing and malicious URL detection, can improve the efficiency and performance of classifiers. Based on these findings, we suggest that the use of both textual and URL features can improve the effectiveness of fake news detection methods.
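A minimal sketch of this pipeline is given below, assuming a handful of lexical and host-based URL features, SMOTE over-sampling for the class imbalance, and an off-the-shelf classifier; the feature set and the toy URLs are illustrative only and not the study's dataset.

```python
from urllib.parse import urlparse

import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def url_features(url: str) -> list:
    """A few lexical and host-based features of an article URL."""
    parsed = urlparse(url)
    host = parsed.netloc
    return [
        len(url),                              # overall URL length
        len(host),                             # host-name length
        host.count("."),                       # number of subdomain separators
        url.count("-") + url.count("_"),       # punctuation common in spammy URLs
        int(any(c.isdigit() for c in host)),   # digits in the host name
        int(parsed.scheme == "https"),         # served over HTTPS or not
    ]

# Toy, imbalanced dataset: mostly reliable URLs, a few unreliable ones.
urls = ["https://www.who.int/news/item/covid-19-update"] * 40 + \
       ["http://covid19-miracle-cure.example-news24.xyz/article?id=1"] * 8
labels = np.array([0] * 40 + [1] * 8)
X = np.array([url_features(u) for u in urls])

# Re-balance the training split, then fit a standard classifier.
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25,
                                          stratify=labels, random_state=0)
X_res, y_res = SMOTE(k_neighbors=3, random_state=0).fit_resample(X_tr, y_tr)
clf = RandomForestClassifier(random_state=0).fit(X_res, y_res)
print(clf.score(X_te, y_te))
```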
Fake news can significantly misinform people, who often rely on online sources and social media for their information. Current research on fake news detection has mostly focused on analyzing fake news content and how it propagates on a network of users. In this paper, we emphasize the detection of fake news by assessing its credibility. By analyzing public fake news data, we show that information on news sources (and authors) can be a strong indicator of credibility. Our findings suggest that an author's history of association with fake news, and the number of authors of a news article, can play a significant role in detecting fake news. Our approach can help improve traditional fake news detection methods, wherein content features are often used to detect fake news.
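As a small illustration of the source-credibility signals highlighted above, the sketch below derives two features, an author's prior association with fake news and the number of authors on an article, from a toy labelled history; the record format and scoring rule are hypothetical, not the paper's dataset.

```python
from collections import defaultdict

# Toy labelled history: each record lists an article's authors and whether it was fake.
history = [
    {"authors": ["a.smith", "b.jones"], "fake": True},
    {"authors": ["a.smith"],            "fake": True},
    {"authors": ["c.lee", "d.kim"],     "fake": False},
    {"authors": ["b.jones", "c.lee"],   "fake": False},
]

# Fraction of each author's past articles that were labelled fake.
fake_counts, totals = defaultdict(int), defaultdict(int)
for rec in history:
    for author in rec["authors"]:
        totals[author] += 1
        fake_counts[author] += int(rec["fake"])
author_fake_rate = {a: fake_counts[a] / totals[a] for a in totals}

def credibility_features(article_authors):
    """Two of the features the abstract points to: author history and author count."""
    rates = [author_fake_rate.get(a, 0.0) for a in article_authors]
    return {
        "mean_author_fake_rate": sum(rates) / len(rates) if rates else 0.0,
        "num_authors": len(article_authors),
    }

print(credibility_features(["a.smith", "c.lee"]))
# {'mean_author_fake_rate': 0.5, 'num_authors': 2}
```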
Over the past three years it has become evident that fake news is a danger to democracy. However, until now there has been no clear understanding of how to define fake news, much less how to model it. This paper addresses both these issues. A definition of fake news is given, and two approaches for modelling fake news and its impact on elections and referendums are introduced. The first approach, based on the idea of a representative voter, is shown to be suitable for obtaining a qualitative understanding of phenomena associated with fake news at a macroscopic level. The second approach, based on the idea of an election microstructure, describes the collective behaviour of the electorate by modelling the preferences of individual voters. It is shown through a simulation study that the mere knowledge that pieces of fake news may be in circulation goes a long way towards mitigating the impact of fake news.