
Rebutting fake news on full spectral fitting

Posted by Roberto Cid Fernandes
Publication date: 2018
Research field: Physics
Paper language: English





A recent paper by Ge et al. performs a series of experiments with two full spectral fitting codes, pPXF and starlight, finding that the two yield consistent results when the input spectrum is not heavily reddened. For E(B-V) > 0.2, however, they claim starlight leads to severe biases in the derived properties. Counterintuitively, and at odds with previous simulations, they find that this behaviour worsens significantly as the signal-to-noise ratio of the input spectrum increases. This communication shows that this is entirely due to an A_V < 1 mag condition imposed while initializing the Markov chains in the code. This choice is normally irrelevant in real-life galaxy work but can become critical in artificial experiments. Alleviating this usually harmless initialization constraint changes the Ge et al. results completely, as was explained to the authors before their publication. We replicate their spectral fitting experiments and find much smaller biases. Furthermore, both the bias and the scatter in the derived properties converge as S/N increases, as one would expect. We also show how the code's own output provides ways of diagnosing anomalies in the fits. The behaviour of starlight has been documented in careful and extensive experiments in the literature; the biased analysis of Ge et al. is simply not representative of the code.
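To make the initialization effect concrete, here is a minimal toy sketch in Python. It is emphatically not the starlight code (which fits full spectra with many stellar populations through a Metropolis/annealing scheme); it only illustrates one plausible version of the mechanism: a random-walk chain whose proposal width tracks the posterior width, as adaptive samplers commonly arrange, started under an A_V < 1 mag cap with a fixed iteration budget. All numbers (true A_V = 1.8 mag, budget, chain counts) are invented for illustration.

import numpy as np

rng = np.random.default_rng(42)
TRUE_AV, BUDGET, NCHAINS = 1.8, 1000, 50   # invented truth, fixed chain length

def run_chain(av0, sn):
    # Random-walk Metropolis on a 1-D Gaussian posterior whose width
    # shrinks as 1/(S/N); the proposal step is matched to that width.
    sigma, av = 1.0 / sn, av0
    for _ in range(BUDGET):
        prop = av + rng.normal(0.0, sigma)
        dlogp = ((av - TRUE_AV) ** 2 - (prop - TRUE_AV) ** 2) / (2 * sigma ** 2)
        if np.log(rng.random()) < dlogp:
            av = prop
    return av

for sn in (10, 100, 1000):
    capped = np.mean([run_chain(rng.uniform(0, 1), sn) for _ in range(NCHAINS)])
    free = np.mean([run_chain(rng.uniform(0, 4), sn) for _ in range(NCHAINS)])
    print(f"S/N={sn:5d}: A_V<1 start -> {capped:.2f}, free start -> {free:.2f}"
          f" (true A_V = {TRUE_AV})")

At S/N = 10 the capped start is harmless, as in real-life work, because the chain easily crosses to the truth; at S/N = 1000 the steps are too small to cover the gap within the budget, so only the capped runs stay biased low, reproducing in miniature the counterintuitive worsening with S/N.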


Read also

MaNGA (Mapping Nearby Galaxies at Apache Point Observatory) is a 6-year SDSS-IV survey that will obtain resolved spectroscopy from 3600 Å to 10300 Å for a representative sample of over 10,000 nearby galaxies. In this paper, we derive spatially resolved stellar population properties and radial gradients by performing full spectral fitting of observed galaxy spectra from P-MaNGA, a prototype of the MaNGA instrument. These data include spectra for eighteen galaxies, covering a wide range of morphological types. We derive age, metallicity, dust and stellar mass maps, and their radial gradients, using high spectral-resolution stellar population models, and assess the impact of varying the stellar library input to the models. We introduce a method to determine dust extinction which is able to give smooth stellar mass maps even in cases of high and spatially non-uniform dust attenuation. With the spectral fitting we produce detailed maps of stellar population properties which allow us to identify galactic features among this diverse sample, such as spiral structure, smooth radial profiles with little azimuthal structure in spheroidal galaxies, and spatially distinct galaxy sub-components. In agreement with the literature, we find the gradients for galaxies identified as early-type to be on average flat in age and negative (-0.15 dex/R_e) in metallicity, whereas the gradients for late-type galaxies are on average negative in age (-0.39 dex/R_e) and flat in metallicity. We demonstrate how different levels of data quality change the precision with which radial gradients can be measured. We show how this analysis, extended to the large numbers of MaNGA galaxies, will have the potential to shed light on galaxy structure and evolution.
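As a pointer to how gradients like these are typically quantified, the sketch below fits a straight line to a stellar-population property against radius in units of the effective radius R_e, so the slope comes out in dex/R_e. The mock data are invented; the actual P-MaNGA measurement uses the spaxel-by-spaxel spectral-fitting results.

import numpy as np

rng = np.random.default_rng(0)
r_over_re = rng.uniform(0.1, 1.5, 200)                      # spaxel radii in units of R_e
log_age = 0.9 - 0.39 * r_over_re + rng.normal(0, 0.1, 200)  # mock late-type age profile

slope, intercept = np.polyfit(r_over_re, log_age, 1)        # linear fit: slope is the gradient
print(f"age gradient: {slope:+.2f} dex/R_e")                # ~ -0.39 for this mock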
Disinformation through fake news is an ongoing problem in our society and has become easily spread through social media. The most cost- and time-effective way to filter these large amounts of data is to use a combination of human and technical interventions to identify it. From a technical perspective, Natural Language Processing (NLP) is widely used in detecting fake news. Social media companies use NLP techniques to identify fake news and warn their users, but fake news may still slip through undetected. It is especially a problem in more localised contexts (outside the United States of America). How do we adjust fake news detection systems to work better for local contexts such as South Africa? In this work we investigate fake news detection on South African websites. We curate a dataset of South African fake news and then train detection models. We contrast this with using widely available fake news datasets (mostly from USA websites). We also explore making the datasets more diverse by combining them, and we observe the differences in writing between nations' fake news using interpretable machine learning.
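The abstract does not name the detection models; a common baseline for experiments like these is TF-IDF features with a linear classifier, which is also easy to interpret through its feature weights (one route to the cross-nation writing comparison mentioned above). A minimal scikit-learn sketch with invented toy texts:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the curated corpora; real work would load the
# South African and USA fake/real news datasets here.
texts = ["shock cure doctors hate", "parliament passes budget bill",
         "celebrity secretly a lizard", "rand weakens against the dollar"]
labels = [1, 0, 1, 0]                    # 1 = fake, 0 = real

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["miracle pill melts fat overnight"]))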
Amid the COVID-19 pandemic, the world is facing an unprecedented infodemic with the proliferation of both fake and real information. Considering the problematic consequences that COVID-19 fake news has brought, the scientific community has put effort into tackling it. To contribute to this fight against the infodemic, we aim to achieve a robust model for the COVID-19 fake-news detection task proposed at CONSTRAINT 2021 (FakeNews-19) by taking two separate approaches: 1) fine-tuning transformer-based language models with robust loss functions and 2) removing harmful training instances through influence calculation. We further evaluate the robustness of our models on a different COVID-19 misinformation test set (Tweets-19) to understand their generalization ability. With the first approach, we achieve 98.13% weighted F1 score (W-F1) on the shared task, but at most 38.18% W-F1 on Tweets-19. By contrast, with influence-based data cleansing, our model at a 99% cleansing percentage achieves a 54.33% W-F1 score on Tweets-19, with a trade-off. By evaluating our models on two COVID-19 fake-news test sets, we highlight the importance of model generalization ability in this task as a step towards tackling the COVID-19 fake-news problem on online social media platforms.
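The abstract does not say which robust loss functions were used; the generalized cross-entropy (GCE) of Zhang & Sabuncu (2018) is one standard choice for noisy labels, interpolating between cross-entropy (q -> 0) and the noise-robust MAE limit (q = 1). A minimal PyTorch sketch, assuming it would simply replace the usual CrossEntropyLoss during fine-tuning:

import torch

def gce_loss(logits, targets, q=0.7):
    # L_q = (1 - p_y^q) / q, averaged over the batch
    probs = torch.softmax(logits, dim=-1)
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # prob of the true class
    return ((1.0 - p_y.clamp_min(1e-7) ** q) / q).mean()

logits = torch.randn(8, 2, requires_grad=True)   # fake/real logits for a toy batch
targets = torch.randint(0, 2, (8,))
loss = gce_loss(logits, targets)
loss.backward()                                  # drops in where CrossEntropyLoss would go
print(loss.item())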
Fake news can significantly misinform people, who often rely on online sources and social media for their information. Current research on fake news detection has mostly focused on analyzing fake news content and how it propagates on a network of users. In this paper, we emphasize the detection of fake news by assessing its credibility. By analyzing public fake news data, we show that information on news sources (and authors) can be a strong indicator of credibility. Our findings suggest that an author's history of association with fake news, and the number of authors of a news article, can play a significant role in detecting fake news. Our approach can help improve traditional fake news detection methods, wherein content features are often used to detect fake news.
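A small sketch of the kind of source/author features the abstract points to, computed from a toy table of articles; the records, field names, and helper are invented for illustration:

from collections import defaultdict

articles = [            # toy records: (authors, label) with 1 = fake
    (["smith"], 1), (["smith", "lee"], 0),
    (["lee"], 0), (["doe", "smith", "lee"], 1),
]

fake, total = defaultdict(int), defaultdict(int)
for authors, label in articles:
    for a in authors:
        fake[a] += label
        total[a] += 1

def credibility_features(authors):
    # mean fraction of each author's past articles flagged fake,
    # plus the author count the abstract highlights
    rate = sum(fake[a] / total[a] for a in authors) / len(authors)
    return {"mean_author_fake_rate": round(rate, 2), "n_authors": len(authors)}

print(credibility_features(["smith", "lee"]))   # {'mean_author_fake_rate': 0.5, 'n_authors': 2}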
Over the past three years it has become evident that fake news is a danger to democracy. However, until now there has been no clear understanding of how to define fake news, much less how to model it. This paper addresses both these issues. A definition of fake news is given, and two approaches for the modelling of fake news and its impact in elections and referendums are introduced. The first approach, based on the idea of a representative voter, is shown to be suitable for obtaining a qualitative understanding of phenomena associated with fake news at a macroscopic level. The second approach, based on the idea of an election microstructure, describes the collective behaviour of the electorate by modelling the preferences of individual voters. It is shown through a simulation study that the mere knowledge that pieces of fake news may be in circulation goes a long way towards mitigating the impact of fake news.
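The headline finding, that mere awareness of fake news mitigates its impact, can be illustrated with a toy Bayesian-voter simulation. The paper's representative-voter and election-microstructure models are of course more elaborate, and every parameter below is invented: naive voters treat every news item as genuine, while aware voters know only the rate at which fake items circulate and weight the evidence accordingly.

import numpy as np

rng = np.random.default_rng(7)
N_VOTERS, N_ITEMS, EPS, ACC = 50_000, 20, 0.4, 0.8
X = 1                                    # the "true" state of the referendum

# each news item is genuine with prob 1-EPS (and then reports X
# correctly with prob ACC) or fake with prob EPS (always pushing -1)
genuine = rng.random((N_VOTERS, N_ITEMS)) >= EPS
correct = rng.random((N_VOTERS, N_ITEMS)) < ACC
report = np.where(genuine, np.where(correct, X, -X), -1)

def share_voting_true(p_plus_if_x1, p_plus_if_xm1):
    # fraction of voters whose total log-likelihood ratio favours X = +1
    llr_plus = np.log(p_plus_if_x1 / p_plus_if_xm1)
    llr_minus = np.log((1 - p_plus_if_x1) / (1 - p_plus_if_xm1))
    total = np.where(report == 1, llr_plus, llr_minus).sum(axis=1)
    return (total > 0).mean()

naive = share_voting_true(ACC, 1 - ACC)                            # assumes all items genuine
aware = share_voting_true((1 - EPS) * ACC, (1 - EPS) * (1 - ACC))  # knows only the rate EPS
print(f"vote for the true option: naive {naive:.2f}, aware {aware:.2f}")

With these made-up parameters the naive electorate is swung below 50% support for the true option, while aware voters, despite seeing exactly the same items, mostly recover it.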