
Automated reliability assessment for spectroscopic redshift measurements

Published by: Sara Jamal
Publication date: 2017
Research field: Physics
Paper language: English





We present a new approach to automating spectroscopic redshift reliability assessment based on machine learning (ML) and characteristics of the redshift probability density function (PDF). We propose to rephrase spectroscopic redshift estimation in a Bayesian framework, in order to incorporate all sources of information and uncertainty related to the redshift estimation process and produce a redshift posterior PDF that serves as the starting point for ML algorithms to provide an automated assessment of redshift reliability. As a use case, public data from the VIMOS VLT Deep Survey are exploited to present and test this new methodology. We first tried to reproduce the existing reliability flags using supervised classification to describe different types of redshift PDFs but, owing to the subjective definition of these flags, soon opted for a new homogeneous partitioning of the data into distinct clusters via unsupervised classification. After assessing the accuracy of the new clusters via resubstitution and test predictions, unlabelled data from preliminary mock simulations for the Euclid space mission are projected into this mapping to predict their redshift reliability labels.
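The idea of clustering redshift posterior PDFs by their shape can be sketched with toy posteriors. The feature set below (entropy, number of modes, secondary-to-primary peak ratio) and all data are invented for illustration and are not the paper's exact descriptors or pipeline:

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.cluster import KMeans

def pdf_descriptors(z_grid, pdf):
    """Summarise a redshift posterior PDF with a few scalar shape features."""
    dz = z_grid[1] - z_grid[0]
    pdf = pdf / (pdf.sum() * dz)                     # normalise to unit area
    p = pdf * dz
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))   # dispersion of the posterior
    peaks, props = find_peaks(pdf, height=0.05 * pdf.max())
    heights = np.sort(props["peak_heights"])[::-1]
    ratio = heights[1] / heights[0] if len(heights) > 1 else 0.0
    return [entropy, float(len(peaks)), ratio]

z = np.linspace(0.0, 2.0, 500)
gauss = lambda mu, s: np.exp(-0.5 * ((z - mu) / s) ** 2)

# Toy posteriors: secure (one narrow mode) vs ambiguous (two competing modes)
pdfs = [gauss(0.8, 0.01),
        gauss(0.8, 0.01) + 0.9 * gauss(1.4, 0.01),
        gauss(1.1, 0.02),
        gauss(0.5, 0.015) + 0.8 * gauss(1.6, 0.015)]

X = np.array([pdf_descriptors(z, p) for p in pdfs])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
# Unimodal and bimodal posteriors fall into different clusters
```

In this sketch the unsupervised partition separates secure from ambiguous posteriors purely from PDF shape, which is the property the reliability labels are meant to capture.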




Read also

Determining the radial positions of galaxies to high accuracy depends on the correct identification of salient features in their spectra. Classical techniques for spectroscopic redshift estimation make use of template matching with cross-correlation. These templates are usually constructed from empirical spectra or from simulations based on the modeling of local galaxies. We propose two new spectroscopic redshift estimation schemes based on new learning techniques for galaxy spectra representation, using either a dictionary learning technique for sparse representation or denoising autoencoders. We investigate how these representations impact redshift estimation. These methods have been tested on realistic simulated galaxy spectra, with photometry modelled after the Large Synoptic Survey Telescope (LSST) and spectroscopy reproducing properties of the Sloan Digital Sky Survey (SDSS). They were compared to Darth Fader, a robust technique that extracts line features and estimates redshift through cross-correlation with eigentemplates. We show that both dictionary learning and denoising autoencoders provide improved accuracy and reliability across all signal-to-noise regimes and galaxy types. The representation learning framework for spectroscopic redshift analysis introduced in this work offers high performance in feature extraction and redshift estimation, improving on a classical eigentemplate approach. This is a necessity for next-generation galaxy surveys, and we demonstrate a successful application on realistic simulated survey data.
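As an illustration of the dictionary learning idea (not the authors' actual pipeline), one can learn a sparse dictionary over toy emission-line spectra with scikit-learn. The wavelength grid, spectra, and parameters below are invented for the sketch:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
wave = np.linspace(4000.0, 9000.0, 150)       # toy wavelength grid (Angstrom)

def toy_spectrum(z):
    """Flat continuum plus one broad emission feature shifted by (1 + z)."""
    centre = 5000.0 * (1.0 + z)               # illustrative rest-frame line
    return 1.0 + 4.0 * np.exp(-0.5 * ((wave - centre) / 100.0) ** 2)

redshifts = rng.uniform(0.0, 0.3, 150)
clean = np.array([toy_spectrum(z) for z in redshifts])
noisy = clean + rng.normal(0.0, 0.3, clean.shape)

# Learn a dictionary in which each spectrum is a combination of a few atoms
dico = DictionaryLearning(n_components=20, max_iter=20,
                          transform_algorithm="omp",
                          transform_n_nonzero_coefs=3, random_state=0)
codes = dico.fit_transform(noisy)              # sparse codes, shape (150, 20)
denoised = codes @ dico.components_            # reconstruction from few atoms
```

The sparse codes (at most three active atoms per spectrum here) are the learned representation; downstream redshift estimation would operate on these codes or on the reconstruction rather than on the raw noisy pixels.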
Oleksandra Razim, 2021
In order to answer the open questions of modern cosmology and galaxy evolution theory, robust algorithms for calculating photometric redshifts (photo-z) for very large samples of galaxies are needed. Correctly estimating the performance of the various photo-z algorithms requires attention to both the performance metrics and the data used for the estimation. In this work, we use the supervised machine learning algorithm MLPQNA to calculate photometric redshifts for the galaxies in the COSMOS2015 catalogue and unsupervised Self-Organizing Maps (SOM) to determine the reliability of the resulting estimates. We find that for spec-z < 1.2, photo-z predictions are of the same quality as SED-fitting photo-z. We show that the SOM successfully detects unreliable spec-z that bias the estimated performance of the photo-z algorithms. Additionally, we use the SOM to select objects with reliable photo-z predictions. Our cleaning procedures allow us to extract a subset of objects for which the quality of the final photo-z catalogues is improved by a factor of two compared to the overall statistics.
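A SOM-based reliability check of this kind can be sketched with a minimal NumPy self-organising map: objects that land far from every map unit (large quantization error) are flagged as unreliable. The feature vectors, grid size, and schedules below are toy assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(X, grid=(6, 6), n_iter=2000, lr0=0.5, sigma0=2.0):
    """Train a tiny self-organising map; returns unit weights (gy, gx, n_features)."""
    gy, gx = grid
    W = rng.normal(0.0, 0.1, (gy, gx, X.shape[1]))
    yy, xx = np.mgrid[0:gy, 0:gx]
    for t in range(n_iter):
        x = X[rng.integers(len(X))]
        frac = t / n_iter
        lr = lr0 * (1.0 - frac)                          # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5              # shrinking neighbourhood
        d = np.linalg.norm(W - x, axis=2)                # distance to every unit
        by, bx = np.unravel_index(np.argmin(d), d.shape) # best-matching unit
        h = np.exp(-((yy - by) ** 2 + (xx - bx) ** 2) / (2.0 * sigma ** 2))
        W += lr * h[..., None] * (x - W)                 # neighbourhood update
    return W

def quantization_error(W, X):
    """Distance of each object to its best-matching SOM unit."""
    flat = W.reshape(-1, W.shape[-1])
    return np.linalg.norm(X[:, None, :] - flat[None, :, :], axis=2).min(axis=1)

features = rng.normal(0.0, 0.3, (200, 4))     # toy "photometric" feature vectors
outliers = rng.normal(4.0, 0.3, (5, 4))       # objects far from the training set
W = train_som(features)
qe_in = quantization_error(W, features)
qe_out = quantization_error(W, outliers)
# Large quantization error flags objects whose predictions should not be trusted
```

Thresholding the quantization error then plays the role of the cleaning procedure: objects poorly represented by any map unit are excluded from the final catalogue.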
The advent of a new generation of Adaptive Optics systems called Wide Field AO (WFAO) marks the beginning of a new era. By using multiple Guide Stars (GSs), either Laser Guide Stars (LGSs) or Natural Guide Stars (NGSs), WFAO significantly increases the field of view of the AO-corrected images and the fraction of the sky that can benefit from such correction. Different typologies of WFAO have been studied over the past years. They all require multiple GSs to perform a tomographic analysis of the atmospheric turbulence. One of the fundamental aspects of the new WFAO systems is knowledge of the spatio-temporal distribution of the turbulence above the telescope. One way to obtain this information is to use the telemetry data provided by the WFAO system itself. Indeed, it has been demonstrated that WFAO systems allow one to derive the Cn2 and wind profiles in the main turbulence layers (see e.g. Cortes et al. 2012). This method has the evident advantage of providing information on the turbulence stratification that effectively affects the AO system, a property that independent vertical profilers match only with difficulty. In this paper, we compare the wind speed profiles of GeMS with those predicted by a non-hydrostatic mesoscale atmospheric model (Meso-NH). It has indeed been shown (Masciadri et al., 2013) that this model can provide reliable wind speed profiles over the whole troposphere and stratosphere (up to 20-25 km) above top-level astronomical sites. Correlation with measurements proved very satisfactory when the model performance is analyzed from a statistical point of view as well as on individual nights. Such a system therefore appears to be an interesting reference for quantifying the reliability of the GeMS wind speed profiles.
Logs are semi-structured text generated by logging statements in software source code. In recent decades, software logs have become imperative in the reliability assurance mechanisms of many software systems because they are often the only data available that record software runtime information. As modern software evolves to large scale, the volume of logs has increased rapidly. To enable effective and efficient usage of modern software logs in reliability engineering, a number of studies have been conducted on automated log analysis. This survey presents a detailed overview of automated log analysis research, including how to automate and assist the writing of logging statements, how to compress logs, how to parse logs into structured event templates, and how to employ logs to detect anomalies, predict failures, and facilitate diagnosis. Additionally, we survey work that releases open-source toolkits and datasets. Based on the discussion of the recent advances, we present several promising future directions toward real-world and next-generation automated log analysis.
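The log-parsing step mentioned above (recovering structured event templates from raw lines) can be illustrated with a simple masking heuristic; production parsers such as Drain are considerably more elaborate, and the patterns and sample lines below are invented for the sketch:

```python
import re
from collections import Counter

# Variable-token patterns, applied most-specific first
VAR_PATTERNS = [
    (re.compile(r"\b\d+\.\d+\.\d+\.\d+\b"), "<IP>"),
    (re.compile(r"\b0x[0-9a-fA-F]+\b"), "<HEX>"),
    (re.compile(r"\b\d+\b"), "<NUM>"),
]

def to_template(line):
    """Replace variable tokens with placeholders to recover the event template."""
    for pat, token in VAR_PATTERNS:
        line = pat.sub(token, line)
    return line

logs = [
    "Connection from 10.0.0.5 port 22",
    "Connection from 10.0.0.9 port 443",
    "Worker 7 finished job 1042 in 35 ms",
    "Worker 3 finished job 1077 in 12 ms",
]
templates = Counter(to_template(line) for line in logs)
# Four raw lines collapse to two event templates, each seen twice
```

Once lines are grouped by template, downstream tasks such as anomaly detection can work on structured event sequences rather than free text.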
Internet of Things (IoT) based applications face an increasing number of potential security risks, which need to be systematically assessed and addressed. Expert-based manual assessment of IoT security is a predominant approach, which is usually inefficient. To address this problem, we propose an automated security assessment framework for IoT networks. Our framework first leverages machine learning and natural language processing to analyze vulnerability descriptions for predicting vulnerability metrics. The predicted metrics are then input into a two-layered graphical security model, which consists of an attack graph at the upper layer to present the network connectivity and an attack tree for each node in the network at the bottom layer to depict the vulnerability information. This security model automatically assesses the security of the IoT network by capturing potential attack paths. We evaluate the viability of our approach using a proof-of-concept smart building system model which contains a variety of real-world IoT devices and potential vulnerabilities. Our evaluation of the proposed framework demonstrates its effectiveness in terms of automatically predicting the vulnerability metrics of new vulnerabilities with more than 90% accuracy, on average, and identifying the most vulnerable attack paths within an IoT network. The produced assessment results can serve as a guideline for cybersecurity professionals to take further actions and mitigate risks in a timely manner.
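The upper-layer path analysis can be sketched as attack-path enumeration over a toy reachability graph, with hypothetical per-node exploit likelihoods standing in for the predicted vulnerability metrics (the topology, node names, and scores are all invented):

```python
# Illustrative IoT attack graph: edges encode network reachability
edges = {
    "internet": ["gateway"],
    "gateway": ["camera", "thermostat"],
    "camera": ["server"],
    "thermostat": ["server"],
    "server": [],
}
# Hypothetical exploit likelihoods per device (e.g. mapped from predicted metrics)
exploit_prob = {"gateway": 0.9, "camera": 0.8, "thermostat": 0.3, "server": 0.6}

def attack_paths(src, dst, path=None):
    """Enumerate all simple attack paths from src to dst via depth-first search."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in edges.get(src, []):
        if nxt not in path:                       # keep paths simple (no cycles)
            yield from attack_paths(nxt, dst, path)

def path_risk(path):
    """Probability that every hop past the entry point is exploited (independence assumed)."""
    risk = 1.0
    for node in path[1:]:
        risk *= exploit_prob[node]
    return risk

paths = list(attack_paths("internet", "server"))
worst = max(paths, key=path_risk)                 # most vulnerable attack path
```

Ranking paths by risk is what lets the assessment surface the most vulnerable route through the network; a real system would derive the probabilities from the per-node attack trees rather than assume them.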