
QuasarNET: Human-level spectral classification and redshifting with Deep Neural Networks

Published by Nicolas Busca
Publication date: 2018
Research field: Physics
Paper language: English





We introduce QuasarNET, a deep convolutional neural network that performs classification and redshift estimation of astrophysical spectra with human-expert accuracy. We pose these two tasks as a feature-detection problem: the presence or absence of spectral features determines the class, and their wavelength determines the redshift, much as human experts proceed. When run on BOSS data to identify quasars through their emission lines, QuasarNET defines a sample that is 99.51 ± 0.03% pure and 99.52 ± 0.03% complete, well above the requirements of many analyses using these data. QuasarNET reduces the line confusion that induces catastrophic redshift failures to below 0.2%. We also extend QuasarNET to classify spectra with broad absorption line (BAL) features, achieving an accuracy of 98.0 ± 0.4% for recognizing BAL quasars and 97.0 ± 0.2% for rejecting non-BAL quasars. QuasarNET is trained on data of low signal-to-noise ratio and medium resolution, typical of current and future astrophysical surveys, and could easily be applied to classify spectra from current and upcoming surveys such as eBOSS, DESI and 4MOST.
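The feature-detection framing above can be illustrated with a minimal numpy sketch: given per-line detections (which the paper obtains from a CNN, omitted here), the class follows from whether enough lines are confidently present, and the redshift from z = λ_obs/λ_rest − 1. The function names, the confidence threshold, and the averaging of per-line redshifts are illustrative assumptions, not QuasarNET's actual post-processing.

```python
import numpy as np

# Rest-frame wavelengths (angstrom) of common quasar emission lines.
REST_LINES = {"Lya": 1215.67, "CIV": 1549.48, "CIII": 1908.73, "MgII": 2798.75}

def redshift_from_detection(line_name, observed_wavelength):
    """Convert a detected line's observed wavelength into a redshift,
    z = lambda_obs / lambda_rest - 1, as in the feature-detection framing."""
    return observed_wavelength / REST_LINES[line_name] - 1.0

def classify(detections, confidence_threshold=0.5, min_lines=1):
    """Toy decision rule (assumed, not the paper's): call the spectrum a
    quasar if enough lines are confidently detected, and average the
    per-line redshifts of the confident detections."""
    confident = [(name, wl) for name, conf, wl in detections
                 if conf >= confidence_threshold]
    if len(confident) < min_lines:
        return "not-quasar", None
    z = float(np.mean([redshift_from_detection(n, wl) for n, wl in confident]))
    return "quasar", z
```

For example, a confident Lyman-alpha detection at 2431.34 Å implies z = 1.0.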




Read also

We describe the redmonster automated redshift measurement and spectral classification software designed for the extended Baryon Oscillation Spectroscopic Survey (eBOSS) of the Sloan Digital Sky Survey IV (SDSS-IV). We describe the algorithms, the template standard and requirements, and the newly developed galaxy templates to be used on eBOSS spectra. We present results from testing on early data from eBOSS, where we have found a 90.5% automated redshift and spectral classification success rate for the luminous red galaxy sample (redshifts 0.6 ≲ z ≲ 1.0). The redmonster performance meets the eBOSS cosmology requirements for redshift classification and catastrophic failures, and represents a significant improvement over the previous pipeline. We describe the empirical processes used to determine the optimum number of additive polynomial terms in our models and an acceptable Δχ²_r threshold for declaring statistical confidence. Statistical errors on redshift measurement due to photon shot noise are assessed, and we find typical values of a few tens of km s⁻¹. An investigation of redshift differences in repeat observations, scaled by error estimates, yields a distribution with a Gaussian mean and standard deviation of μ ≈ 0.01 and σ ≈ 0.65, respectively, suggesting the reported statistical redshift uncertainties are over-estimated by ~54%. We assess the effects of object magnitude, signal-to-noise ratio, fiber number, and fiber head location on the pipeline's redshift success rate. Finally, we describe directions of ongoing development.
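The core of template-based redshift pipelines like redmonster is a χ² scan: shift a template over trial redshifts, fit its amplitude analytically, and keep the minimum. The sketch below shows this idea with a single one-component template and inverse-variance weights; redmonster itself uses multi-component templates plus additive polynomials, which this simplified version omits.

```python
import numpy as np

def redshift_scan(wavelength, flux, ivar, template_wave, template_flux, z_grid):
    """Brute-force chi^2 scan over trial redshifts: redshift the template,
    fit its amplitude analytically, and return the best-fitting z.
    Simplified one-template version (no polynomial terms)."""
    chi2 = np.empty(len(z_grid))
    for i, z in enumerate(z_grid):
        # Template resampled onto the observed wavelength grid at trial z.
        model = np.interp(wavelength, template_wave * (1.0 + z), template_flux)
        # Amplitude minimising sum(ivar * (flux - a*model)^2) in closed form.
        a = np.sum(ivar * flux * model) / np.sum(ivar * model * model)
        chi2[i] = np.sum(ivar * (flux - a * model) ** 2)
    best = int(np.argmin(chi2))
    return z_grid[best], chi2
```

In redmonster the difference Δχ²_r between the best and second-best minima is what sets statistical confidence in the result.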
Supernovae Type-Ia (SNeIa) play a significant role in exploring the history of the expansion of the Universe, since they are the best-known standard candles with which we can accurately measure the distance to the objects. Finding large samples of SNeIa and investigating their detailed characteristics have become an important issue in cosmology and astronomy. Existing methods relied on a photometric approach that first measures the luminance of supernova candidates precisely and then fits the results to a parametric function of temporal changes in luminance. However, it inevitably requires multi-epoch observations and complex luminance measurements. In this work, we present a novel method for classifying SNeIa simply from single-epoch observation images without any complex measurements, by effectively integrating the state-of-the-art computer vision methodology into the standard photometric approach. Our method first builds a convolutional neural network for estimating the luminance of supernovae from telescope images, and then constructs another neural network for the classification, where the estimated luminance and observation dates are used as features for classification. Both of the neural networks are integrated into a single deep neural network to classify SNeIa directly from observation images. Experimental results show the effectiveness of the proposed method and reveal classification performance comparable to existing photometric methods with multi-epoch observations.
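The two-stage design above (a CNN that estimates luminance, then a classifier over luminance plus observation date) can be caricatured with the second stage alone. This is a hypothetical stand-in, not the paper's network: the image CNN is omitted, and the weights and bias are placeholders for trained parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def snia_probability(estimated_luminance, obs_day, weights, bias):
    """Second-stage classifier sketch: combine the (CNN-estimated) luminance
    with the observation date into a SNIa probability via a logistic unit.
    weights/bias are hypothetical trained parameters."""
    features = np.array([estimated_luminance, obs_day])
    return float(sigmoid(features @ weights + bias))
```

In the actual method both stages are trained jointly as one deep network, so the luminance feature is learned end-to-end rather than handed over explicitly.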
Andrew S. Leung, 2015
We present a Bayesian approach to the redshift classification of emission-line galaxies when only a single emission line is detected spectroscopically. We consider the case of surveys for high-redshift Lyman-alpha-emitting galaxies (LAEs), which have traditionally been classified via an inferred rest-frame equivalent width (EW) greater than 20 angstrom. Our Bayesian method relies on known prior probabilities in measured emission-line luminosity functions and equivalent width distributions for the galaxy populations, and returns the probability that an object in question is an LAE given the characteristics observed. This approach will be directly relevant for the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX), which seeks to classify ~10^6 emission-line galaxies into LAEs and low-redshift [O II] emitters. For a simulated HETDEX catalog with realistic measurement noise, our Bayesian method recovers 86% of LAEs missed by the traditional EW > 20 angstrom cutoff over 2 < z < 3, outperforming the EW cut in both contamination and incompleteness. This is due to the method's ability to trade off between the two types of binary classification error by adjusting the stringency of the probability requirement for classifying an observed object as an LAE. In our simulations of HETDEX, this method reduces the uncertainty in cosmological distance measurements by 14% with respect to the EW cut, equivalent to recovering 29% more cosmological information. Rather than using binary object labels, this method enables the use of classification probabilities in large-scale structure analyses. It can be applied to narrowband emission-line surveys as well as upcoming large spectroscopic surveys including Euclid and WFIRST.
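The Bayesian decision above is, at its core, a two-class posterior: the likelihood of the observed line properties under each population (from the luminosity-function and EW-distribution priors) is weighted by that population's prior abundance. A minimal sketch, with the likelihoods assumed to be precomputed:

```python
def p_lae(likelihood_lae, likelihood_oii, prior_lae, prior_oii):
    """Posterior probability that a single-line detection is an LAE rather
    than a low-redshift [O II] emitter, via Bayes' theorem. The likelihoods
    are assumed to come from luminosity-function / EW-distribution priors
    evaluated at the observed line flux and wavelength."""
    numerator = likelihood_lae * prior_lae
    return numerator / (numerator + likelihood_oii * prior_oii)
```

The tunable classification threshold on this probability is what lets the method trade contamination against incompleteness, instead of the fixed EW > 20 Å cut.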
The Galaxy And Mass Assembly (GAMA) survey has obtained spectra of over 230000 targets using the Anglo-Australian Telescope. To homogenise the redshift measurements and improve the reliability, a fully automatic redshift code was developed (autoz). The measurements were made using a cross-correlation method for both absorption-line and emission-line spectra. Large deviations in the high-pass filtered spectra are partially clipped in order to be robust against uncorrected artefacts and to reduce the weight given to single-line matches. A single figure of merit (FOM) was developed that puts all template matches onto a similar confidence scale. The redshift confidence as a function of the FOM was fitted with a tanh function using a maximum likelihood method applied to repeat observations of targets. The method could be adapted to provide robust automatic redshifts for other large galaxy redshift surveys. For the GAMA survey, there was a substantial improvement in the reliability of assigned redshifts and in the lowering of redshift uncertainties with a median velocity uncertainty of 33 km/s.
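Cross-correlation redshifting of the kind autoz uses exploits the fact that on a uniform log-wavelength grid, a redshift is a pure translation: the lag of the correlation peak between spectrum and template gives log10(1+z). The sketch below shows only this core step, without autoz's high-pass filtering, clipping, or FOM machinery.

```python
import numpy as np

def xcorr_redshift(loglam, flux, template_flux, max_shift=200):
    """Cross-correlate a spectrum and template defined on the same uniform
    log10-wavelength grid; the lag of the correlation peak gives
    log10(1 + z). Positive lags only (template bluer than spectrum)."""
    dloglam = loglam[1] - loglam[0]
    f = flux - flux.mean()
    t = template_flux - template_flux.mean()
    lags = np.arange(0, max_shift + 1)
    cc = np.array([np.sum(f[lag:] * t[:len(t) - lag]) for lag in lags])
    best = lags[np.argmax(cc)]
    return 10 ** (best * dloglam) - 1.0
```

A production version would normalise each overlap, search negative lags too, and interpolate around the peak for sub-pixel precision.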
We present a novel method of classifying Type Ia supernovae using convolutional neural networks, a neural network framework typically used for image recognition. Our model is trained on photometric information only, eliminating the need for accurate redshift data. Photometric data is pre-processed via 2D Gaussian process regression into two-dimensional images created from flux values at each location in wavelength-time space. These flux heatmaps of each supernova detection, along with uncertainty heatmaps of the Gaussian process uncertainty, constitute the dataset for our model. This preprocessing step not only smooths over irregular sampling rates between filters but also allows SCONE to be independent of the filter set on which it was trained. Our model has achieved impressive performance without redshift on the in-distribution SNIa classification problem: 99.73 ± 0.26% test accuracy with no over/underfitting on a subset of supernovae from PLAsTiCC's unblinded test dataset. We have also achieved 98.18 ± 0.3% test accuracy performing 6-way classification of supernovae by type. The out-of-distribution performance does not fully match the in-distribution results, suggesting that the detailed characteristics of the training sample relative to the test sample strongly affect performance. We discuss the implications and directions for future work. All of the data processing and model code developed for this paper can be found in the SCONE software package located at github.com/helenqu/scone.
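The Gaussian-process preprocessing described above turns irregularly sampled light curves into regular grids. A minimal numpy sketch of the 1D case (one wavelength row of the flux heatmap, RBF kernel, fixed hyperparameters) illustrates the mechanism; SCONE itself fits a 2D kernel over wavelength-time space, which this omits.

```python
import numpy as np

def gp_predict(x_train, y_train, x_grid, length_scale=10.0, noise=0.1):
    """Posterior mean of a Gaussian process with an RBF kernel: interpolate
    irregularly sampled fluxes onto a regular grid. The predictive variance
    (omitted here) would fill the corresponding uncertainty heatmap."""
    def rbf(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale ** 2)
    # Kernel on training inputs, regularised by the observation noise.
    K = rbf(x_train, x_train) + noise ** 2 * np.eye(len(x_train))
    # Cross-kernel between grid points and training inputs.
    K_star = rbf(x_grid, x_train)
    return K_star @ np.linalg.solve(K, y_train)
```

Stacking one such row per wavelength (or fitting the full 2D kernel) yields the flux heatmap that the CNN consumes.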