
SNIascore: Deep Learning Classification of Low-Resolution Supernova Spectra

Published by: Dr. Christoffer Fremling
Publication date: 2021
Research field: Physics
Paper language: English





We present SNIascore, a deep-learning based method for spectroscopic classification of thermonuclear supernovae (SNe Ia) based on very low-resolution ($R \sim 100$) data. The goal of SNIascore is fully automated classification of SNe Ia with a very low false-positive rate (FPR) so that human intervention can be greatly reduced in large-scale SN classification efforts, such as that undertaken by the public Zwicky Transient Facility (ZTF) Bright Transient Survey (BTS). We utilize a recurrent neural network (RNN) architecture with a combination of bidirectional long short-term memory and gated recurrent unit layers. SNIascore achieves a $<0.6\%$ FPR while classifying up to $90\%$ of the low-resolution SN Ia spectra obtained by the BTS. SNIascore simultaneously performs binary classification and predicts the redshifts of secure SNe Ia via regression (with a typical uncertainty of $<0.005$ in the range from $z = 0.01$ to $z = 0.12$). For the magnitude-limited ZTF BTS survey ($\approx 70\%$ SNe Ia), deploying SNIascore reduces the number of spectra in need of human classification or confirmation by $\approx 60\%$. Furthermore, SNIascore allows SN Ia classifications to be automatically announced to the public in real time, immediately after an observation is completed during the night.
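The core recurrent machinery described above can be illustrated with a minimal, untrained sketch: a single gated recurrent unit (GRU) cell scanning flux bins of a spectrum, with two output heads for the SN Ia score and the redshift regression. All weights here are random placeholders; the real SNIascore additionally uses bidirectional LSTM layers and is trained on BTS spectra.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One GRU step; W, U, b each stack the update (z), reset (r)
    and candidate parameters along axis 0."""
    Wz, Wr, Wh = W
    Uz, Ur, Uh = U
    bz, br, bh = b
    z = sigmoid(Wz @ x + Uz @ h + bz)              # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)              # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h) + bh)   # candidate state
    return (1.0 - z) * h + z * h_cand

# Toy "spectrum": 256 flux bins fed one bin at a time.
n_bins, d_in, d_hid = 256, 1, 16
W = rng.normal(0, 0.1, (3, d_hid, d_in))
U = rng.normal(0, 0.1, (3, d_hid, d_hid))
b = np.zeros((3, d_hid))

flux = rng.normal(0, 1, n_bins)
h = np.zeros(d_hid)
for f in flux:
    h = gru_step(np.array([f]), h, W, U, b)

# Two hypothetical output heads on the final hidden state:
w_cls = rng.normal(0, 0.1, d_hid)   # SN Ia classification head
w_z = rng.normal(0, 0.1, d_hid)     # redshift regression head
snia_score = sigmoid(w_cls @ h)     # score in (0, 1)
redshift = float(w_z @ h)           # unconstrained regression output
```

In the trained model the classification head would be thresholded to meet the target false-positive rate before a classification is announced.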


Read also

Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques fitting parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
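The ROC AUC metric quoted above can be computed directly from classifier scores without tracing the full ROC curve, using the rank-sum (Mann-Whitney U) identity: the AUC equals the probability that a randomly chosen positive outscores a randomly chosen negative, with ties counted as one half. A minimal sketch:

```python
import numpy as np

def roc_auc(scores, labels):
    """ROC AUC via the rank-sum identity:
    AUC = P(score_pos > score_neg), ties counted as 1/2."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    # tied scores share their average rank
    for s in np.unique(scores):
        tie = scores == s
        ranks[tie] = ranks[tie].mean()
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    u = ranks[labels].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)
```

Perfect separation of the two classes gives an AUC of 1.0, a score-blind classifier gives 0.5, which is why 0.98 indicates near-perfect light-curve-only classification.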
Recent rapid development of deep learning algorithms, which can implicitly capture structures in high-dimensional data, opens a new chapter in astronomical data analysis. We report here a new implementation of deep learning techniques for X-ray analysis. We apply a variational autoencoder (VAE) using a deep neural network for spatio-spectral analysis of data obtained by Chandra X-ray Observatory from Tycho's supernova remnant (SNR). We established an unsupervised learning method combining the VAE and a Gaussian mixture model (GMM), where the dimensions of the observed spectral data are reduced by the VAE, and clustering in feature space is performed by the GMM. We found that some characteristic spatial structures, such as the iron knot on the eastern rim, can be automatically recognised by this method, which uses only spectral properties. This result shows that unsupervised machine learning can be useful for extracting characteristic spatial structures from spectral information in observational data (without detailed spectral analysis), which would reduce human-intensive preprocessing costs for understanding fine structures in diffuse astronomical objects, e.g., SNRs or clusters of galaxies. Such data-driven analysis can be used to select regions from which to extract spectra for detailed analysis and help us make the best use of the large amount of spectral data available currently and arriving in the coming decades.
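The two-stage pattern above (reduce spectra to a low-dimensional feature space, then cluster there) can be sketched on synthetic data. For brevity this sketch substitutes PCA for the VAE encoder and hard k-means assignment for the GMM; the data, component counts, and noise levels are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for per-region X-ray spectra: two populations
# with different spectral shapes across 50 energy channels.
base_a = np.exp(-np.linspace(0, 4, 50))
base_b = np.exp(-np.linspace(4, 0, 50))
spectra = np.vstack([
    base_a + rng.normal(0, 0.05, (100, 50)),
    base_b + rng.normal(0, 0.05, (100, 50)),
])

# Stage 1: dimensionality reduction (PCA via SVD stands in for the VAE).
centered = spectra - spectra.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
features = centered @ vt[:2].T          # 2-D latent space

# Stage 2: clustering in latent space (k-means stands in for the GMM).
centers = features[[0, -1]].copy()      # one seed from each end
for _ in range(20):
    dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
    assign = dists.argmin(axis=1)
    centers = np.array([features[assign == k].mean(axis=0)
                        for k in range(2)])
```

A full GMM would additionally yield soft membership probabilities per region, which is what lets spectrally ambiguous structures be flagged for detailed follow-up analysis.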
The full third Gaia data release will provide the calibrated spectra obtained with the blue and red Gaia slit-less spectrophotometers. The main challenge when facing Gaia spectral calibration is that no lamp spectra or flat fields are available during the mission. Also, the significant size of the line spread function with respect to the dispersion of the prisms produces alien photons contaminating neighbouring positions of the spectra. This makes the calibration special and different from standard approaches. This work gives a detailed description of the internal calibration model to obtain the spectrophotometric data in the Gaia catalogue. The main purpose of the internal calibration is to bring all the epoch spectra onto a common flux and pixel (pseudo-wavelength) scale, taking into account variations over the focal plane and with time, producing a mean spectrum from all the observations of the same source. In order to describe all observations in a common mean flux and pseudo-wavelength scale, we construct a suitable representation of the internally calibrated mean spectra via basis functions and we describe the transformation between non-calibrated epoch spectra and calibrated mean spectra via a discrete convolution, parametrising the convolution kernel to recover the relevant coefficients. The model proposed here is able to combine all observations into a mean instrument to allow the comparison of different sources and observations obtained with different instrumental conditions along the mission and the generation of mean spectra from a number of observations of the same source. The output of this model provides the internal mean spectra, not as a sampled function (flux and wavelength), but as a linear combination of basis functions, although sampled spectra can easily be derived from them.
We demonstrate the application of a convolutional neural network to the gravitational wave signals from core collapse supernovae. Using simulated time series of gravitational wave detectors, we show that based on the explosion mechanisms, a convolutional neural network can be used to detect and classify the gravitational wave signals buried in noise. For the waveforms used in the training of the convolutional neural network, our results suggest that a network of advanced LIGO, advanced VIRGO and KAGRA, or a network of LIGO A+, advanced VIRGO and KAGRA is likely to detect a magnetorotational core collapse supernova within the Large and Small Magellanic Clouds, or a Galactic event if the explosion mechanism is the neutrino-driven mechanism. By testing the convolutional neural network with waveforms not used for training, we show that the true alarm probabilities are 52% and 83% at 60 kpc for waveforms R3E1AC and R4E1FC L. For waveforms s20 and SFHx at 10 kpc, the true alarm probabilities are 70% and 93% respectively. All at a false alarm probability of 10%.
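The "true alarm probability at a fixed false alarm probability" figure of merit quoted above can be computed from classifier scores alone: set the detection threshold at the appropriate quantile of the noise-only scores, then count the fraction of signal scores above it. A minimal sketch on invented Gaussian toy scores (the score distributions here are placeholders, not the paper's network outputs):

```python
import numpy as np

rng = np.random.default_rng(2)

def tap_at_fap(noise_scores, signal_scores, fap=0.10):
    """True-alarm probability at a fixed false-alarm probability:
    the threshold is the (1 - fap) quantile of the noise-only
    scores; return the fraction of signal scores exceeding it."""
    threshold = np.quantile(noise_scores, 1.0 - fap)
    return float(np.mean(signal_scores > threshold))

# Hypothetical detector scores: noise centred at 0, signals shifted up.
noise = rng.normal(0.0, 1.0, 10_000)
signal = rng.normal(2.0, 1.0, 10_000)
tap = tap_at_fap(noise, signal)   # roughly 0.76 for this separation
```

Raising the signal-to-noise separation (e.g. a closer source) moves the signal distribution further from the threshold and drives the true alarm probability toward 1.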
The advancement of technology has resulted in a rapid increase in supernova (SN) discoveries. The Subaru/Hyper Suprime-Cam (HSC) transient survey, conducted from fall 2016 through spring 2017, yielded 1824 SN candidates. This gave rise to the need for fast type classification for spectroscopic follow-up and prompted us to develop a machine learning algorithm using a deep neural network (DNN) with highway layers. This machine is trained by actual observed cadence and filter combinations such that we can directly input the observed data array into the machine without any interpretation. We tested our model with a dataset from the LSST classification challenge (Deep Drilling Field). Our classifier scores an area under the curve (AUC) of 0.996 for binary classification (SN Ia or non-SN Ia) and 95.3% accuracy for three-class classification (SN Ia, SN Ibc, or SN II). Application of our binary classification to HSC transient data yields an AUC score of 0.925. With two weeks of HSC data since the first detection, this classifier achieves 78.1% accuracy for binary classification, and the accuracy increases to 84.2% with the full dataset. This paper discusses the potential use of machine learning for SN type classification purposes.
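The highway layers mentioned above gate each unit between a nonlinear transform and an identity shortcut, which eases training of deeper networks. A minimal, untrained numpy sketch of one such layer (weights and dimensions are illustrative, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(3)

def highway_layer(x, Wh, bh, Wt, bt):
    """Highway layer: y = T(x) * H(x) + (1 - T(x)) * x, where the
    transform gate T chooses, per unit, how much of the nonlinear
    transform H to mix in versus the identity path."""
    H = np.tanh(Wh @ x + bh)                      # transform branch
    T = 1.0 / (1.0 + np.exp(-(Wt @ x + bt)))      # gate in (0, 1)
    return T * H + (1.0 - T) * x

d = 8
x = rng.normal(0, 1, d)
Wh = rng.normal(0, 0.1, (d, d))
Wt = rng.normal(0, 0.1, (d, d))

# A strongly negative gate bias (a common initialisation) keeps the
# layer close to the identity early in training:
y = highway_layer(x, Wh, np.zeros(d), Wt, np.full(d, -4.0))
```

With the gate biased shut, gradients flow through the identity path, so stacking many such layers stays trainable.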