
Classification of Planetary Nebulae through Deep Transfer Learning

Posted by: Albert Zijlstra
Publication date: 2021
Research field: Physics
Paper language: English





This study investigates the effectiveness of using Deep Learning (DL) for the classification of planetary nebulae (PNe). It focuses on distinguishing PNe from other types of objects, as well as on their morphological classification. We adopted a deep transfer learning approach using three ImageNet pre-trained algorithms. The study was conducted using images from the Hong Kong/Australian Astronomical Observatory/Strasbourg Observatory H-alpha Planetary Nebula research platform database (HASH DB) and the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS). We found that the algorithms are highly successful in distinguishing True PNe from other types of objects even without any parameter tuning, reaching a Matthews correlation coefficient of 0.9. Our analysis shows that DenseNet201 is the most effective DL algorithm. For the morphological classification into three classes, Bipolar, Elliptical and Round, we found that half of the objects are correctly classified. Further improvement may require more data and/or training. We discuss the trade-offs and potential avenues for future work and conclude that deep transfer learning can be utilized to classify wide-field astronomical images.
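As an illustration of the approach described above, the following is a minimal sketch (not the authors' code) of ImageNet-based transfer learning with DenseNet201 in Keras; the input size, the two-class setup, and the dataset pipeline are assumptions made for the example.

```python
# Minimal transfer-learning sketch (not the authors' code): DenseNet201
# pre-trained on ImageNet, with a new classification head for PN images.
# Image size, number of classes, and the data pipeline are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet201

NUM_CLASSES = 2            # assumed: True PN vs. other object
IMG_SHAPE = (224, 224, 3)  # assumed input size for the image cutouts

# Load the pre-trained backbone without its ImageNet classification head.
base = DenseNet201(weights="imagenet", include_top=False, input_shape=IMG_SHAPE)
base.trainable = False     # freeze the backbone; only the new head is trained

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds would be tf.data.Dataset objects built from HASH DB /
# Pan-STARRS cutouts (hypothetical loading step, not shown):
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```

A Matthews correlation coefficient like the one reported above could then be computed on held-out predictions with sklearn.metrics.matthews_corrcoef.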




Read also

Ongoing or upcoming surveys such as Gaia, ZTF, or LSST will observe light curves of a billion or more astronomical sources. This presents new challenges for identifying interesting and important types of variability. Collecting sufficient labelled data for training is difficult, however, especially in the early stages of a new survey. Here we develop a single-band light-curve classifier based on deep neural networks, and use transfer learning to address the training data paucity problem by conveying knowledge from one dataset to another. First we train a neural network on 16 variability features extracted from the light curves of OGLE and EROS-2 variables. We then optimize this model using a small set (e.g. 5%) of periodic variable light curves from the ASAS dataset in order to transfer knowledge inferred from OGLE/EROS-2 to a new ASAS classifier. With this we achieve good classification results on ASAS, thereby showing that knowledge can be successfully transferred between datasets. We demonstrate similar transfer learning using Hipparcos and ASAS-SN data. We therefore find that it is not necessary to train a neural network from scratch for every new survey; rather, transfer learning can be used even when only a small set of labelled data is available in the new survey.
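A minimal sketch of the feature-based transfer scheme described above, assuming a Keras setup: a network is first trained on 16 light-curve features from the source surveys, and its learned layers are then reused and fine-tuned on a small labelled subset from the target survey. Class counts and the data arrays are placeholders.

```python
# Sketch of feature-based transfer learning between surveys (assumptions noted):
# train on source-survey features, then fine-tune the shared layers on a small
# labelled subset from the target survey.
import tensorflow as tf
from tensorflow.keras import layers, models

N_FEATURES = 16           # variability features per light curve, per the abstract
N_SOURCE_CLASSES = 10     # assumed number of OGLE/EROS-2 variability classes
N_TARGET_CLASSES = 6      # assumed number of ASAS classes

# Shared feature extractor whose weights are reused across surveys.
shared = models.Sequential([
    layers.Input(shape=(N_FEATURES,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
])

# 1) Train on the source surveys (X_source, y_source are placeholders).
source_model = models.Sequential([shared, layers.Dense(N_SOURCE_CLASSES, activation="softmax")])
source_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# source_model.fit(X_source, y_source, epochs=50)

# 2) Transfer: attach a new head and fine-tune on the small (~5%) target subset.
target_model = models.Sequential([shared, layers.Dense(N_TARGET_CLASSES, activation="softmax")])
target_model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                     loss="sparse_categorical_crossentropy")
# target_model.fit(X_target_small, y_target_small, epochs=20)
```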
In real-world applications, it is often expensive and time-consuming to obtain labeled examples. In such cases, knowledge transfer from related domains, where labels are abundant, could greatly reduce the need for extensive labeling efforts. In this scenario, transfer learning comes in handy. In this paper, we propose Deep Variational Transfer (DVT), a variational autoencoder that transfers knowledge across domains using a shared latent Gaussian mixture model. Thanks to the combination of a semi-supervised ELBO and parameter sharing across domains, we are able to simultaneously: (i) align all supervised examples of the same class into the same latent Gaussian mixture component, independently of their domain; (ii) predict the class of unsupervised examples from different domains and use them to better model the occurring shifts. We perform tests on the MNIST and USPS digit datasets, showing DVT's ability to perform transfer learning across heterogeneous datasets. Additionally, we present DVT's top classification performance on the MNIST semi-supervised learning challenge. We further validate DVT on astronomical datasets, where it achieves state-of-the-art classification performance, transferring knowledge across the real star survey datasets EROS, MACHO and HiTS. In the worst case, we double the achieved F1-score for rare classes. These experiments show DVT's ability to tackle all major challenges posed by transfer learning: different covariate distributions, different and highly imbalanced class distributions, and different feature spaces.
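For context, a generic semi-supervised ELBO of the kind referenced above can be written as follows for a labelled example $(x, y)$ and an unlabelled example $x$; this is the standard semi-supervised VAE form, and DVT's actual objective, with its shared latent Gaussian mixture across domains, may differ in detail.

```latex
% Labelled example (x, y):
\log p_\theta(x, y) \;\geq\;
  \mathbb{E}_{q_\phi(z \mid x, y)}\!\bigl[\log p_\theta(x \mid y, z)
  + \log p(y) + \log p(z) - \log q_\phi(z \mid x, y)\bigr]
  \;=\; -\mathcal{L}(x, y)

% Unlabelled example x: marginalize over the classifier q_\phi(y \mid x)
\log p_\theta(x) \;\geq\;
  \sum_{y} q_\phi(y \mid x)\,\bigl(-\mathcal{L}(x, y)\bigr)
  + \mathcal{H}\!\bigl[q_\phi(y \mid x)\bigr]
```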
We develop an approximate analytical solution for the transfer of line-averaged radiation in the hydrogen recombination lines for the ionized cavity and molecular shell of a spherically symmetric planetary nebula. The scattering problem is treated as a perturbation, using a mean intensity derived from a scattering-free solution. The analytical function was fitted to H-alpha and H-beta data from the planetary nebula NGC 6537. The position of the maximum in the intensity profile produced consistent values for the radius of the cavity as a fraction of the radius of the dusty nebula: 0.21 for H-alpha and 0.20 for H-beta. Recovered optical depths were broadly consistent with the observed optical extinction in the nebula, but the range of fit parameters in this case is evidence for a clumpy distribution of dust.
A. Manchado 2000
We present a statistical analysis of a complete sample (255) of northern planetary nebulae (PNe). Our analysis is based on morphology as the main parameter. The major morphological classes are: round (26% of the sample), elliptical (61%), and bipolar (13%) PNe. About half of the round and 30% of the elliptical PNe present multiple shells. Round PNe have higher galactic latitudes (|b| = 12) and galactic heights (<z> = 753 pc) than the elliptical (|b| = 7, <z> = 308 pc) and bipolar (|b| = 3, <z> = 179 pc) PNe. This possibly implies a different progenitor mass range across morphologies, as a different stellar population would suggest.
We present SNIascore, a deep-learning based method for spectroscopic classification of thermonuclear supernovae (SNe Ia) based on very low-resolution (R $\sim 100$) data. The goal of SNIascore is fully automated classification of SNe Ia with a very low false-positive rate (FPR) so that human intervention can be greatly reduced in large-scale SN classification efforts, such as that undertaken by the public Zwicky Transient Facility (ZTF) Bright Transient Survey (BTS). We utilize a recurrent neural network (RNN) architecture with a combination of bidirectional long short-term memory and gated recurrent unit layers. SNIascore achieves a $<0.6\%$ FPR while classifying up to $90\%$ of the low-resolution SN Ia spectra obtained by the BTS. SNIascore simultaneously performs binary classification and predicts the redshifts of secure SNe Ia via regression (with a typical uncertainty of $<0.005$ in the range from $z = 0.01$ to $z = 0.12$). For the magnitude-limited ZTF BTS survey ($\approx 70\%$ SNe Ia), deploying SNIascore reduces the number of spectra in need of human classification or confirmation by $\approx 60\%$. Furthermore, SNIascore allows SN Ia classifications to be automatically announced in real time to the public immediately following a finished observation during the night.
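A minimal sketch, under assumed layer sizes and input length, of an RNN of the kind described above: bidirectional LSTM and GRU layers feeding two heads, one for the binary SN Ia score and one for redshift regression. This is not the SNIascore implementation, only an illustration of the architecture family.

```python
# Sketch (assumptions noted) of a bidirectional LSTM + GRU network with a
# binary-classification head and a redshift-regression head.
import tensorflow as tf
from tensorflow.keras import layers, Model

SPEC_LEN = 256  # assumed number of resampled flux bins per low-resolution spectrum

inp = layers.Input(shape=(SPEC_LEN, 1))                     # spectrum as a 1-D sequence
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(inp)
x = layers.Bidirectional(layers.GRU(64))(x)
x = layers.Dense(64, activation="relu")(x)

ia_score = layers.Dense(1, activation="sigmoid", name="ia_score")(x)  # SN Ia probability
redshift = layers.Dense(1, activation="linear", name="redshift")(x)   # redshift regression

model = Model(inp, [ia_score, redshift])
model.compile(optimizer="adam",
              loss={"ia_score": "binary_crossentropy", "redshift": "mse"},
              loss_weights={"ia_score": 1.0, "redshift": 0.5})
# model.fit(spectra, {"ia_score": labels, "redshift": z_values}, epochs=30)
```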