
Deep Transfer Learning for Classification of Variable Sources

Added by Dae-Won Kim
Publication date: 2021
Field: Physics
Language: English





Ongoing or upcoming surveys such as Gaia, ZTF, or LSST will observe the light-curves of billions of astronomical sources. This presents new challenges for identifying interesting and important types of variability. Collecting a sufficient amount of labelled data for training is difficult, however, especially in the early stages of a new survey. Here we develop a single-band light-curve classifier based on deep neural networks, and use transfer learning to address the training-data paucity problem by conveying knowledge from one dataset to another. First we train a neural network on 16 variability features extracted from the light-curves of OGLE and EROS-2 variables. We then optimize this model using a small set (e.g. 5%) of periodic variable light-curves from the ASAS dataset in order to transfer knowledge inferred from OGLE/EROS-2 to a new ASAS classifier. With this we achieve good classification results on ASAS, thereby showing that knowledge can be successfully transferred between datasets. We demonstrate similar transfer learning using Hipparcos and ASAS-SN data. We therefore find that it is not necessary to train a neural network from scratch for every new survey, but rather that transfer learning can be used even when only a small set of labelled data is available in the new survey.
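The fine-tuning strategy described in the abstract can be sketched in a few lines: pre-train a small network on abundant source-survey data, then freeze the learned feature layer and retrain only the output layer on a handful of target-survey examples. The sketch below uses synthetic stand-ins for the 16 variability features and a minimal NumPy network; it illustrates the transfer-learning idea only, and is not the authors' actual architecture or data.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = rng.normal(size=16)  # hidden 'ground truth' shared by both surveys

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def make_data(n, shift=0.0):
    """Synthetic stand-in for 16 variability features plus binary labels."""
    X = rng.normal(size=(n, 16)) + shift
    y = (X @ w_true > 0).astype(float)
    return X, y

def train(X, y, W1, w2, lr=0.1, epochs=300, freeze_hidden=False):
    """One tanh hidden layer + logistic output, full-batch gradient descent."""
    for _ in range(epochs):
        H = np.tanh(X @ W1)          # hidden representation
        g = sigmoid(H @ w2) - y      # d(cross-entropy)/d(logit)
        if not freeze_hidden:
            gH = np.outer(g, w2) * (1.0 - H**2)  # backprop through tanh
            W1 -= lr * X.T @ gH / len(y)
        w2 -= lr * H.T @ g / len(y)
    return W1, w2

# 1) Pre-train on a large labelled set from the 'source' survey
Xs, ys = make_data(2000)
W1 = rng.normal(scale=0.5, size=(16, 8))
w2 = rng.normal(scale=0.5, size=8)
W1, w2 = train(Xs, ys, W1, w2)

# 2) Fine-tune only the output layer on a small 'target' survey set
Xt, yt = make_data(100, shift=0.3)
W1, w2 = train(Xt, yt, W1, w2, epochs=100, freeze_hidden=True)

# 3) Evaluate on held-out target-survey data
Xe, ye = make_data(500, shift=0.3)
accuracy = np.mean((sigmoid(np.tanh(Xe @ W1) @ w2) > 0.5) == ye)
```

Because the hidden layer is frozen in step 2, only the small output-layer parameter vector has to be estimated from the scarce target-survey labels, which is the essence of the paper's approach.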



Related research

This study investigates the effectiveness of using Deep Learning (DL) for the classification of planetary nebulae (PNe). It focuses on distinguishing PNe from other types of objects, as well as on their morphological classification. We adopted the deep transfer learning approach using three ImageNet pre-trained algorithms. This study was conducted using images from the Hong Kong/Australian Astronomical Observatory/Strasbourg Observatory H-alpha Planetary Nebula research platform database (HASH DB) and the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS). We found that the algorithm has high success in distinguishing true PNe from other types of objects even without any parameter tuning. The Matthews correlation coefficient is 0.9. Our analysis shows that DenseNet201 is the most effective DL algorithm. For the morphological classification into three classes, Bipolar, Elliptical and Round, half of the objects are correctly classified. Further improvement may require more data and/or training. We discuss the trade-offs and potential avenues for future work and conclude that deep transfer learning can be utilized to classify wide-field astronomical images.
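The Matthews correlation coefficient quoted above summarises a binary confusion matrix in a single number between -1 and 1 and remains informative under class imbalance. A minimal implementation of the standard formula (not tied to this study's pipeline):

```python
import math

def matthews_cc(tp, fp, fn, tn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# A mostly-correct balanced classifier scores close to 0.9
print(matthews_cc(90, 5, 5, 90))  # ~ 0.89
```

A perfect classifier gives 1.0, random guessing gives about 0.0, and total disagreement gives -1.0.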
The accurate automated classification of variable stars into their respective sub-types is difficult. Machine learning based solutions often fall foul of the imbalanced learning problem, which causes poor generalisation performance in practice, especially on rare variable star sub-types. In previous work, we attempted to overcome such deficiencies via the development of a hierarchical machine learning classifier. This algorithm-level approach to tackling imbalance yielded promising results on Catalina Real-Time Survey (CRTS) data, outperforming the binary and multi-class classification schemes previously applied in this area. In this work, we attempt to further improve hierarchical classification performance by applying data-level approaches to directly augment the training data so that they better describe under-represented classes. We apply and report results for three data augmentation methods in particular: Randomly Augmented Sampled Light curves from magnitude Error (RASLE), augmenting light curves with Gaussian Process modelling (GpFit), and the Synthetic Minority Over-sampling Technique (SMOTE). When combining the algorithm-level (i.e. the hierarchical scheme) with the data-level approach, we further improve variable star classification accuracy by 1-4%. We found that a higher classification rate is obtained when using GpFit in the hierarchical model. Further improvement of the metric scores requires a better standard set of correctly identified variable stars and perhaps enhanced features.
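Of the three augmentation methods mentioned, SMOTE is the most generic: it creates new minority-class examples by interpolating between a real example and one of its nearest minority-class neighbours. A minimal NumPy version of that idea (illustrative only; production work would typically use a library such as imbalanced-learn):

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples by interpolating each
    chosen point toward one of its k nearest minority-class neighbours."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(X_min)
    # pairwise squared distances within the minority class
    d2 = ((X_min[:, None, :] - X_min[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                 # exclude self-matches
    nbrs = np.argsort(d2, axis=1)[:, :k]         # k nearest neighbours
    base = rng.integers(0, n, size=n_new)        # random seed points
    pick = nbrs[base, rng.integers(0, k, size=n_new)]
    lam = rng.random((n_new, 1))                 # interpolation fraction
    return X_min[base] + lam * (X_min[pick] - X_min[base])
```

Every synthetic point lies on a segment between two real minority-class points, so the augmented set stays inside the observed feature range while rebalancing the classes.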
Yanxia Zhang, Yongheng Zhao, 2021
ESA's X-ray Multi-Mirror Mission (XMM-Newton) has created a new, high-quality version of the XMM-Newton serendipitous source catalogue, 4XMM-DR9, which provides a wealth of information for the observed sources. The 4XMM-DR9 catalogue is correlated with the Sloan Digital Sky Survey (SDSS) DR12 photometric database and the ALLWISE database, yielding X-ray sources with information from the X-ray, optical and/or infrared bands: the XMM-WISE sample, the XMM-SDSS sample and the XMM-WISE-SDSS sample. Based on the large spectroscopic surveys of SDSS and the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST), we cross-match the XMM-WISE-SDSS sample with sources of known spectral classes, and obtain known samples of stars, galaxies and quasars. The distribution of stars, galaxies and quasars, as well as of all spectral classes of stars, in 2-d parameter spaces is presented. Various machine learning methods are applied to the different samples from different bands, and the best classification results are retained. For the sample from the X-ray band alone, a rotation forest classifier performs best. For the sample from the X-ray and infrared bands, a random forest algorithm outperforms all other methods. For the samples from the X-ray, optical and/or infrared bands, a LogitBoost classifier shows its superiority. Thus, all X-ray sources in the 4XMM-DR9 catalogue, with their different input patterns, are classified by the respective models built with these best-performing methods, and class memberships and membership probabilities are assigned to individual X-ray sources. The classified results will be of great value for further detailed study of X-ray sources.
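A recurring step in studies like this is the positional cross-match between catalogues. As an illustration only (these are not the matching criteria actually used for 4XMM-DR9), a brute-force nearest-neighbour match under a small-angle, flat-sky approximation can be written as:

```python
import numpy as np

def crossmatch(ra1, dec1, ra2, dec2, radius_arcsec=3.0):
    """For each source in catalogue 1, find the nearest catalogue-2 source
    and flag it as a match if it lies within radius_arcsec.
    Coordinates are in degrees; flat-sky approximation is adequate for
    arcsecond-scale tolerances away from the poles."""
    dra = (ra1[:, None] - ra2[None, :]) * np.cos(np.radians(dec1))[:, None]
    ddec = dec1[:, None] - dec2[None, :]
    sep = np.hypot(dra, ddec) * 3600.0           # degrees -> arcsec
    j = sep.argmin(axis=1)                        # nearest neighbour index
    ok = sep[np.arange(len(ra1)), j] <= radius_arcsec
    return j, ok
```

Real pipelines use proper spherical separations and spatial indexing (e.g. astropy's `match_to_catalog_sky`), but the matched/unmatched logic is the same.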
The need for the development of automatic tools to explore astronomical databases has been recognized since the inception of CCDs and modern computers. Astronomers have already developed solutions to tackle several science problems, such as automatic classification of stellar objects, outlier detection, and globular cluster identification, among others. New science problems emerge, and it is critical to be able to re-use the models learned before, without rebuilding everything from the beginning when the science problem changes. In this paper, we propose a new meta-model that automatically integrates existing classification models of variable stars. The proposed meta-model incorporates existing models that were trained in a different context, answering different questions and using different representations of the data. Conventional mixture-of-experts algorithms from the machine learning literature cannot be used, since each expert (model) uses different inputs. We also take the computational complexity of the model into account by invoking the most expensive models only when necessary. We test our model on the EROS-2 and MACHO datasets, and we show that we solve most of the classification challenges simply by training a meta-model to learn how to integrate the previous experts.
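The obstacle this meta-model addresses is that each expert sees different inputs, so it is the experts' outputs, not their inputs, that the meta-model can combine. A toy NumPy sketch of that idea, using hand-built stand-in 'experts' and a logistic meta-learner (the actual meta-model in the paper is more elaborate):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two 'experts' trained elsewhere on different representations of the data:
# each sees only its own feature subset, so a conventional mixture of
# experts over shared inputs is not applicable.
n = 1000
Xa = rng.normal(size=(n, 2))        # e.g. period/amplitude features
Xb = rng.normal(size=(n, 3))        # e.g. colour features
y = ((Xa[:, 0] + Xb[:, 1]) > 0).astype(float)   # true class

p_a = sigmoid(3 * Xa[:, 0])         # expert A: informative but partial view
p_b = sigmoid(3 * Xb[:, 1])         # expert B: informative but partial view

# Meta-model: logistic regression on the experts' output probabilities
F = np.column_stack([p_a, p_b, np.ones(n)])
w = np.zeros(3)
for _ in range(500):
    p = sigmoid(F @ w)
    w -= 0.5 * F.T @ (p - y) / n    # full-batch gradient descent on log-loss

acc_meta = np.mean((sigmoid(F @ w) > 0.5) == y)
acc_a = np.mean((p_a > 0.5) == y)   # single expert, for comparison
```

The meta-model never touches the experts' raw inputs, yet combining their probability outputs recovers accuracy that neither partial view can reach alone.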
The accuracy and robustness of image classification with supervised deep learning are dependent on the availability of large-scale, annotated training data. However, there is a paucity of annotated data available due to the complexity of manual annotation. To overcome this problem, a popular approach is to use transferable knowledge across different domains by: 1) using a generic feature extractor that has been pre-trained on large-scale general images (i.e., transfer-learned) but which is not suited to capturing the characteristics of medical images; or 2) fine-tuning generic knowledge with a relatively smaller number of annotated images. Our aim is to reduce the reliance on annotated training data by using a new hierarchical unsupervised feature extractor with a convolutional auto-encoder placed atop a pre-trained convolutional neural network. Our approach constrains the rich and generic image features from the pre-trained domain to a sophisticated representation of the local image characteristics from the unannotated medical image domain. Our approach has a higher classification accuracy than transfer-learned approaches and is competitive with state-of-the-art supervised fine-tuned methods.
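The hierarchical idea above, freezing a pre-trained network and training only an auto-encoder on its outputs, can be caricatured in NumPy. In this sketch the 'pre-trained CNN' is replaced by a frozen random ReLU projection and the auto-encoder is linear; the only point illustrated is that reconstruction training updates the new layers alone.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a pre-trained feature extractor: a fixed (frozen) random
# ReLU projection. It receives no gradient updates below.
W_pre = rng.normal(size=(64, 32))
X = rng.normal(size=(500, 64))                  # unannotated 'images'
F = np.maximum(X @ W_pre, 0.0)                  # frozen generic features

# Linear auto-encoder trained on top: encode 32 -> 8, decode 8 -> 32
We = rng.normal(scale=0.1, size=(32, 8))
Wd = rng.normal(scale=0.1, size=(8, 32))

def recon_err(F, We, Wd):
    """Mean squared reconstruction error of the auto-encoder."""
    return np.mean((F - (F @ We) @ Wd) ** 2)

err_before = recon_err(F, We, Wd)
lr = 1e-4
for _ in range(300):
    Z = F @ We                                  # encoded representation
    R = Z @ Wd - F                              # reconstruction residual
    Wd -= lr * Z.T @ R / len(F)                 # gradient step, decoder
    We -= lr * F.T @ (R @ Wd.T) / len(F)        # gradient step, encoder
err_after = recon_err(F, We, Wd)
```

Training drives the reconstruction error down, so the bottleneck code `Z` becomes an unsupervised, domain-adapted summary of the frozen generic features, which is the role the convolutional auto-encoder plays in the approach described above.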
