ESA's X-ray Multi-Mirror Mission (XMM-Newton) has produced a new, high-quality version of the XMM-Newton serendipitous source catalogue, 4XMM-DR9, which provides a wealth of information on the observed sources. We cross-correlate the 4XMM-DR9 catalogue with the Sloan Digital Sky Survey (SDSS) DR12 photometric database and the ALLWISE database to obtain X-ray sources with optical and/or infrared counterparts, yielding the XMM-WISE, XMM-SDSS and XMM-WISE-SDSS samples. Based on the large spectroscopic surveys of SDSS and the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST), we cross-match the XMM-WISE-SDSS sample with sources of known spectral class and obtain labelled samples of stars, galaxies and quasars. The distributions of stars, galaxies and quasars, as well as of the individual stellar spectral classes, are presented in two-dimensional parameter spaces. Various machine learning methods are applied to the samples from the different band combinations, and the best-performing classifiers are retained. For the X-ray-only sample, the rotation forest classifier performs best; for the X-ray plus infrared sample, the random forest algorithm outperforms all other methods; for the samples combining X-ray with optical and/or infrared data, the LogitBoost classifier is superior. All X-ray sources in the 4XMM-DR9 catalogue are then classified, according to their available input pattern, with the corresponding best model, and each source is assigned a class membership and a membership probability. The resulting classifications will be of great value for further, more detailed studies of X-ray sources.
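A minimal sketch of the per-sample classification step with membership probabilities, assuming a feature table has already been built from the cross-matched samples. The feature layout is hypothetical, and a scikit-learn random forest (the best model for the X-ray plus infrared sample) stands in for all three classifiers, since rotation forest and LogitBoost are not available in scikit-learn.

```python
# Sketch: train one classifier per input pattern and assign membership
# probabilities. Features and labels below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def train_band_classifier(X, y, n_trees=500, seed=0):
    """Train a classifier for one input pattern (e.g. X-ray + WISE features)."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)
    clf = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
    clf.fit(X_train, y_train)
    print("hold-out accuracy:", clf.score(X_test, y_test))
    return clf

# Toy example: 3 classes (star / galaxy / quasar), 6 hypothetical features.
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 6))
y = rng.integers(0, 3, size=3000)
clf = train_band_classifier(X, y)

# Membership and membership probability for each unclassified source.
proba = clf.predict_proba(X[:5])             # class probabilities per source
membership = clf.classes_[proba.argmax(1)]   # assigned class
```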
Deterministic nanoassembly may enable unique integrated on-chip quantum photonic devices. Such integration requires careful large-scale selection of nanoscale building blocks, such as solid-state single-photon emitters, by means of optical characterization. Second-order autocorrelation is a cornerstone measurement that is particularly time-consuming to perform on a large scale. We have implemented supervised machine-learning classification of quantum emitters as single or not single based on their sparse autocorrelation data. Our method yields a classification accuracy of over 90% within an integration time of less than a second, a roughly hundredfold speedup over the conventional Levenberg-Marquardt fitting approach. We anticipate that machine-learning-based classification will provide a unique route to rapid and scalable assembly of quantum nanophotonic devices and can be directly extended to other quantum optical measurements.
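An illustrative sketch of the idea, not the authors' pipeline: simulated sparse g2(tau) histograms are labelled single/not single from their underlying antibunching depth (the binning, count level, coherence time and the g2(0) < 0.5 threshold are all assumptions) and fed to an off-the-shelf classifier in place of Levenberg-Marquardt fitting.

```python
# Sketch: supervised single/not-single classification of sparse g2 histograms.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
tau = np.linspace(-50, 50, 61)  # ns, hypothetical binning

def g2(tau, g0, t0=10.0):
    """Simple antibunching dip with depth g0 and recovery time t0 (assumed model)."""
    return 1.0 - (1.0 - g0) * np.exp(-np.abs(tau) / t0)

# Simulate photon-sparse histograms: Poisson counts around a low mean.
g0_true = rng.uniform(0.0, 1.0, size=4000)
mean_counts = 5.0  # low counts -> "sparse" autocorrelation data
X = rng.poisson(mean_counts * g2(tau[None, :], g0_true[:, None]))
y = (g0_true < 0.5).astype(int)  # 1 = single-photon emitter

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X[:3000], y[:3000])
print("accuracy on held-out simulated emitters:", clf.score(X[3000:], y[3000:]))
```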
Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all discovered supernovae will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods range from model-dependent techniques, namely SALT2 fits, through more model-independent techniques that fit parametric models to the light curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the receiver operating characteristic as a metric, we find that the SALT2 fits and the wavelet approach, each combined with the BDT algorithm, achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature set with a BDT algorithm, accurate classification is possible purely from light-curve data, without the need for any redshift information.
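A minimal sketch of the second pipeline stage, assuming a pre-extracted feature matrix (e.g. SALT2 or wavelet coefficients per light curve) and binary Ia/non-Ia labels. The synthetic data are purely illustrative, and scikit-learn's gradient-boosted trees stand in for the paper's BDTs; the AUC metric is the one quoted in the abstract.

```python
# Sketch: boosted decision trees on light-curve features, scored with ROC AUC.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 20))  # hypothetical light-curve features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=400, max_depth=3).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, bdt.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.3f}")  # 1.0 would be perfect classification
```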
Ongoing or upcoming surveys such as Gaia, ZTF, or LSST will observe light curves of billions of astronomical sources. This presents new challenges for identifying interesting and important types of variability. However, collecting a sufficient amount of labelled data for training is difficult, especially in the early stages of a new survey. Here we develop a single-band light-curve classifier based on deep neural networks, and use transfer learning to address the paucity of training data by conveying knowledge from one dataset to another. First we train a neural network on 16 variability features extracted from the light curves of OGLE and EROS-2 variables. We then optimize this model using a small set (e.g. 5%) of periodic variable light curves from the ASAS dataset in order to transfer knowledge inferred from OGLE/EROS-2 to a new ASAS classifier. With this we achieve good classification results on ASAS, thereby showing that knowledge can be successfully transferred between datasets. We demonstrate similar transfer learning using Hipparcos and ASAS-SN data. We therefore find that it is not necessary to train a neural network from scratch for every new survey; rather, transfer learning can be used even when only a small set of labelled data is available in the new survey.
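A minimal transfer-learning sketch under the stated setup of 16 variability features per light curve. The network width, depth, number of classes and the freeze-then-fine-tune recipe shown here are assumptions for illustration, not the authors' exact architecture; the data are synthetic placeholders for the source (OGLE/EROS-2-like) and target (ASAS-like) surveys.

```python
# Sketch: train on a large source survey, then freeze the lower layers and
# fine-tune only the output layer on a small labelled target set.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X_src, y_src = rng.normal(size=(20000, 16)), rng.integers(0, 10, 20000)  # source survey
X_tgt, y_tgt = rng.normal(size=(1000, 16)), rng.integers(0, 10, 1000)    # small target set

base = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(64, activation="relu"),
])
model = tf.keras.Sequential([base, tf.keras.layers.Dense(10, activation="softmax")])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X_src, y_src, epochs=5, batch_size=256, verbose=0)   # source-survey training

base.trainable = False                                          # transfer: freeze features
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X_tgt, y_tgt, epochs=20, batch_size=64, verbose=0)    # fine-tune on target survey
```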
We show that multiple machine learning algorithms can match human performance in classifying transient imaging data from the Sloan Digital Sky Survey (SDSS) supernova survey into real objects and artefacts. This is a first step in any transient science pipeline and is currently still done by humans, but future surveys such as the Large Synoptic Survey Telescope (LSST) will necessitate fully machine-enabled solutions. Using features trained from eigenimage analysis (principal component analysis, PCA) of single-epoch g-, r- and i-band difference images, we can reach a completeness (recall) of 96 per cent while incorrectly classifying at most 18 per cent of artefacts as real objects, corresponding to a precision (purity) of 84 per cent. In general, random forests performed best, followed by the k-nearest-neighbour and SkyNet artificial neural network algorithms, compared to other methods such as naive Bayes and kernel support vector machines. Our results show that PCA-based machine learning can match human success levels and can naturally be extended by including multiple epochs of data, transient colours and host-galaxy information, which should allow for significant further improvements, especially at low signal-to-noise.
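A minimal sketch of the eigenimage-plus-classifier idea, assuming each difference-image stamp has been flattened to a pixel vector. The stamp size, number of principal components and synthetic labels are illustrative choices, not the survey's actual data; a random forest is used since it performed best in the paper.

```python
# Sketch: PCA (eigenimage) coefficients from difference-image stamps
# fed to a random forest for real/artefact classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
stamps = rng.normal(size=(4000, 21 * 21))   # hypothetical 21x21 cutouts, flattened
labels = rng.integers(0, 2, size=4000)      # 1 = real transient, 0 = artefact

pca = PCA(n_components=20).fit(stamps)      # eigenimage basis
features = pca.transform(stamps)            # projection coefficients per stamp

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("hold-out accuracy:", rf.score(X_te, y_te))
```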
The advancement of technology has resulted in a rapid increase in supernova (SN) discoveries. The Subaru/Hyper Suprime-Cam (HSC) transient survey, conducted from fall 2016 through spring 2017, yielded 1824 SN candidates. This gave rise to the need for fast type classification for spectroscopic follow-up and prompted us to develop a machine learning algorithm using a deep neural network (DNN) with highway layers. The classifier is trained on the actually observed cadence and filter combinations, so that the observed data array can be fed to it directly without any further interpretation. We tested our model with a dataset from the LSST classification challenge (Deep Drilling Field). Our classifier scores an area under the curve (AUC) of 0.996 for binary classification (SN Ia or non-SN Ia) and 95.3% accuracy for three-class classification (SN Ia, SN Ibc, or SN II). Applying our binary classification to HSC transient data yields an AUC score of 0.925. With two weeks of HSC data since first detection, the classifier achieves 78.1% accuracy for binary classification, and the accuracy increases to 84.2% with the full dataset. This paper discusses the potential use of machine learning for SN type classification.
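A minimal sketch of a highway layer of the kind mentioned above (Srivastava et al. 2015), where a learned transform gate mixes a nonlinear transform of the input with the input itself. The input length, layer widths, depth and the three output classes are assumptions for illustration, not the authors' exact architecture.

```python
# Sketch: a highway layer, y = T(x) * H(x) + (1 - T(x)) * x, stacked in a DNN
# that takes a fixed-length observed data array and outputs SN-type probabilities.
import numpy as np
import tensorflow as tf

class Highway(tf.keras.layers.Layer):
    """Highway layer with transform gate T and nonlinear transform H."""
    def build(self, input_shape):
        dim = int(input_shape[-1])
        self.H = tf.keras.layers.Dense(dim, activation="relu")
        self.T = tf.keras.layers.Dense(
            dim, activation="sigmoid",
            bias_initializer=tf.keras.initializers.Constant(-1.0))  # favour carry at start
    def call(self, x):
        t = self.T(x)
        return t * self.H(x) + (1.0 - t) * x

# Hypothetical input: 120 values (fluxes/errors over the observed cadence and filters).
model = tf.keras.Sequential(
    [tf.keras.layers.Dense(64, activation="relu", input_shape=(120,))]
    + [Highway() for _ in range(4)]
    + [tf.keras.layers.Dense(3, activation="softmax")]   # SN Ia / Ibc / II
)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

dummy = np.zeros((2, 120), dtype="float32")
print(model(dummy).shape)  # (2, 3) class probabilities
```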