With recent developments in imaging and computer technology, the amount of available astronomical data has increased dramatically. Although most of these data sets are not dedicated to the study of variable stars, much of the data can, with the proper software tools, be recycled for the discovery of new variable stars. The Fits Viewer and Data Retrieval System (FVDRS) is a new software package that takes advantage of modern computing advances to search astronomical data for new variable stars. Using FVDRS, more than 200 new variable stars have been found in a data set taken with the Calvin College Rehoboth Robotic Telescope. One particularly interesting example is a subdwarf B star with a 95-minute orbital period, the shortest currently known of the HW Vir type.
We present a machine learning package for the classification of periodic variable stars. Our package is intended to be general: it can classify any single-band optical light curve comprising at least a few tens of observations covering durations from weeks to years, with arbitrary time sampling. We use light curves of periodic variable stars taken from OGLE and EROS-2 to train the model. To make our classifier relatively survey-independent, it is trained on 16 features extracted from the light curves (e.g. period, skewness, Fourier amplitude ratio). The model classifies light curves into one of seven superclasses (Delta Scuti, RR Lyrae, Cepheid, Type II Cepheid, eclipsing binary, long-period variable, and non-variable), as well as into subclasses of these, such as the ab, c, d, and e types of RR Lyrae stars. When trained to give only superclasses, our model achieves 0.98 for both recall and precision as measured on an independent validation dataset (on a scale of 0 to 1). When trained to give subclasses, it achieves 0.81 for both recall and precision. In order to assess the classification performance of the subclass model, we applied it to the MACHO, LINEAR, and ASAS periodic variables, which gave recall/precision of 0.92/0.98, 0.89/0.96, and 0.84/0.88, respectively. We also applied the subclass model to Hipparcos periodic variable stars of many other variability types that do not exist in our training set, in order to examine how much those types degrade the classification performance of our target classes. In addition, we investigate how the performance varies with the number of data points and the duration of observations. We find that recall and precision do not vary significantly if the number of data points is larger than 80 and the duration is more than a few weeks. The classifier software of the subclass model is available from the GitHub repository (https://goo.gl/xmFO6Q).
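The survey-independent features named above (period, skewness, Fourier amplitude ratio) can be sketched with plain numpy. The following is an illustrative toy, not the authors' 16-feature pipeline: the period comes from a brute-force least-squares sine-fitting periodogram, and the amplitude ratio R21 from a two-harmonic fit at the best period; the function name and frequency grid are hypothetical.

```python
import numpy as np

def extract_features(t, mag, freqs):
    """Toy extraction of three light-curve features: period, skewness,
    and Fourier amplitude ratio R21. Illustrative sketch only."""
    # Brute-force periodogram: power = squared amplitude of the
    # best-fit sinusoid at each trial frequency.
    power = []
    for f in freqs:
        phase = 2 * np.pi * f * t
        A = np.column_stack([np.sin(phase), np.cos(phase), np.ones_like(t)])
        c, *_ = np.linalg.lstsq(A, mag, rcond=None)
        power.append(c[0] ** 2 + c[1] ** 2)
    best_f = freqs[int(np.argmax(power))]

    # Skewness of the magnitude distribution (separates e.g. sawtooth
    # RRab light curves from symmetric eclipsing-binary curves).
    m = mag - mag.mean()
    skew = (m ** 3).mean() / (m ** 2).mean() ** 1.5

    # R21 = A2/A1 from a two-harmonic fit at the best frequency.
    phase = 2 * np.pi * best_f * t
    A = np.column_stack([np.sin(phase), np.cos(phase),
                         np.sin(2 * phase), np.cos(2 * phase),
                         np.ones_like(t)])
    c, *_ = np.linalg.lstsq(A, mag, rcond=None)
    a1 = np.hypot(c[0], c[1])
    a2 = np.hypot(c[2], c[3])
    return {"period": 1.0 / best_f, "skewness": skew, "R21": a2 / a1}
```

Because the fit is linear at each trial frequency, this handles the arbitrary, irregular time sampling emphasized in the abstract without interpolation.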
We present a novel automated methodology to detect and classify periodic variable stars in a large database of photometric time series. The methods are based on multivariate Bayesian statistics and use a multi-stage approach. We applied our method to the ground-based data of the TrES Lyr1 field, which is also observed by the Kepler satellite and covers ~26,000 stars. We found many eclipsing binaries as well as classical non-radial pulsators, such as slowly pulsating B stars, Gamma Doradus, Beta Cephei, and Delta Scuti stars. A few classical radial pulsators were also found.
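The core of a multivariate Bayesian assignment step can be sketched as follows: each class is modeled by a multivariate Gaussian likelihood over a feature vector, and a star is assigned to the class with maximum posterior. This is a hypothetical single-stage sketch, not the authors' implementation; a multi-stage scheme as in the abstract would chain such steps (e.g. first variable vs. constant, then pulsator subtypes).

```python
import numpy as np

def gauss_logpdf(x, mu, cov):
    """Log-density of a multivariate normal: the per-class likelihood."""
    d = x - mu
    _, logdet = np.linalg.slogdet(cov)
    maha = d @ np.linalg.solve(cov, d)
    return -0.5 * (len(x) * np.log(2 * np.pi) + logdet + maha)

def classify(x, classes):
    """Assign feature vector x to the class with maximum posterior.
    `classes` maps name -> (prior, mean, covariance)."""
    logpost = {name: np.log(p) + gauss_logpdf(x, mu, cov)
               for name, (p, mu, cov) in classes.items()}
    return max(logpost, key=logpost.get)
```

For example, with two classes described by (prior, mean, covariance) tuples in a two-dimensional feature space (say, log-period and amplitude, both hypothetical here), a point near a class mean is assigned to that class.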
We describe a methodology to classify periodic variable stars identified using photometric time-series measurements constructed from the Wide-field Infrared Survey Explorer (WISE) full-mission single-exposure Source Databases. This will assist in the future construction of a WISE Variable Source Database that assigns variables to specific science classes, as constrained by the WISE observing cadence, with statistically meaningful classification probabilities. We have analyzed the WISE light curves of 8273 variable stars identified in previous optical variability surveys (MACHO, GCVS, and ASAS) and show that Fourier decomposition techniques can be extended into the mid-IR to assist with their classification. Combined with other periodic light-curve features, this sample is then used to train a machine-learned classifier based on the random forest (RF) method. Consistent with previous classification studies of variable stars in general, the RF machine-learned classifier is superior to other methods in terms of accuracy, robustness against outliers, and relative immunity to features that carry little or redundant class information. For the three most common classes identified by WISE (Algols, RR Lyrae, and W Ursae Majoris-type variables), we obtain classification efficiencies of 80.7%, 82.7%, and 84.5%, respectively, using cross-validation analyses, with 95% confidence intervals of approximately +/-2%. These accuracies are achieved at purity (or reliability) levels of 88.5%, 96.2%, and 87.8%, respectively, similar to those achieved in previous automated classification studies of periodic variable stars.
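Fourier decomposition of a phase-folded light curve, the technique this abstract extends into the mid-IR, is a linear least-squares fit of a truncated Fourier series, from which the classic shape features (amplitude ratios R21, R31 and phase differences phi21, phi31) follow. The sketch below shows the general technique under that definition; it is not the WISE pipeline itself, and the function name is an assumption.

```python
import numpy as np

def fourier_decompose(t, mag, period, n_harm=4):
    """Fit m(phi) = A0 + sum_k [a_k sin(2*pi*k*phi) + b_k cos(2*pi*k*phi)]
    to a light curve folded at `period`, and return harmonic amplitudes,
    phases, and the standard shape features. Illustrative sketch."""
    phi = (t / period) % 1.0
    cols = [np.ones_like(phi)]
    for k in range(1, n_harm + 1):
        cols += [np.sin(2 * np.pi * k * phi), np.cos(2 * np.pi * k * phi)]
    X = np.column_stack(cols)
    c, *_ = np.linalg.lstsq(X, mag, rcond=None)
    amp = np.hypot(c[1::2], c[2::2])    # A_k = sqrt(a_k^2 + b_k^2)
    pha = np.arctan2(c[2::2], c[1::2])  # phi_k
    feats = {
        "R21": amp[1] / amp[0],
        "R31": amp[2] / amp[0],
        "phi21": (pha[1] - 2 * pha[0]) % (2 * np.pi),
        "phi31": (pha[2] - 3 * pha[0]) % (2 * np.pi),
    }
    return amp, pha, feats
```

The phase differences phi21 and phi31 are constructed to be invariant to the arbitrary zero point of the phase fold, which is why they are useful as classification features across surveys and bands.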
Photometric variability detection is often considered as a hypothesis testing problem: an object is variable if the null hypothesis that its brightness is constant can be ruled out given the measurements and their uncertainties. Uncorrected systematic errors limit the practical applicability of this approach to high-amplitude variability and well-behaved data sets. Searching for a new variability detection technique that would be applicable to a wide range of variability types while being robust to outliers and underestimated measurement uncertainties, we propose to consider variability detection as a classification problem that can be approached with machine learning. We compare several classification algorithms: Logistic Regression (LR), Support Vector Machines (SVM), k-Nearest Neighbors (kNN), Neural Nets (NN), Random Forests (RF), and the Stochastic Gradient Boosting classifier (SGB), applied to 18 features (variability indices) quantifying scatter and/or correlation between points in a light curve. We use a subset of OGLE-II Large Magellanic Cloud (LMC) photometry (30265 light curves) that was searched for variability using traditional methods (168 known variable objects identified) as the training set, and then apply the NN to a new test set of 31798 OGLE-II LMC light curves. Among the 205 candidates selected in the test set, 178 are real variables; 13 of these are low-amplitude variables that are new discoveries. We find that the considered machine learning classifiers are more efficient (they find more variables and fewer false candidates) compared to traditional techniques that consider individual variability indices or their linear combination. The NN, SGB, SVM, and RF show a higher efficiency compared to LR and kNN.
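Two representative variability indices of the kind used as features here, one quantifying scatter and one quantifying point-to-point correlation, can be computed directly; this is an illustrative sketch of two such indices, not the paper's full 18-feature set. The scatter index is the reduced chi-square against the constant-brightness null hypothesis described in the opening sentence; the correlation index is the inverse von Neumann ratio, which grows when consecutive points vary smoothly rather than as independent noise.

```python
import numpy as np

def variability_indices(mag, err):
    """Two example variability indices for a time-sorted light curve:
    reduced chi^2 (scatter) and inverse von Neumann ratio (correlation).
    Illustrative sketch only."""
    w = 1.0 / err ** 2
    mean = np.sum(w * mag) / np.sum(w)  # weighted mean magnitude
    # Reduced chi^2 against the constant-brightness null hypothesis:
    # ~1 for a non-variable star with honest error bars, >>1 otherwise.
    chi2_red = np.sum(w * (mag - mean) ** 2) / (len(mag) - 1)
    # Inverse von Neumann ratio: variance over mean squared successive
    # difference; ~0.5 for white noise, large for smooth variability.
    delta2 = np.mean(np.diff(mag) ** 2)
    inv_eta = np.var(mag) / delta2 if delta2 > 0 else np.inf
    return chi2_red, inv_eta
```

The correlation-based index is exactly the kind of feature that stays informative when measurement uncertainties are underestimated, since it does not use the error bars at all; this is the motivation the abstract gives for combining many such indices in a classifier rather than thresholding any single one.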
Common variable star classifiers are built only with the goal of producing the correct class labels, leaving much of the multi-task capability of deep neural networks unexplored. We present a periodic light curve classifier that combines a recurrent neural network autoencoder for unsupervised feature extraction and a dual-purpose estimation network for supervised classification and novelty detection. The estimation network optimizes a Gaussian mixture model in the reduced-dimension feature space, where each Gaussian component corresponds to a variable class. An estimation network with a basic structure of a single hidden layer attains a cross-validation classification accuracy of ~99%, on par with the conventional workhorses, random forest classifiers. With the addition of photometric features, the network is capable of detecting previously unseen types of variability with precision 0.90, recall 0.96, and an F1 score of 0.93. The simultaneous training of the autoencoder and estimation network is found to be mutually beneficial, resulting in faster autoencoder convergence, and superior classification and novelty detection performance. The estimation network also delivers adequate results even when optimized with pre-trained autoencoder features, suggesting that it can readily extend existing classifiers to provide added novelty detection capabilities.
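The estimation-network idea, one Gaussian component per known variable class in a reduced-dimension feature space, with low likelihood under every component signaling novelty, can be illustrated with a small numpy stand-in. This sketch fits the per-class Gaussians by moment matching; the paper instead learns the feature space with a recurrent autoencoder trained jointly with the estimation network, and the class name, threshold, and regularization below are assumptions.

```python
import numpy as np

class GMMNoveltyClassifier:
    """Toy stand-in for a Gaussian-mixture estimation step: classify by
    the highest-likelihood class component, and flag samples that score
    poorly under all components as novel. Illustrative sketch only."""

    def fit(self, z, labels):
        """Fit one Gaussian per class from labeled latent vectors z."""
        dim = z.shape[1]
        self.comps = {}
        for c in set(labels):
            zc = z[labels == c]
            # Small ridge on the covariance for numerical stability.
            self.comps[c] = (zc.mean(axis=0),
                             np.cov(zc.T) + 1e-6 * np.eye(dim))
        return self

    def _loglik(self, x, mu, cov):
        d = x - mu
        _, logdet = np.linalg.slogdet(cov)
        maha = d @ np.linalg.solve(cov, d)
        return -0.5 * (len(x) * np.log(2 * np.pi) + logdet + maha)

    def predict(self, x, threshold=-10.0):
        """Return (best class label, novelty flag)."""
        scores = {c: self._loglik(x, mu, cov)
                  for c, (mu, cov) in self.comps.items()}
        best = max(scores, key=scores.get)
        return best, scores[best] < threshold
```

A sample near a trained class cluster is labeled and passes the novelty check; a sample far from every cluster keeps whichever label scores least badly but is flagged as a previously unseen type, mirroring the dual classification/novelty-detection role of the estimation network described above.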