Stars exhibit a bewildering variety of variable behaviors, ranging from explosive magnetic flares to stochastically changing accretion to periodic pulsations or rotations. The principal LSST surveys will have cadences too sparse and irregular to capture most of these phenomena. A novel idea is proposed here to observe a single Galactic field, rich in unobscured stars, in a continuous sequence of $\sim 15$ second exposures for one long winter night in a single photometric band. The result will be a unique dataset of $\sim 1$ million regularly spaced stellar lightcurves. The lightcurves will give a particularly comprehensive collection of dM star variability. A powerful array of statistical procedures drawn from the long-standing fields of time series analysis, signal processing and econometrics can be applied to the ensemble of lightcurves. Dozens of `features' describing the variability can be extracted and subjected to machine learning classification, giving a uniquely authoritative, objective classification of rapidly variable stars. The most effective features can then inform the wider LSST community on the best approaches to variable star identification and classification from the sparse, irregular cadences that dominate the LSST project.
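As an illustration of what such regularly spaced lightcurves would permit, the sketch below computes a handful of standard time-series features from an evenly sampled flux series. The feature list, the 15 s cadence and the injected 45-minute pulsation are illustrative assumptions, not the proposal's actual feature set.

```python
# A hedged sketch of feature extraction from one night of regularly spaced photometry.
import numpy as np

def lightcurve_features(flux, dt=15.0):
    """A few simple time-series features for an evenly sampled lightcurve."""
    flux = np.asarray(flux, dtype=float)
    d = np.diff(flux)
    features = {
        "mean": flux.mean(),
        "std": flux.std(ddof=1),
        "amplitude": flux.max() - flux.min(),
        # lag-1 autocorrelation: sensitive to correlated (non-white) variability
        "acf_lag1": np.corrcoef(flux[:-1], flux[1:])[0, 1],
        # von Neumann ratio: ~2 for white noise, smaller for smooth trends
        "von_neumann": (d**2).mean() / flux.var(ddof=1),
    }
    # dominant periodogram frequency (Hz); an FFT suffices for evenly spaced data
    power = np.abs(np.fft.rfft(flux - flux.mean()))**2
    freqs = np.fft.rfftfreq(flux.size, d=dt)
    features["peak_freq_hz"] = freqs[1:][np.argmax(power[1:])]
    return features

# Example: one ~10 h night of 15 s exposures with an injected 45-minute pulsation
n = int(10 * 3600 / 15)
t = np.arange(n) * 15.0
flux = 1.0 + 0.02 * np.sin(2 * np.pi * t / 2700.0) + np.random.normal(0, 0.01, n)
print(lightcurve_features(flux))
```

A feature dictionary of this kind, computed for each of the $\sim 1$ million stars, is the natural input to the machine learning classification step described above.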
With recent developments in imaging and computer technology, the amount of available astronomical data has increased dramatically. Although most of these data sets are not dedicated to the study of variable stars, much of the data can, with the application of proper software tools, be recycled for the discovery of new variable stars. The Fits Viewer and Data Retrieval System (FVDRS) is a new software package that takes advantage of modern computer advances to search astronomical data for new variable stars. More than 200 new variable stars have been found with FVDRS in a data set taken with the Calvin College Rehoboth Robotic telescope. One particularly interesting example is a very fast subdwarf B binary with a 95 minute orbital period, the shortest currently known of the HW Vir type.
Two upcoming large-scale surveys, the ESA Gaia and LSST projects, will usher in a new era in astronomy. The number of binary systems that will be observed and detected by these projects is enormous: estimates range from millions for Gaia to several tens of millions for LSST. We review some of the tools that should be developed, and what can be gained from these missions on the subject of binaries and exoplanets, through their astrometry, photometry, radial velocities and alert systems.
Photometric variability detection is often considered as a hypothesis testing problem: an object is variable if the null hypothesis that its brightness is constant can be ruled out given the measurements and their uncertainties. Uncorrected systematic errors limit the practical applicability of this approach to high-amplitude variability and well-behaved data sets. Searching for a new variability detection technique that would be applicable to a wide range of variability types while being robust to outliers and underestimated measurement uncertainties, we propose to treat variability detection as a classification problem that can be approached with machine learning. We compare several classification algorithms: Logistic Regression (LR), Support Vector Machines (SVM), k-Nearest Neighbors (kNN), Neural Nets (NN), Random Forests (RF) and the Stochastic Gradient Boosting classifier (SGB), applied to 18 features (variability indices) quantifying scatter and/or correlation between points in a light curve. We use a subset of OGLE-II Large Magellanic Cloud (LMC) photometry (30265 light curves) that was searched for variability using traditional methods (168 known variable objects identified) as the training set, and then apply the NN to a new test set of 31798 OGLE-II LMC light curves. Among 205 candidates selected in the test set, 178 are real variables, and 13 low-amplitude variables are new discoveries. We find that the considered machine learning classifiers are more efficient (they find more variables and fewer false candidates) than traditional techniques that consider individual variability indices or their linear combination. The NN, SGB, SVM and RF show a higher efficiency than LR and kNN.
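A minimal sketch of this kind of classifier comparison, using scikit-learn; the synthetic feature matrix merely stands in for the 18 OGLE-II variability indices, and the hyperparameters are illustrative rather than those used in the paper.

```python
# Sketch of a classifier comparison on variability-index features (not the authors' code).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

# Placeholder for the real training set: 18 variability indices per light curve,
# label 1 = variable, 0 = constant; the strong class imbalance mimics the survey.
X, y = make_classification(n_samples=3000, n_features=18, n_informative=10,
                           weights=[0.98, 0.02], random_state=42)

classifiers = {
    "LR":  LogisticRegression(max_iter=1000),
    "SVM": SVC(class_weight="balanced"),
    "kNN": KNeighborsClassifier(),
    "NN":  MLPClassifier(max_iter=2000),
    "RF":  RandomForestClassifier(n_estimators=300, class_weight="balanced"),
    "SGB": GradientBoostingClassifier(),
}

for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)
    # F1 is a rough proxy for "more variables found, fewer false candidates"
    score = cross_val_score(pipe, X, y, cv=5, scoring="f1").mean()
    print(f"{name}: mean 5-fold F1 = {score:.3f}")
```

In practice the candidate lists produced by each classifier would still be inspected by eye, as the abstract's confirmation of 178 real variables among 205 candidates implies.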
Common variable star classifiers are built only with the goal of producing the correct class labels, leaving much of the multi-task capability of deep neural networks unexplored. We present a periodic light curve classifier that combines a recurrent neural network autoencoder for unsupervised feature extraction and a dual-purpose estimation network for supervised classification and novelty detection. The estimation network optimizes a Gaussian mixture model in the reduced-dimension feature space, where each Gaussian component corresponds to a variable class. An estimation network with a basic structure of a single hidden layer attains a cross-validation classification accuracy of ~99%, on par with the conventional workhorses, random forest classifiers. With the addition of photometric features, the network is capable of detecting previously unseen types of variability with precision 0.90, recall 0.96, and an F1 score of 0.93. The simultaneous training of the autoencoder and estimation network is found to be mutually beneficial, resulting in faster autoencoder convergence, and superior classification and novelty detection performance. The estimation network also delivers adequate results even when optimized with pre-trained autoencoder features, suggesting that it can readily extend existing classifiers to provide added novelty detection capabilities.
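The sketch below is one possible reading of this architecture in PyTorch, not the authors' implementation: a GRU autoencoder supplies low-dimensional features and a single-hidden-layer estimation network outputs class memberships, with both trained jointly. The layer sizes, losses and dummy data are assumptions.

```python
# Hedged sketch: joint training of a recurrent autoencoder and an estimation network.
import torch
import torch.nn as nn

class RNNAutoencoder(nn.Module):
    def __init__(self, latent_dim=8, hidden=32):
        super().__init__()
        self.encoder = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, latent_dim)
        self.from_latent = nn.Linear(latent_dim, hidden)
        self.decoder = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.readout = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, time, 1)
        _, h = self.encoder(x)                   # h: (1, batch, hidden)
        z = self.to_latent(h.squeeze(0))         # z: (batch, latent_dim)
        h0 = self.from_latent(z).unsqueeze(0).contiguous()
        out, _ = self.decoder(torch.zeros_like(x), h0)
        return z, self.readout(out)              # latent features, reconstruction

class EstimationNetwork(nn.Module):
    """Single-hidden-layer network mapping latent features to class memberships."""
    def __init__(self, in_dim=8, hidden=16, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_classes))

    def forward(self, z):
        return torch.softmax(self.net(z), dim=-1)    # memberships: (batch, n_classes)

# Joint training on dummy data; real (phased) light curves would replace x.
x = torch.randn(64, 200, 1)                      # 64 light curves, 200 points each
labels = torch.randint(0, 5, (64,))              # 5 hypothetical variable classes
ae, est = RNNAutoencoder(), EstimationNetwork()
opt = torch.optim.Adam(list(ae.parameters()) + list(est.parameters()), lr=1e-3)
for step in range(5):
    z, recon = ae(x)
    gamma = est(z)
    loss = nn.functional.mse_loss(recon, x) \
         + nn.functional.nll_loss(torch.log(gamma + 1e-9), labels)
    opt.zero_grad(); loss.backward(); opt.step()
```

In the full scheme described in the abstract, per-class Gaussians fitted to the latent features would supply a likelihood that can be thresholded to flag previously unseen variability types.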
We present a machine learning package for the classification of periodic variable stars. Our package is intended to be general: it can classify any single band optical light curve comprising at least a few tens of observations covering durations from weeks to years, with arbitrary time sampling. We use light curves of periodic variable stars taken from OGLE and EROS-2 to train the model. To make our classifier relatively survey-independent, it is trained on 16 features extracted from the light curves (e.g. period, skewness, Fourier amplitude ratio). The model classifies light curves into one of seven superclasses - Delta Scuti, RR Lyrae, Cepheid, Type II Cepheid, eclipsing binary, long-period variable, non-variable - as well as subclasses of these, such as ab, c, d, and e types for RR Lyraes. When trained to give only superclasses, our model achieves 0.98 for both recall and precision as measured on an independent validation dataset (on a scale of 0 to 1). When trained to give subclasses, it achieves 0.81 for both recall and precision. In order to assess classification performance of the subclass model, we applied it to the MACHO, LINEAR, and ASAS periodic variables, which gave recall/precision of 0.92/0.98, 0.89/0.96, and 0.84/0.88, respectively. We also applied the subclass model to Hipparcos periodic variable stars of many other variability types that do not exist in our training set, in order to examine how much those types degrade the classification performance of our target classes. In addition, we investigate how the performance varies with the number of data points and duration of observations. We find that recall and precision do not vary significantly if the number of data points is larger than 80 and the duration is more than a few weeks. The classifier software of the subclass model is available from the GitHub repository (https://goo.gl/xmFO6Q).
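For concreteness, here is a hedged sketch of extracting three of the named features (period, skewness and a Fourier amplitude ratio) from an irregularly sampled light curve using astropy's Lomb-Scargle periodogram; the exact 16-feature recipe and the superclass model itself are not reproduced here, and the synthetic light curve is only a stand-in for a survey object.

```python
# Illustrative extraction of period, skewness and Fourier amplitude ratio R21.
import numpy as np
from scipy.stats import skew
from astropy.timeseries import LombScargle

# Synthetic, irregularly sampled light curve standing in for a survey object.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 200, 300))                    # days
true_period = 0.52
mag = 15.0 + 0.3 * np.sin(2 * np.pi * t / true_period) \
           + 0.1 * np.sin(4 * np.pi * t / true_period) + rng.normal(0, 0.02, t.size)

# Period from the highest peak of a Lomb-Scargle periodogram.
freq, power = LombScargle(t, mag).autopower(minimum_frequency=1/200,
                                            maximum_frequency=10)
period = 1 / freq[np.argmax(power)]

# Skewness of the magnitude distribution.
mag_skew = skew(mag)

# Fourier amplitude ratio R21 = A2/A1 from a two-harmonic least-squares fit.
phase = 2 * np.pi * (t % period) / period
design = np.column_stack([np.ones_like(phase),
                          np.sin(phase), np.cos(phase),
                          np.sin(2 * phase), np.cos(2 * phase)])
coef, *_ = np.linalg.lstsq(design, mag, rcond=None)
A1 = np.hypot(coef[1], coef[2])
A2 = np.hypot(coef[3], coef[4])
print(f"period = {period:.4f} d, skewness = {mag_skew:.2f}, R21 = {A2/A1:.2f}")
```

Features of this kind, computed for every light curve regardless of its survey of origin, are what make a classifier of this type relatively survey-independent.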