Theoretical stellar spectra rely on model stellar atmospheres computed based on our understanding of the physical laws at play in stellar interiors. These models, coupled with atomic and molecular line databases, are used to generate theoretical stellar spectral libraries (SSLs) comprising stellar spectra over a regular grid of atmospheric parameters (temperature, surface gravity, abundances) at any desired resolution. Another class of SSLs is referred to as empirical spectral libraries; these contain observed spectra at limited resolution. SSLs play an essential role in deriving the properties of stars and stellar populations. Both theoretical and empirical libraries suffer from limited coverage of the parameter space. This limitation is overcome to some extent by generating spectra for specific sets of atmospheric parameters by interpolating within the grid of available parameters. In this work, we present a method for spectral interpolation in the optical region using machine learning algorithms that are generic, easily adaptable to any SSL without much change in the model parameters, and computationally inexpensive. We use two machine learning techniques, Random Forest (RF) and Artificial Neural Networks (ANN), and train the models on the MILES library. We apply the trained models to spectra from the CFLIB for testing and show that the performance of the two models is comparable. We show that both models achieve better accuracy than the existing methods of polynomial-based interpolation and Gaussian radial basis function (RBF) interpolation.
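As a concrete illustration of this kind of interpolator, the sketch below trains a multi-output random forest that maps (Teff, log g, [Fe/H]) to a full flux vector on a common wavelength grid. The file names, array shapes, and hyperparameters are placeholders, not the configuration used in the paper.

```python
# Illustrative sketch only: a random forest that interpolates spectra in
# parameter space, in the spirit of the method described above. File names
# and hyperparameters are placeholders, not the paper's actual setup.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# params: (n_stars, 3) array of [Teff, log g, [Fe/H]]
# fluxes: (n_stars, n_pixels) spectra on a common wavelength grid
params = np.load("library_params.npy")   # placeholder file
fluxes = np.load("library_fluxes.npy")   # placeholder file

X_train, X_test, y_train, y_test = train_test_split(
    params, fluxes, test_size=0.2, random_state=0)

# Multi-output regression: each prediction is the whole flux vector.
rf = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0)
rf.fit(X_train, y_train)

# Interpolate a spectrum at an arbitrary point in parameter space.
spectrum = rf.predict([[5777.0, 4.44, 0.0]])[0]

# Simple accuracy check on held-out library stars.
resid = rf.predict(X_test) - y_test
print("mean absolute flux residual:", np.abs(resid).mean())
```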
The perplexing mystery of what maintains the solar coronal temperature at about a million K, while the visible disc of the Sun is only at 5800 K, has been a long-standing problem in solar physics. A recent study by Mondal (2020) has provided the first evidence for the presence of numerous ubiquitous impulsive emissions at low radio frequencies from quiet Sun regions, which could hold the key to solving this mystery. These features occur at rates of about five hundred events per minute, and their strength is only a few percent of the background steady emission. One of the next steps for exploring the feasibility of this resolution to the coronal heating problem is to understand the morphology of these emissions. To meet this objective, we have developed a technique based on an unsupervised machine learning approach for characterising the morphology of these impulsive emissions. Here we present the results of applying this technique to over 8000 images spanning 70 minutes of data, in which about 34,500 features could robustly be characterised as 2D elliptical Gaussians.
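The characterisation step can be illustrated by a least-squares fit of a 2D elliptical Gaussian to an image cutout around a detected feature. The sketch below uses astropy.modeling as a conceptual stand-in for the unsupervised pipeline described above; the cutout file and initial guesses are assumptions.

```python
# Conceptual sketch: fit a 2D elliptical Gaussian to a cutout around a
# candidate impulsive feature. This stands in for the characterisation
# step described above; it is not the authors' pipeline.
import numpy as np
from astropy.modeling import models, fitting

cutout = np.load("feature_cutout.npy")           # placeholder: small 2D image
y, x = np.mgrid[:cutout.shape[0], :cutout.shape[1]]

# Initial guess centred on the brightest pixel.
y0, x0 = np.unravel_index(np.argmax(cutout), cutout.shape)
g_init = models.Gaussian2D(amplitude=cutout.max(), x_mean=x0, y_mean=y0,
                           x_stddev=2.0, y_stddev=2.0, theta=0.0)

fitter = fitting.LevMarLSQFitter()
g_fit = fitter(g_init, x, y, cutout)

# The fitted parameters give the feature's position, size, ellipticity,
# and orientation.
print(g_fit.amplitude.value, g_fit.x_mean.value, g_fit.y_mean.value,
      g_fit.x_stddev.value, g_fit.y_stddev.value, g_fit.theta.value)
```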
We used a convolutional neural network to infer stellar rotation periods from a set of synthetic light curves simulated with realistic spot evolution patterns. We convolved these simulated light curves with real TESS light curves containing minimal intrinsic astrophysical variability to allow the network to learn TESS systematics and estimate rotation periods despite them. In addition to periods, we predict uncertainties via heteroskedastic regression to estimate the credibility of the period predictions. In the most credible half of the test data, we recover 10%-accurate periods for 46% of the targets, and 20%-accurate periods for 69% of the targets. Using our trained network, we successfully recover periods of real stars with literature rotation measurements, even past the 13.7-day limit generally encountered by TESS rotation searches using conventional period-finding techniques. Our method also demonstrates resistance to half-period aliases. We present the neural network and simulated training data, and introduce the software butterpy, used to synthesize the light curves with realistic star spot evolution.
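A minimal sketch of the heteroskedastic regression idea, under assumed architecture and input sizes that are not the network from the paper: the model outputs both a period estimate and a log-variance for each light curve and is trained with a Gaussian negative log-likelihood, so the predicted variance serves as a per-target uncertainty.

```python
# Minimal sketch of heteroskedastic period regression: the network outputs
# a mean and a log-variance, trained with a Gaussian negative log-likelihood.
# Architecture, shapes, and data are placeholders, not the paper's network.
import torch
import torch.nn as nn

class PeriodCNN(nn.Module):
    def __init__(self, n_points=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Flatten())
        self.head = nn.Linear(32 * (n_points // 16), 2)  # [mean, log_var]

    def forward(self, x):
        out = self.head(self.features(x))
        return out[:, 0], out[:, 1]

def hetero_nll(mean, log_var, target):
    # 0.5 * [log sigma^2 + (y - mu)^2 / sigma^2], averaged over the batch
    return 0.5 * (log_var + (target - mean) ** 2 / torch.exp(log_var)).mean()

model = PeriodCNN()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

flux = torch.randn(8, 1, 1000)     # placeholder batch of light curves
period = torch.rand(8) * 30.0      # placeholder rotation periods (days)

optim.zero_grad()
mean, log_var = model(flux)
loss = hetero_nll(mean, log_var, period)
loss.backward()
optim.step()
```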
We introduce a new machine-learning-based technique to detect exoplanets using the transit method. Machine learning and deep learning techniques have proven to be broadly applicable in various scientific research areas. We aim to exploit some of these methods to improve the conventional algorithm-based approaches presently used in astrophysics to detect exoplanets. Using the time-series analysis library TSFresh to analyse light curves, we extracted 789 features from each curve, which capture the information about the characteristics of a light curve. We then used these features to train a gradient boosting classifier using the machine learning tool lightgbm. This approach was tested on simulated data, which showed that it is more effective than the conventional box least squares fitting (BLS) method. We further found that our method produced comparable results to existing state-of-the-art deep learning models, while being much more computationally efficient and without needing folded and secondary views of the light curves. For Kepler data, the method is able to predict a planet with an AUC of 0.948, so that 94.8 per cent of the true planet signals are ranked higher than non-planet signals. The resulting recall is 0.96, so that 96 per cent of real planets are classified as planets. For the Transiting Exoplanet Survey Satellite (TESS) data, we found our method can classify light curves with an accuracy of 0.98, and is able to identify planets with a recall of 0.82 at a precision of 0.63.
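The feature-extraction-plus-classifier pipeline described above can be sketched as follows: tsfresh extracts summary features from each light curve and a lightgbm classifier is trained on them. Column names, data layout, and hyperparameters here are illustrative assumptions, not the paper's exact settings.

```python
# Illustrative sketch of the pipeline described above: tsfresh feature
# extraction followed by a gradient-boosting classifier. Column names and
# settings are assumptions, not the paper's configuration.
import pandas as pd
from tsfresh import extract_features
from tsfresh.utilities.dataframe_functions import impute
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split

# Long-format table: one row per flux measurement,
# columns: id (light-curve identifier), time, flux.
lc = pd.read_csv("light_curves_long.csv")                         # placeholder
labels = pd.read_csv("labels.csv", index_col="id")["is_planet"]   # placeholder

features = extract_features(lc, column_id="id", column_sort="time",
                            column_value="flux")
impute(features)   # replace NaN/inf values in place

X_train, X_test, y_train, y_test = train_test_split(
    features, labels.loc[features.index], test_size=0.2, random_state=0)

clf = LGBMClassifier(n_estimators=500, learning_rate=0.05)
clf.fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))
```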
A fundamental challenge for wide-field imaging surveys is obtaining follow-up spectroscopic observations: there are $>10^9$ photometrically cataloged sources, yet modern spectroscopic surveys are limited to a few $\times 10^6$ targets. As we approach the Large Synoptic Survey Telescope (LSST) era, new algorithmic solutions are required to cope with the data deluge. Here we report the development of a machine-learning framework capable of inferring fundamental stellar parameters (Teff, log g, and [Fe/H]) using photometric brightness variations and color alone. A training set is constructed from a systematic spectroscopic survey of variables with Hectospec/MMT. In sum, the training set includes ~9000 spectra, for which stellar parameters are measured using the SEGUE Stellar Parameters Pipeline (SSPP). We employed the random forest algorithm to perform a non-parametric regression that predicts Teff, log g, and [Fe/H] from photometric time-domain observations. Our final, optimized model produces a cross-validated root-mean-square error (RMSE) of 165 K, 0.39 dex, and 0.33 dex for Teff, log g, and [Fe/H], respectively. Examining the subset of sources for which the SSPP measurements are most reliable, the RMSE reduces to 125 K, 0.37 dex, and 0.27 dex, respectively, comparable to what is achievable via low-resolution spectroscopy. For variable stars this represents a ~12-20% improvement in RMSE relative to models trained with single-epoch photometric colors. As an application of our method, we estimate stellar parameters for ~54,000 known variables. We argue that this method may convert photometric time-domain surveys into pseudo-spectrographic engines, enabling the construction of extremely detailed maps of the Milky Way, its structure, and history.
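To make the regression step concrete, the sketch below trains a random forest on time-domain photometric features to predict Teff, log g, and [Fe/H] and reports a cross-validated RMSE per parameter. The feature files and hyperparameters are placeholders rather than the survey's actual feature set.

```python
# Sketch of the non-parametric regression step: a random forest mapping
# photometric time-domain features to (Teff, log g, [Fe/H]), scored with a
# cross-validated RMSE. Features and settings are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

# X: (n_stars, n_features) light-curve features (amplitudes, periods, colors, ...)
# y: (n_stars, 3) spectroscopic labels [Teff, log g, [Fe/H]]
X = np.load("variability_features.npy")   # placeholder file
y = np.load("sspp_labels.npy")            # placeholder file

rf = RandomForestRegressor(n_estimators=300, n_jobs=-1, random_state=0)
y_pred = cross_val_predict(rf, X, y, cv=10)

rmse = np.sqrt(((y_pred - y) ** 2).mean(axis=0))
print("cross-validated RMSE [Teff, log g, [Fe/H]]:", rmse)
```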
In the era of vast spectroscopic surveys focusing on Galactic stellar populations, astronomers want to exploit the large quantity and good quality of data to derive their atmospheric parameters without losing precision from automatic procedures. In this work, we developed a new spectral package, FASMA, to estimate the stellar atmospheric parameters (namely effective temperature, surface gravity, and metallicity) in a fast and robust way. This method is suitable for spectra of FGK-type stars at medium and high resolution. The spectroscopic analysis is based on the spectral synthesis technique using the radiative transfer code MOOG. The line list comprises mainly iron lines in the optical spectrum. The atomic data are calibrated against the Sun and Arcturus. We use two comparison samples to test our method: i) a sample of 451 FGK-type dwarfs from the high-resolution HARPS spectrograph, and ii) the Gaia-ESO benchmark stars, using both high- and medium-resolution spectra. We explore biases in our method from the analysis of synthetic spectra covering the parameter space of interest. We show that our spectral package is able to provide reliable results for a wide range of stellar parameters, different rotational velocities, different instrumental resolutions, and different spectral regions of the VLT-GIRAFFE spectrographs, used among others for the Gaia-ESO survey. FASMA estimates stellar parameters in less than 15 min for high-resolution and 3 min for medium-resolution spectra. The complete package is publicly available to the community.
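To illustrate the spectral synthesis approach in general terms, the sketch below minimises a chi-square between an observed spectrum and synthetic spectra produced by a placeholder synthesize() function, which stands in for a wrapper around a radiative transfer code such as MOOG. This is a conceptual outline only; FASMA's own minimisation and line selection are more sophisticated.

```python
# Conceptual outline of spectral-synthesis fitting: minimise the chi-square
# between observed and synthetic fluxes over (Teff, log g, [Fe/H], vmicro).
# synthesize() is a hypothetical placeholder for a radiative-transfer call;
# it is NOT FASMA's implementation.
import numpy as np
from scipy.optimize import minimize

def synthesize(teff, logg, feh, vmicro, wavelengths):
    """Placeholder: return a synthetic flux array for these parameters."""
    raise NotImplementedError("wrap your radiative-transfer code (e.g. MOOG) here")

def chi_square(theta, wavelengths, flux_obs, flux_err):
    teff, logg, feh, vmicro = theta
    flux_syn = synthesize(teff, logg, feh, vmicro, wavelengths)
    return np.sum(((flux_obs - flux_syn) / flux_err) ** 2)

# Observed spectrum (placeholder arrays).
wavelengths = np.load("wave.npy")
flux_obs = np.load("flux.npy")
flux_err = np.load("flux_err.npy")

x0 = np.array([5750.0, 4.4, 0.0, 1.0])   # initial guess: Teff, log g, [Fe/H], vmicro
result = minimize(chi_square, x0, args=(wavelengths, flux_obs, flux_err),
                  method="Nelder-Mead")
print("best-fit Teff, log g, [Fe/H], vmicro:", result.x)
```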